pax_global_header00006660000000000000000000000064145760573060014527gustar00rootroot0000000000000052 comment=eef25264e5ca5f96a77129308edb83ccf84cb1b1 varnish-7.5.0/000077500000000000000000000000001457605730600132125ustar00rootroot00000000000000varnish-7.5.0/.circleci/000077500000000000000000000000001457605730600150455ustar00rootroot00000000000000varnish-7.5.0/.circleci/Dockerfile000066400000000000000000000003731457605730600170420ustar00rootroot00000000000000FROM centos:7 RUN set -e;\ yum install -y epel-release; \ yum install -y \ automake \ git \ jemalloc-devel \ libedit-devel \ libtool \ libunwind-devel \ make \ pcre2-devel \ python3 \ python-sphinx varnish-7.5.0/.circleci/Dockerfile.alpine000066400000000000000000000004571457605730600203140ustar00rootroot00000000000000FROM alpine RUN set -e; \ apk update; \ apk add -q \ autoconf \ automake \ build-base \ ca-certificates \ cpio \ git \ gzip \ libedit-dev \ libtool \ libunwind-dev \ linux-headers \ pcre2-dev \ py-docutils \ py3-sphinx \ tar varnish-7.5.0/.circleci/Dockerfile.archlinux000066400000000000000000000004301457605730600210300ustar00rootroot00000000000000FROM archlinux:base-devel RUN set -e; \ pacman -Sy --noconfirm \ ca-certificates \ cpio \ git \ libedit \ libtool \ libunwind \ linux-headers \ pcre2 \ python-docutils \ python-sphinx \ tar varnish-7.5.0/.circleci/Dockerfile.ubuntu000066400000000000000000000006551457605730600203660ustar00rootroot00000000000000FROM ubuntu RUN set -e; \ export DEBIAN_FRONTEND=noninteractive; \ export DEBCONF_NONINTERACTIVE_SEEN=true; \ apt-get update; \ apt-get install -y \ autoconf \ automake \ build-essential \ ca-certificates \ cpio \ git \ graphviz \ libedit-dev \ libjemalloc-dev \ libncurses-dev \ libpcre2-dev \ libtool \ libunwind-dev \ pkg-config \ python3-sphinx varnish-7.5.0/.circleci/README.rst000066400000000000000000000075451457605730600165470ustar00rootroot00000000000000.. Copyright (c) 2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Multiarch building, testing & packaging ======================================= Varnish Cache uses CircleCI_ for building, testing and creating packages for several Linux distributions for both x86_64 and aarch64 architectures. Since CircleCI provides only x86_64 VMs the setup uses Docker and QEMU to be able to build, test and create packages for aarch64. This is accomplished by registering ``qemu-user-static`` for the CircleCI ``machine`` executor:: sudo docker run --rm --privileged multiarch/qemu-user-static --reset --credential yes --persistent yes Note 1: **--credential yes** is needed so that *setuid* flag is working. Without it ``sudo`` does not work in the Docker containers with architecture different than x86_64. Note 2: **--persistent yes** is needed so that there is no need to use ``:register`` tag. This way one can run locally pure foreign arch Docker images, like the official ``arm64v8/***`` ones. With QEMU registered each build step can start a Docker image for any of the supported architectures to execute the ``configure``, ``make``, package steps. Workflows --------- There are two CircleCI workflows: commit ~~~~~~ It is executed after each push to any branch, including Pull Requests The ``commit`` workflow runs two jobs: - ``dist`` - this job creates the source code distribution of Varnish Cache as compressed archive (``varnish-${VERSION}.tar.gz``). 
- ``distcheck`` - untars the source code distribution from ``dist`` job and builds (*configure*, *make*) on different Linux distributions nightly ~~~~~~~ It is executed once per day at 04:00 AM UTC time. This workflow also builds binary packages for different Linux distributions and CPU architectures (x86_64 & aarch64) and for this reason its run takes longer. It runs the following jobs: - The first two jobs that run in parallel are: - ``tar_pkg_tools`` - this step checks out pkg-varnish-cache_ with the packaging descriptions for Debian, RedHat and Alpine, and stores them in the build workspace for the next steps in the pipeline. - ``dist`` - this step creates the source code distribution of Varnish Cache as compressed archive (``varnish-${VERSION}.tar.gz``). This archive is also stored in the build workspace and used later by the packaging steps. - The next job in the workflow is ``package`` - a job that creates the packages (e.g. .rpm, .deb) for each supported CPU architecture, Linux distribution and its major version (e.g. *x64_centos_7*, *aarch64_ubuntu_bionic*, *x64_alpine_3*, etc.). This step creates a Dockerfile on the fly by using a base Docker image. This custom Docker image executes a Shell script that has the recipe for creating the package for the specific Linux flavor, e.g. *make-rpm-packages.sh*. The step stores the packages in the build workspace. - Finally, if the previous jobs are successful, a final step is executed - ``collect_packages``. This step creates an archive with all packages and stores it as an artifact that can be uploaded to PackageCloud_. More ---- This setup can be easily extended for any CPU architectures supported by QEMU and for any Linux distributions which have Docker image. To do this one needs to add a new ``package`` job with the proper parameters for it. At the moment the setup uses *raw* Docker images and installs the required Linux distribution dependencies before running the tests/build/packaging code. This could be optimized to save some execution time by creating custom Docker images that extend the current ones and pre-installs the required dependencies. .. _CircleCI: https://app.circleci.com/pipelines/github/varnishcache/varnish-cache .. _pkg-varnish-cache: https://github.com/varnishcache/pkg-varnish-cache .. 
_PackageCloud: https://packagecloud.io/varnishcache/ varnish-7.5.0/.circleci/config.yml000066400000000000000000000400401457605730600170330ustar00rootroot00000000000000version: 2.1 parameters: vc-commit: type: string default: "HEAD" pkg-commit: type: string default: "master" dist-url: type: string default: "" dist-url-sha256: type: string default: "" configure_args: type: string default: | --with-contrib \ --with-unwind \ --enable-developer-warnings \ --enable-debugging-symbols \ --disable-stack-protector \ --with-persistent-storage \ build-pkgs: type: string default: "" jobs: dist: description: Build or download varnish-x.y.z.tar.gz that is used later for the packaging jobs docker: - image: centos:7 steps: - run: name: Install deps command: | yum install -y epel-release yum install -y \ automake \ jemalloc-devel \ git \ libedit-devel \ libtool \ libunwind-devel \ make \ pcre2-devel \ python3 \ python-sphinx - checkout - when: condition: << pipeline.parameters.dist-url >> steps: - run: name: Download the dist tarball command: | curl -Ls '<< pipeline.parameters.dist-url >>' -o varnish-dist.tar.gz - when: condition: << pipeline.parameters.dist-url-sha256 >> steps: - run: name: Verify downloaded tarball command: | echo "<< pipeline.parameters.dist-url-sha256 >> varnish-dist.tar.gz" | sha256sum -c - run: name: Rename the dist tarball by parsed version command: | mkdir parse-version-tmp cd parse-version-tmp tar xzf ../varnish-dist.tar.gz VERSION=$(varnish-*/configure --version | awk 'NR == 1 {print $NF}') cd .. mv -v varnish-dist.tar.gz varnish-${VERSION}.tar.gz - unless: condition: << pipeline.parameters.dist-url >> steps: - run: name: Create the dist tarball command: | git checkout << pipeline.parameters.vc-commit >> # Locally built tarballs are always built with weekly in package name touch .is_weekly # If version is "trunk", override version to add date if grep 'AC_INIT.*trunk.*' ./configure.ac; then sed -i -e "s/AC_INIT(\[\(.*\)\],\[\(.*\)\],\[\(.*\)\])/AC_INIT([\1],[$(date +%Y%m%d)],[\3])/" ./configure.ac else sed -i -e "s/AC_INIT(\[\(.*\)\],\[\(.*\)\],\[\(.*\)\])/AC_INIT([\1],[\2-$(date +%Y%m%d)],[\3])/" ./configure.ac fi ./autogen.des make dist -j 16 - persist_to_workspace: root: . paths: - .is_weekly - varnish*.tar.gz - tools/*.suppr - .circleci tar_pkg_tools: description: Builds archives with the packaging tools from https://github.com/varnishcache/pkg-varnish-cache docker: - image: centos:7 steps: - add_ssh_keys: fingerprints: - "11:ed:57:75:32:81:9d:d0:a4:5e:af:15:4b:d8:74:27" - run: name: Grab the pkg repo command: | yum install -y git mkdir -p ~/.ssh ssh-keyscan -H github.com >> ~/.ssh/known_hosts echo ${CIRCLE_REPOSITORY_URL} git clone https://github.com/varnishcache/pkg-varnish-cache.git . git checkout << pipeline.parameters.pkg-commit >> tar cvzf debian.tar.gz debian --dereference tar cvzf redhat.tar.gz redhat --dereference tar cvzf alpine.tar.gz alpine --dereference - persist_to_workspace: root: . paths: - debian.tar.gz - redhat.tar.gz - alpine.tar.gz package: parameters: platform: description: the Linux distribution, with release, e.g. 
debian:buster, centos:7 type: string rclass: description: the resource class to use, usuall arm.medium or medium type: string machine: image: ubuntu-2004:202111-02 resource_class: << parameters.rclass >> steps: - attach_workspace: at: ~/project - when: condition: matches: pattern: ^alpine.* value: << parameters.platform >> steps: - run: # https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.14.0#faccessat2 name: grab the latest docker version command: | # using https://docs.docker.com/engine/install/ubuntu/ sudo apt-get update sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg case "<< parameters.rclass >>" in arm.*) ARCH=arm64;; *) ARCH=amd64;; esac echo \ "deb [signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \ $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null sudo apt-get update sudo apt-get install docker-ce docker-ce-cli containerd.io - run: name: Build for << parameters.platform >> on << parameters.rclass >> command: | mkdir -p packages case "<< parameters.platform >>" in debian:*|ubuntu:*) EXT=deb ;; centos:*|fedora:*) EXT=rpm ;; almalinux:*) EXT=rpm ;; alpine:*) EXT=apk ;; *) echo "unrecognized platform: << parameters.platform >>" exit 1 esac case "<< parameters.platform >>" in centos:stream) REPO=quay.io/centos/ ;; *) REPO= ;; esac case "<< parameters.rclass >>" in arm.*) ARCH=arm64 ;; *) ARCH=amd64 ;; esac docker run \ --rm \ -it \ --security-opt seccomp=unconfined \ -e PARAM_DIST=$(echo "<< parameters.platform >>" | cut -d: -f1) \ -e PARAM_RELEASE=$(echo "<< parameters.platform >>" | cut -d: -f2) \ -v$(pwd):/varnish-cache \ --platform linux/$ARCH \ ${REPO}<< parameters.platform >> \ /varnish-cache/.circleci/make-$EXT-packages.sh - run: name: List created packages command: find ./packages -type f - persist_to_workspace: root: . 
paths: - "packages" build: parameters: prefix: description: the container image prefix (repository or architecture) type: string default: "" dist: description: the Linux distribution (debian|ubuntu) type: string release: description: the release name (buster|bullseye|bookworm|focal|jammy|noble) type: string make_target: description: the make target to execute during the build default: distcheck type: string extra_conf: description: platform-specific configure arguments default: "" type: string docker: - image: centos:7 working_directory: /workspace steps: - setup_remote_docker - run: name: Install docker command: yum install -y docker - checkout - run: name: Extract and build command: | docker create --name workspace -v /workspace << parameters.prefix >><< parameters.dist >>:<< parameters.release >> /bin/true docker cp /workspace workspace:/ docker run --volumes-from workspace -w /workspace << parameters.prefix >><< parameters.dist >>:<< parameters.release >> sh -c ' case "<< parameters.dist >>" in centos|almalinux|fedora) yum groupinstall -y "Development Tools" case "<< parameters.dist >>:<< parameters.release >>" in almalinux:9) dnf install -y "dnf-command(config-manager)" yum config-manager --set-enabled crb yum install -y diffutils yum install -y epel-release ;; centos:stream|almalinux:8) dnf install -y "dnf-command(config-manager)" yum config-manager --set-enabled powertools yum install -y diffutils yum install -y epel-release ;; centos:7) yum install -y epel-release ;; esac yum install -y \ cpio \ automake \ git \ jemalloc-devel \ libedit-devel \ libtool \ libunwind-devel \ make \ pcre2-devel \ python3 \ /usr/bin/sphinx-build \ sudo ;; debian|ubuntu) export DEBIAN_FRONTEND=noninteractive export DEBCONF_NONINTERACTIVE_SEEN=true apt-get update apt-get install -y \ autoconf \ automake \ build-essential \ ca-certificates \ cpio \ git \ graphviz \ libedit-dev \ libjemalloc-dev \ libncurses-dev \ libpcre2-dev \ libtool \ libunwind-dev \ pkg-config \ python3-sphinx \ sudo ;; alpine) apk update apk add -q \ autoconf \ automake \ build-base \ ca-certificates \ cpio \ git \ gzip \ libedit-dev \ libtool \ libunwind-dev \ linux-headers \ pcre2-dev \ py-docutils \ py3-sphinx \ tar \ sudo ;; archlinux) pacman -Syu --noconfirm \ ca-certificates \ cpio \ git \ libedit \ libtool \ libunwind \ linux-headers \ pcre2 \ python-docutils \ python-sphinx \ tar ;; esac case "<< parameters.dist >>" in archlinux) useradd varnish ;; centos|almalinux|fedora) adduser varnish ;; *) adduser --disabled-password --gecos "" varnish ;; esac chown -R varnish:varnish . 
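# Sanitizer runtime options: exported for every build, but they only take
# effect for jobs configured with sanitizers (e.g. build_ubuntu_noble adds
# --enable-asan --enable-ubsan). The suppression lists referenced below are
# shipped in the source tree under tools/.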
export ASAN_OPTIONS=abort_on_error=1,detect_odr_violation=1,detect_leaks=1,detect_stack_use_after_return=1,detect_invalid_pointer_pairs=1,handle_segv=0,handle_sigbus=0,use_sigaltstack=0,disable_coredump=0 export LSAN_OPTIONS=abort_on_error=1,use_sigaltstack=0,suppressions=$(pwd)/tools/lsan.suppr export TSAN_OPTIONS=abort_on_error=1,halt_on_error=1,use_sigaltstack=0,suppressions=$(pwd)/tools/tsan.suppr export UBSAN_OPTIONS=halt_on_error=1,print_stacktrace=1,use_sigaltstack=0,suppressions=$(pwd)/tools/ubsan.suppr sudo -u varnish \ autoreconf -i -v sudo -u varnish \ ./configure \ << pipeline.parameters.configure_args >> \ << parameters.extra_conf >> sudo -u varnish \ --preserve-env=ASAN_OPTIONS,LSAN_OPTIONS,TSAN_OPTIONS,UBSAN_OPTIONS \ make -j 4 -k << parameters.make_target >> VERBOSE=1 \ DISTCHECK_CONFIGURE_FLAGS="<< pipeline.parameters.configure_args >> \ << parameters.extra_conf >>" ' collect_packages: docker: - image: centos:7 steps: - attach_workspace: at: ~/project - run: ls -la ~/project/ - run: name: Tar the packages command: | tar cvzf packages.tar.gz packages - store_artifacts: destination: packages.tar.gz path: packages.tar.gz workflows: version: 2 commit: unless: &packaging_cond or: - << pipeline.parameters.build-pkgs >> - << pipeline.parameters.dist-url >> jobs: - build: name: build_centos_7 dist: centos release: "7" - build: name: build_centos_stream prefix: quay.io/centos/ dist: centos release: stream - build: name: build_almalinux_8 dist: almalinux release: "8" - build: name: build_almalinux_9 dist: almalinux release: "9" # fedora is our witness - build: name: build_fedora_latest dist: fedora release: latest make_target: witness.dot # oldest debian goes 32bit - build: name: build_debian_buster dist: debian release: buster prefix: i386/ - build: name: build_debian_bullseye dist: debian release: bullseye - build: name: build_debian_bookworm dist: debian release: bookworm # latest ubuntu uses sanitizers - build: name: build_ubuntu_focal dist: ubuntu release: focal - build: name: build_ubuntu_jammy dist: ubuntu release: jammy - build: name: build_ubuntu_noble dist: ubuntu release: noble extra_conf: --enable-asan --enable-ubsan --enable-workspace-emulator make_target: check -j2 - build: name: build_alpine dist: alpine release: "latest" extra_conf: --without-contrib make_target: check - build: name: build_archlinux dist: archlinux release: base-devel packaging: when: *packaging_cond jobs: &packaging_jobs - dist - tar_pkg_tools - package: name: << matrix.platform >> packages (<< matrix.rclass >>) requires: - dist - tar_pkg_tools matrix: parameters: platform: - ubuntu:focal - ubuntu:jammy - ubuntu:noble - debian:buster - debian:bullseye - debian:bookworm - centos:7 - centos:stream - almalinux:8 - almalinux:9 - fedora:latest - alpine:3 rclass: - arm.medium - medium - collect_packages: requires: - package nightly: triggers: - schedule: cron: "0 4 * * *" filters: branches: only: - master jobs: *packaging_jobs varnish-7.5.0/.circleci/make-apk-packages.sh000077500000000000000000000027431457605730600206540ustar00rootroot00000000000000#!/usr/bin/env sh set -eux apk add -q --no-progress --update tar alpine-sdk sudo echo "PARAM_RELEASE: $PARAM_RELEASE" echo "PARAM_DIST: $PARAM_DIST" if [ -z "$PARAM_RELEASE" ]; then echo "Env variable PARAM_RELEASE is not set! For example PARAM_RELEASE=8, for CentOS 8" exit 1 elif [ -z "$PARAM_DIST" ]; then echo "Env variable PARAM_DIST is not set! 
For example PARAM_DIST=centos" exit 1 fi cd /varnish-cache tar xazf alpine.tar.gz --strip 1 adduser -D builder echo "builder ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers addgroup builder abuild mkdir -p /var/cache/distfiles chmod -R a+w /var/cache/distfiles echo "Generate key" su builder -c "abuild-keygen -nai" echo "Fix APKBUILD's variables" tar xavf varnish-*.tar.gz VERSION=$(varnish-*/configure --version | awk 'NR == 1 {print $NF}') echo "Version: $VERSION" sed -i "s/@VERSION@/$VERSION/" APKBUILD rm -rf varnish-*/ echo "Change the ownership so that abuild is able to write its logs" chown builder -R . echo "Fix checksums, build" su builder -c "abuild checksum" su builder -c "abuild -r" echo "Fix the APKBUILD's version" su builder -c "mkdir apks" ARCH=`uname -m` su builder -c "cp /home/builder/packages/$ARCH/*.apk apks" echo "Import the packages into the workspace" mkdir -p packages/$PARAM_DIST/$PARAM_RELEASE/$ARCH/ mv /home/builder/packages/$ARCH/*.apk packages/$PARAM_DIST/$PARAM_RELEASE/$ARCH/ echo "Allow to read the packages by 'circleci' user outside of Docker after 'chown builder -R .' above" chmod -R a+rwx . varnish-7.5.0/.circleci/make-deb-packages.sh000077500000000000000000000031241457605730600206250ustar00rootroot00000000000000#!/usr/bin/env bash set -eux export DEBIAN_FRONTEND=noninteractive export DEBCONF_NONINTERACTIVE_SEEN=true apt-get update apt-get install -y dpkg-dev debhelper devscripts equivs pkg-config apt-utils fakeroot echo "PARAM_RELEASE: $PARAM_RELEASE" echo "PARAM_DIST: $PARAM_DIST" if [ -z "$PARAM_RELEASE" ]; then echo "Env variable PARAM_RELEASE is not set! For example PARAM_RELEASE=focal for Ubuntu 20.04" exit 1 elif [ -z "$PARAM_DIST" ]; then echo "Env variable PARAM_DIST is not set! For example PARAM_DIST=debian" exit 1 fi # Ubuntu 20.04 aarch64 fails when using fakeroot-sysv with: # semop(1): encountered an error: Function not implemented update-alternatives --set fakeroot /usr/bin/fakeroot-tcp cd /varnish-cache ls -la echo "Untar debian..." tar xavf debian.tar.gz echo "Untar orig..." tar xavf varnish-*.tar.gz --strip 1 echo "Update changelog version..." if [ -e .is_weekly ]; then WEEKLY='-weekly' else WEEKLY= fi VERSION=$(./configure --version | awk 'NR == 1 {print $NF}')$WEEKLY-1~$PARAM_RELEASE sed -i -e "s|@VERSION@|$VERSION|" "debian/changelog" echo "Install Build-Depends packages..." yes | mk-build-deps --install debian/control || true echo "Build the packages..." dpkg-buildpackage -us -uc -j16 echo "Prepare the packages for storage..." mkdir -p packages/$PARAM_DIST/$PARAM_RELEASE/ mv ../*.deb packages/$PARAM_DIST/$PARAM_RELEASE/ if [ "`uname -m`" = "x86_64" ]; then ARCH="amd64" else ARCH="arm64" fi DSC_FILE=$(ls ../*.dsc) DSC_FILE_WO_EXT=$(basename ${DSC_FILE%.*}) mv $DSC_FILE packages/$PARAM_DIST/$PARAM_RELEASE/${DSC_FILE_WO_EXT}_${ARCH}.dsc varnish-7.5.0/.circleci/make-rpm-packages.sh000077500000000000000000000040361457605730600206740ustar00rootroot00000000000000#!/usr/bin/env bash set -eux echo "PARAM_RELEASE: $PARAM_RELEASE" echo "PARAM_DIST: $PARAM_DIST" if [ -z "$PARAM_RELEASE" ]; then echo "Env variable PARAM_RELEASE is not set! For example PARAM_RELEASE=stream, for CentOS stream" exit 1 elif [ -z "$PARAM_DIST" ]; then echo "Env variable PARAM_DIST is not set! 
For example PARAM_DIST=centos" exit 1 fi case "$PARAM_DIST:$PARAM_RELEASE" in almalinux:9) dnf install -y 'dnf-command(config-manager)' yum config-manager --set-enabled crb yum install -y epel-release ;; centos:stream|almalinux:8) dnf install -y 'dnf-command(config-manager)' yum config-manager --set-enabled powertools yum install -y epel-release ;; centos:7) yum install -y epel-release ;; esac yum install -y rpm-build yum-utils export DIST_DIR=build cd /varnish-cache rm -rf $DIST_DIR mkdir $DIST_DIR echo "Untar redhat..." tar xavf redhat.tar.gz -C $DIST_DIR echo "Untar orig..." tar xavf varnish-*.tar.gz -C $DIST_DIR --strip 1 echo "Build Packages..." if [ -e .is_weekly ]; then WEEKLY='.weekly' else WEEKLY= fi VERSION=$("$DIST_DIR"/configure --version | awk 'NR == 1 {print $NF}')$WEEKLY cp -r -L "$DIST_DIR"/redhat/* "$DIST_DIR"/ tar zcf "$DIST_DIR.tgz" --exclude "$DIST_DIR/redhat" "$DIST_DIR"/ RPMVERSION="$VERSION" RESULT_DIR="rpms" CUR_DIR="$(pwd)" rpmbuild() { command rpmbuild \ --define "_smp_mflags -j10" \ --define "_sourcedir $CUR_DIR" \ --define "_srcrpmdir $CUR_DIR/${RESULT_DIR}" \ --define "_rpmdir $CUR_DIR/${RESULT_DIR}" \ --define "versiontag ${RPMVERSION}" \ --define "releasetag 1" \ --define "srcname $DIST_DIR" \ --define "nocheck 1" \ "$@" } yum-builddep -y "$DIST_DIR"/redhat/varnish.spec rpmbuild -bs "$DIST_DIR"/redhat/varnish.spec rpmbuild --rebuild "$RESULT_DIR"/varnish-*.src.rpm echo "Prepare the packages for storage..." mkdir -p packages/$PARAM_DIST/$PARAM_RELEASE/ mv rpms/*/*.rpm packages/$PARAM_DIST/$PARAM_RELEASE/ varnish-7.5.0/.dir-locals.el000066400000000000000000000001021457605730600156340ustar00rootroot00000000000000((c-mode . ((indent-tabs-mode . t) (c-file-style . "BSD")))) varnish-7.5.0/.editorconfig000066400000000000000000000002771457605730600156750ustar00rootroot00000000000000root = true [*] charset = utf-8 max_line_length = 80 end_of_line = lf trim_trailing_whitespace = true insert_final_newline = true [*.{c,h}] indent_style = tab indent_size = 8 tab_width = 8 varnish-7.5.0/.github/000077500000000000000000000000001457605730600145525ustar00rootroot00000000000000varnish-7.5.0/.github/ISSUE_TEMPLATE.md000066400000000000000000000003171457605730600172600ustar00rootroot00000000000000 varnish-7.5.0/.github/ISSUE_TEMPLATE/000077500000000000000000000000001457605730600167355ustar00rootroot00000000000000varnish-7.5.0/.github/ISSUE_TEMPLATE/bug-report.yml000066400000000000000000000047131457605730600215530ustar00rootroot00000000000000name: Bug report description: Create a report to help us improve body: - type: markdown attributes: value: |+ Did you check that there are no similar bug reports or pull requests? If your panic happens in the child_sigsegv_handler function, look at the backtrace to determine whether it is similar to another issue. When in doubt, open a new one and it will be closed as a duplicate if needed. If it's a packaging bug (including sysv or systemd services bugs) please open an issue on [varnishcache/pkg-varnish-cache](https://github.com/varnishcache/pkg-varnish-cache) instead. If it's a feature request, please start a thread on the [varnish-misc](https://varnish-cache.org/support/index.html#mailing-lists) list instead. - type: textarea attributes: label: Expected Behavior description: |+ If you're describing a bug, tell us what should happen. If you're suggesting a change/improvement, tell us how it should work. 
validations: required: true - type: textarea attributes: label: Current Behavior description: |+ If describing a bug, tell us what happens instead of the expected behavior. If suggesting a change/improvement, explain the difference from current behavior. validations: required: true - type: textarea attributes: label: Possible Solution description: |+ Not obligatory, but suggest a fix/reason for the bug, or ideas how to implement the addition or change - type: textarea attributes: label: Steps to Reproduce (for bugs) description: |+ Provide a link to a live example, or an unambiguous set of steps to reproduce this bug. Include code to reproduce, if relevant. placeholder: |+ 1. 2. 3. 4. - type: textarea attributes: label: Context description: |+ How has this issue affected you? What are you trying to accomplish? Providing context helps us come up with a solution that is most useful in the real world. validations: required: true - type: input attributes: label: Varnish Cache version description: |+ The version can be obtained by "varnishd -V". placeholder: "varnishd (varnish-7.3.0 revision 84d79120b6d17b11819a663a93160743f293e63f)" - type: input attributes: placeholder: Ubuntu22.04 label: Operating system - type: input attributes: label: Source of binary packages used (if any) placeholder: https://packagecloud.io/varnishcache/ varnish-7.5.0/.github/ISSUE_TEMPLATE/config.yml000066400000000000000000000005441457605730600207300ustar00rootroot00000000000000blank_issues_enabled: false contact_links: - name: Getting Help url: https://varnish-cache.org/support/index.html about: If you have questions or need help, please click here. - name: Report a security vulnerability url: https://varnish-cache.org/security/index.html#i-have-found-a-security-hole about: Report a security vulnerability. 
varnish-7.5.0/.github/workflows/000077500000000000000000000000001457605730600166075ustar00rootroot00000000000000varnish-7.5.0/.github/workflows/cifuzz.yml000066400000000000000000000013511457605730600206440ustar00rootroot00000000000000name: CIFuzz on: [pull_request] jobs: Fuzzing: if: github.repository_owner == 'varnishcache' runs-on: ubuntu-latest steps: - name: Build Fuzzers id: build uses: google/oss-fuzz/infra/cifuzz/actions/build_fuzzers@master with: oss-fuzz-project-name: 'varnish' dry-run: false language: c - name: Run Fuzzers uses: google/oss-fuzz/infra/cifuzz/actions/run_fuzzers@master with: oss-fuzz-project-name: 'varnish' fuzz-seconds: 600 dry-run: false language: c - name: Upload Crash uses: actions/upload-artifact@v1 if: failure() && steps.build.outcome == 'success' with: name: artifacts path: ./out/artifacts varnish-7.5.0/.github/workflows/coverity.yml000066400000000000000000000015531457605730600212020ustar00rootroot00000000000000name: Coverity Scan on: schedule: - cron: 0 8 * * MON workflow_dispatch: jobs: coverity: if: github.repository_owner == 'varnishcache' runs-on: ubuntu-latest steps: - run: | sudo apt-get install -y \ autoconf \ automake \ build-essential \ ca-certificates \ cpio \ libedit-dev \ libjemalloc-dev \ libncurses-dev \ libpcre2-dev \ libtool \ libunwind-dev \ pkg-config \ python3-sphinx - uses: actions/checkout@v2 - run: ./autogen.sh - run: ./configure --with-persistent-storage - uses: vapier/coverity-scan-action@v0 with: project: varnish email: varnish-dev@varnish-cache.org token: ${{ secrets.COVERITY_SCAN_TOKEN }} varnish-7.5.0/.gitignore000066400000000000000000000046621457605730600152120ustar00rootroot00000000000000# Matches ALL Makefile and Makefile.in occurrences Makefile Makefile.in # ... _* .deps/ .libs/ *.o *.a *.lo *.la *~ *.sw[op] *.trs *.log # Various auto-tools artifacts /aclocal.m4 /autom4te.cache/ /build-aux /compile /config.guess /config.h /config.h.in /config.status /config.sub /configure /configure.lineno /depcomp /install-sh /libtool /ltmain.sh /m4/libtool.m4 /m4/ltoptions.m4 /m4/ltsugar.m4 /m4/ltversion.m4 /m4/lt~obsolete.m4 /missing /stamp-h1 /varnishapi.pc /varnishapi-uninstalled.pc TAGS tags cscope.*out .dirstamp # Default vcl made from bin/varnishd/default.vcl /bin/varnishd/builtin_vcl.c /etc/builtin.vcl # Various auto-generated code snippets /bin/varnishd/vhp_hufdec.h /bin/varnishd/vhp_gen_hufdec /bin/varnishtest/vtc_h2_dectbl.h /include/vcl.h /include/vrt_obj.h /include/vmod_abi.h /include/tbl/vcl_returns.h /include/tbl/vrt_stv_var.h /include/tbl/vcl_context.h /include/vcs_version.h /lib/libvcc/vcc_fixed_token.c /lib/libvcc/vcc_obj.c /lib/libvcc/vcc_token_defs.h /lib/libvcc/vcc_types.h /lib/libvarnishapi/vsl2rst /lib/libvarnishapi/vxp_fixed_token.c /lib/libvarnishapi/vxp_tokens.h # Stats /lib/libvsc/VSC_*.c /lib/libvsc/VSC_*.h /lib/libvsc/VSC_*.rst /lib/libvsc/counters.rst # Misc. generated files for included vmods. 
/vmod/VSC_*.c /vmod/VSC_*.h /vmod/vcc_*_if.c /vmod/vcc_*_if.h /vmod/vmod_*.rst # Man-files and binaries /man/*.1 /man/*.3 /man/*.7 /doc/sphinx/include /bin/varnishadm/varnishadm /bin/varnishd/varnishd /bin/varnishhist/varnishhist /bin/varnishlog/varnishlog /bin/varnishncsa/varnishncsa /bin/varnishstat/varnishstat /bin/varnishstat/varnishstat_help_gen /bin/varnishstat/varnishstat_curses_help.c /bin/varnishstat/vsc2rst /bin/varnishtest/teken_state.h /bin/varnishtest/varnishtest /bin/varnishtop/varnishtop # Doc-stuff generated from xml /doc/*.html /doc/sphinx/build/ /doc/sphinx/conf.py /doc/sphinx/reference/vmod_*.generated.rst # graphviz-generated /doc/graphviz/*.pdf # NetBeans insists on this /nbproject/private/ # Test droppings /bin/varnishd/*_test /bin/varnishtest/tests/*.log-t /include/vrt_test* /include/vbm_test /lib/libvarnish/*_test /lib/libvarnishapi/*_test # GCOV droppings *.gcda *.gcno # vtc-bisect.sh default vtc /bisect.vtc # vtest.sh droppings /tools/tmp/ /tools/_vtest_tmp/ /tools/varnish-cache/ /tools/vt_key /tools/vt_key.pub # fuzzers /bin/varnishd/esi_parse_fuzzer # Coverity output /cov-int /myproject.tgz # Witness droppings witness.dot witness.svg # Flexelint droppings _.fl _.fl.old varnish-7.5.0/.lgtm.yml000066400000000000000000000002531457605730600147560ustar00rootroot00000000000000queries: - exclude: cpp/empty-block - exclude: cpp/missing-header-guard - exclude: cpp/short-global-name extraction: cpp: prepare: packages: "python3.7" varnish-7.5.0/.syntastic_c_config000066400000000000000000000000621457605730600170610ustar00rootroot00000000000000let g:syntastic_c_include_dirs = [".", "include"] varnish-7.5.0/.travis.yml000066400000000000000000000110141457605730600153200ustar00rootroot00000000000000--- language: c jobs: allow_failures: - os: osx - stage: sanitizers - arch: arm64 fast_finish: true include: - &test-linux stage: test os: linux dist: bionic arch: amd64 compiler: clang addons: apt: packages: - nghttp2 - python3-docutils - python3-sphinx - libunwind-dev - libpcre2-dev before_script: - ./autogen.sh - ./configure --enable-maintainer-mode --with-unwind script: &script-common - make -j16 check VERBOSE=1 - <<: *test-linux arch: arm64 - <<: *test-linux compiler: gcc before_script: - ./autogen.sh - ./configure --enable-maintainer-mode - <<: *test-linux compiler: gcc arch: arm64 before_script: - ./autogen.sh - ./configure --enable-maintainer-mode - <<: *test-linux env: WITNESS=1 script: make -j16 witness VERBOSE=1 - <<: *test-linux stage: sanitizers addons: apt: sources: - sourceline: 'deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-9 main' key_url: https://apt.llvm.org/llvm-snapshot.gpg.key - ubuntu-toolchain-r-test packages: - clang-9 - libunwind-dev - llvm-9 - nghttp2 - python3-docutils - python3-sphinx - libpcre2-dev env: ASAN=1 UBSAN=1 before_script: - | export ASAN_OPTIONS=abort_on_error=1,detect_odr_violation=1,detect_leaks=1,detect_stack_use_after_return=1,detect_invalid_pointer_pairs=1,handle_segv=0,handle_sigbus=0,use_sigaltstack=0,disable_coredump=0 export LSAN_OPTIONS=abort_on_error=1,use_sigaltstack=0,suppressions=$(pwd)/tools/lsan.suppr export TSAN_OPTIONS=abort_on_error=1,halt_on_error=1,use_sigaltstack=0,suppressions=$(pwd)/tools/tsan.suppr export UBSAN_OPTIONS=halt_on_error=1,print_stacktrace=1,use_sigaltstack=0,suppressions=$(pwd)/tools/ubsan.suppr export CC=clang-9 - ./autogen.sh - ./configure --enable-maintainer-mode --with-unwind --enable-debugging-symbols --disable-stack-protector --with-persistent-storage --enable-asan --enable-ubsan 
--enable-workspace-emulator - stage: test os: osx osx_image: xcode12.5 compiler: clang addons: homebrew: packages: - docutils - nghttp2 - sphinx-doc before_script: - export PATH="/usr/local/opt/sphinx-doc/bin:$PATH" - ./autogen.sh - ./configure --enable-maintainer-mode script: *script-common - <<: *test-linux stage: coverity env: - secure: "TndnHrwJk9FRSuVQWUk+ZrRc0jcNc0PW3TnvbRicIIwvYSLkMV5Y1tCQ5Jq/P98DA48/N/gf9DCAiFkxrNSKVeOY70FKgHYWlS130GhTv7r0c8zd+CdEXNORclcbBNV5F3Pli/LxZ+RUImjOfwcIcWV4eYv54Xv7aNFDAaDt4G9QlkSwXykLlZkoWLJQXFbhDBFioT1F1mucD9q9izEEeE+kqO1QH/IfobAq9v7/WrcS38sYI+0WvB1S0ajWuZJgRYqy1bocDNcQd05Vbr9NfAdJ9y+4VTuluZtTUyLxu3/0Tw8mAjHkcpOeNU26r3LnpdRk+5JuOFej/MrCmYRRawVfyvNGtu9RwcMkv8jl48TTs5kTf6UwFqJhe85QSlSi7IszfrE8HfB7B6u8eRr67rqjTr9k/BwEQyoBdK4JElQDj4A1GYHClomxgzmMZnVLvStnAm+IjdNlee4SfY0jj2KfPBd/v6Ms+LGVqNV9NDDKRQdOQD+H52MkIWs5Xu9fU5VaWP+xjFomA9aXex3r5FCssgyQ2P+HtWPdjNEtrkNezzfZ5b+VBVP87RdxfSqkZaRxi6gof0AgeTHWoi7GN1scseiKLxxCI7C0dfQgKrXTN7mZdcED1MMYdiaSI9mlSYQDDUHMQGeY1n3a9D6bUcC/TcmYo524PoTFBZgbbYM=" before_script: - curl --data "token=$COVTOKEN&project=varnish" --insecure -o coverity_tool.tgz https://scan.coverity.com/download/cxx/linux64 - tar xfz coverity_tool.tgz - export PATH=$PATH:$(echo $(pwd)/cov-analysis-*/bin) script: - ./autogen.sh - ./configure --enable-maintainer-mode --with-unwind - cov-build --dir cov-int make - tar cfz varnish.tgz cov-int - curl --form token="$COVTOKEN" --form email=varnish-dev@varnish-cache.org --form file=@varnish.tgz --form version="$TRAVIS_COMMIT" --form description="$TRAVIS_BRANCH" --insecure 'https://scan.coverity.com/builds?project=varnish' stages: - name: test if: type != cron - name: sanitizers if: type != cron AND type != pull_request - name: coverity if: type = cron notifications: irc: if: branch = master AND repo = varnishcache/varnish-cache channels: - "irc.linpro.no#varnish-hacking" on_success: change use_notice: true varnish-7.5.0/CONTRIBUTING000066400000000000000000000015651457605730600150530ustar00rootroot00000000000000Contributing to Varnish Cache ============================= Official development tree is here: https://github.com/varnishcache/varnish-cache These days we prefer patches as pull requests directly to that tree. Bugreports go there too. Our main project communication is through our developer IRC channel:: #varnish-hacking on server irc.linpro.no (That channel is not for user questions, use the #varnish channel for that.) Mondays at 15:00 EU time we hold our weekly "bugwash" where we go through new (and old) tickets. It speeds things up a lot if you can join the channel and answer questions directly when we go over the ticket. Github pull requests -------------------- Pull requests are handled like other tickets. Trivial pull requests (fix typos, etc) are welcomed, but they may be committed by a core team member and the author credited in the commit message. varnish-7.5.0/ChangeLog000066400000000000000000000001371457605730600147650ustar00rootroot00000000000000 Please note that this file is no longer maintained. Please refer to the changes files in doc/varnish-7.5.0/INSTALL000066400000000000000000000013261457605730600142450ustar00rootroot00000000000000 Installation Instructions See https://varnish-cache.org/docs/trunk/installation/install_source.html for complete and up to date install instructions. 
This file only mentions the basic steps: * Install prerequesites * When building from the source repository, run sh autogen.sh * To build and install Varnish, run sh configure make make install Varnish will store run-time state in /var/run/varnishd; you may want to tune this using configure's --localstatedir parameter. Additional configure options of interest: --enable-developer-warnings enable strict warnings (default is NO) --enable-debugging-symbols enable debugging symbols (default is NO) varnish-7.5.0/LICENSE000066400000000000000000000026611457605730600142240ustar00rootroot00000000000000The compilation of software known as "Varnish Cache" is distributed under the following terms: Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2021 Varnish Software AS All rights reserved. SPDX-License-Identifier: BSD-2-Clause Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. varnish-7.5.0/Makefile.am000066400000000000000000000040251457605730600152470ustar00rootroot00000000000000ACLOCAL_AMFLAGS = -I m4 -I . SUBDIRS = include lib bin vmod etc doc man contrib TESTS = tools/magic_check.sh pkgconfigdir = $(libdir)/pkgconfig pkgconfig_DATA = varnishapi.pc m4dir = $(datadir)/aclocal m4_DATA = varnish.m4 varnish-legacy.m4 CLEANFILES = \ cscope.in.out \ cscope.out \ cscope.po.out \ witness.dot \ witness.svg EXTRA_DIST = \ $(TESTS) \ README.rst \ README.Packaging \ LICENSE \ autogen.sh \ varnishapi.pc.in \ varnish.m4 \ varnish-legacy.m4 \ vsc.am \ vtc.am \ wflags.py CONFIGURE_DEPENDENCIES = wflags.py AM_DISTCHECK_CONFIGURE_FLAGS = \ --enable-developer-warnings \ --enable-debugging-symbols \ --enable-dependency-tracking \ --with-contrib \ CFLAGS="$(EXTCFLAGS)" if WITH_UNWIND AM_DISTCHECK_CONFIGURE_FLAGS += --with-unwind endif install-data-local: $(install_sh) -d -m 0755 $(DESTDIR)$(localstatedir)/varnish distclean-local: find . '(' -name '*.gcda' -o -name '*.gcda' ')' -exec rm '{}' ';' distcleancheck_listfiles = \ find . -type f -exec sh -c 'test -f $(srcdir)/$$1 || echo $$1' \ sh '{}' ';' vtest-clean: $(am__remove_distdir) # XXX: This is a hack to ensure we have a built source tree when # running make dist If we had used non-recursive make we could have # solved it better, but we don't, so use this at least for now. 
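# (LICENSE is an arbitrary, always-distributed file; giving it a dependency
# on "all" forces a full build of the tree before it is packaged.)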
LICENSE: all # XXX: This is a similar hack to ensure we have a built varnishtest # (and technically any other binary that may be involved in a VTC) # before we try to run tests anywhere in the tree. check-recursive: all # XXX: This is the exact same hack since some parts of the documentation # are generated as regular targets but needed by the html special target. html-recursive: all cscope: -rm -f cscope* find . -name '*.[hcS]' > cscope.files cscope -b gcov_digest: ${PYTHON} tools/gcov_digest.py -o _gcov witness.dot: all $(MAKE) check AM_VTC_LOG_FLAGS=-pdebug=+witness $(AM_V_GEN) $(srcdir)/tools/witness.sh witness.dot bin/varnishtest/ \ vmod/ .dot.svg: $(AM_V_GEN) $(DOT) -Tsvg $< >$@ witness: witness.svg .PHONY: cscope witness.dot varnish-7.5.0/README.Packaging000066400000000000000000000014321457605730600157550ustar00rootroot00000000000000Packaging ========= Varnish Cache packaging files are kept outside of the main distribution. The main reason for this is to decouple the development work from the packaging work. We want to be able to tag a release and do a tarball release without having to wait for the packagers to finish their work/changes. Official packages ----------------- The official Debian and Redhat packages are built by the Varnish Cache team and made available on https://packagecloud.io/varnishcache/ . Packaging files and scripts for Debian and Redhat: https://github.com/varnishcache/pkg-varnish-cache Third-party packages -------------------- Varnish Cache is built and packaged in many different operating systems and distributions. Please see https://varnish-cache.org/ for more information. varnish-7.5.0/README.rst000066400000000000000000000016761457605730600147130ustar00rootroot00000000000000.. Copyright (c) 2016-2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Varnish Cache ============= This is Varnish Cache, the high-performance HTTP accelerator. Documentation and additional information about Varnish is available on https://www.varnish-cache.org/ Technical questions about Varnish and this release should be addressed to . Please see CONTRIBUTING for how to contribute patches and report bugs. For questions about commercial support and services related to Varnish see the `Varnish HTTP Cache Business page `_ . .. |ccibadge| image:: https://circleci.com/gh/varnishcache/varnish-cache/tree/master.svg?style=svg :target: https://circleci.com/gh/varnishcache/varnish-cache/tree/master .. 
_vtest: https://varnish-cache.org/vtest/ CircleCI tests: |ccibadge| More platforms are tested via vtest_ varnish-7.5.0/autogen.des000077500000000000000000000017001457605730600153520ustar00rootroot00000000000000#!/bin/sh # # Use this when doing code development SRCDIR=${SRCDIR:-$(dirname "$0")} set -ex make -k distclean > /dev/null 2>&1 || true # Prefer CLANG if we have it, and have not given preferences if command -v clang >/dev/null && test -z "$CC" ; then CC=clang export CC fi if [ "x$DST" != "x" ] ; then : elif [ "x`uname -o`" = "xFreeBSD" ] ; then DST="--prefix=/usr/local --mandir=/usr/local/man" else DST="--prefix=/opt/varnish --mandir=/opt/varnish/man" fi PERSISTENT=--with-persistent-storage if [ `uname -m` = "s390x" ] ; then # ASLR makes this impossible PERSISTENT= fi rm -f $SRCDIR/configure autoreconf -i -v $SRCDIR # NB: Workaround for make distcheck not working with # NB: FreeBSD's make on -current # env MAKE=gmake \ $SRCDIR/configure \ $DST \ --enable-maintainer-mode \ --enable-developer-warnings \ --enable-debugging-symbols \ --enable-dependency-tracking \ ${PERSISTENT} \ --with-contrib \ "$@" varnish-7.5.0/autogen.sh000077500000000000000000000001111457605730600152040ustar00rootroot00000000000000#!/bin/sh # # Use autogen.des when doing code development autoreconf -i varnish-7.5.0/bin/000077500000000000000000000000001457605730600137625ustar00rootroot00000000000000varnish-7.5.0/bin/Makefile.am000066400000000000000000000001771457605730600160230ustar00rootroot00000000000000# SUBDIRS = \ varnishadm \ varnishd \ varnishhist \ varnishlog \ varnishncsa \ varnishstat \ varnishtop \ varnishtest varnish-7.5.0/bin/flint.lnt000066400000000000000000000026311457605730600156170ustar00rootroot00000000000000// Copyright (c) 2010-2017 Varnish Software AS // SPDX-License-Identifier: BSD-2-Clause // See LICENSE file for full text of license -passes=3 -ffc // No automatic custody -printf(2, VSB_printf) /////////////////////////////////////////////////////////////////////// // miniobj // -emacro(755, CAST_OBJ) // glob macro not ref -emacro(779, REPLACE) // string constant != -emacro(774, REPLACE) // if(bool) always true -emacro(506, REPLACE) // const bool /////////////////////////////////////////////////////////////////////// // VMB -emacro(755, VMB) // glob macro not ref -emacro(755, VRMB) // glob macro not ref -emacro(755, VWMB) // glob macro not ref /////////////////////////////////////////////////////////////////////// // System library/POSIX related /////////////////////////////////////////////////////////////////////// // Fix strchr() semtics, it can only return NULL if arg2 != 0 -sem(strchr, 1p, type(1), 2n == 0 ? (@p < 1p) : (@p < 1p || @p == 0 )) +typename(844) -etype(844, struct pthread *) -sem(pthread_create, custodial(4)) -emacro(413, offsetof) // likely null pointer -emacro(736, isnan) // loss of prec. +libh(/usr/include/curses.h) -elib(659) // no tokens after struct def. -elib(123) // macro def. 
with arg at, (just warn) +libh(/usr/include/libunwind.h) -elib(849) -emacro(702, WEXITSTATUS) // signed shift right -e825 // control flows into case/default without -fallthrough comment varnish-7.5.0/bin/varnishadm/000077500000000000000000000000001457605730600161165ustar00rootroot00000000000000varnish-7.5.0/bin/varnishadm/Makefile.am000066400000000000000000000005501457605730600201520ustar00rootroot00000000000000# AM_CPPFLAGS = \ -I$(top_srcdir)/include \ -I$(top_builddir)/include bin_PROGRAMS = varnishadm varnishadm_SOURCES = varnishadm.c varnishadm_CFLAGS = @LIBEDIT_CFLAGS@ varnishadm_LDADD = \ $(top_builddir)/lib/libvarnishapi/libvarnishapi.la \ $(top_builddir)/lib/libvarnish/libvarnish.la \ ${PTHREAD_LIBS} ${RT_LIBS} ${NET_LIBS} @LIBEDIT_LIBS@ ${LIBM} varnish-7.5.0/bin/varnishadm/flint.lnt000066400000000000000000000002301457605730600177440ustar00rootroot00000000000000// Copyright (c) 2017-2018 Varnish Software AS // SPDX-License-Identifier: BSD-2-Clause // See LICENSE file for full text of license -sem(usage, r_no) varnish-7.5.0/bin/varnishadm/flint.sh000066400000000000000000000003701457605730600175660ustar00rootroot00000000000000#!/bin/sh # # Copyright (c) 2017-2021 Varnish Software AS # SPDX-License-Identifier: BSD-2-Clause # See LICENSE file for full text of license FLOPS=' *.c ../../lib/libvarnishapi/flint.lnt ../../lib/libvarnishapi/*.c ' ../../tools/flint_skel.sh varnish-7.5.0/bin/varnishadm/varnishadm.c000066400000000000000000000266271457605730600204330ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Cecilie Fritzvold * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include "config.h" #include #include #include #if defined(HAVE_EDIT_READLINE_READLINE_H) # include #elif defined(HAVE_LIBEDIT) # include #elif defined (HAVE_READLINE_READLINE_H) # include # ifdef HAVE_READLINE_HISTORY_H # include # else # error missing history.h - this should have got caught in configure # endif #else # error missing readline.h - this should have got caught in configure #endif #include #include #include #include #include #include #include #include "vdef.h" #include "vqueue.h" #include "vapi/vsig.h" #include "vapi/vsm.h" #include "vas.h" #include "vcli.h" #include "vjsn.h" #include "vtcp.h" #define RL_EXIT(status) \ do { \ rl_callback_handler_remove(); \ exit(status); \ } while (0) enum pass_mode_e { pass_script, pass_interactive, }; static double timeout = 5; static int p_arg = 0; static int line_sock; static void cli_write(int sock, const char *s) { int i, l; i = strlen(s); l = write (sock, s, i); if (i == l) return; perror("Write error CLI socket"); RL_EXIT(1); } /* * This function establishes a connection to the specified ip and port and * sends a command to varnishd. If varnishd returns an OK status, the result * is printed and 0 returned. Else, an error message is printed and 1 is * returned */ static int cli_sock(const char *T_arg, const char *S_arg) { int fd; int sock; unsigned status; char *answer = NULL; char buf[CLI_AUTH_RESPONSE_LEN + 1]; const char *err; sock = VTCP_open(T_arg, NULL, timeout, &err); if (sock < 0) { fprintf(stderr, "Connection failed (%s): %s\n", T_arg, err); return (-1); } (void)VCLI_ReadResult(sock, &status, &answer, timeout); if (status == CLIS_AUTH) { if (S_arg == NULL) { fprintf(stderr, "Authentication required\n"); free(answer); closefd(&sock); return (-1); } fd = open(S_arg, O_RDONLY); if (fd < 0) { fprintf(stderr, "Cannot open \"%s\": %s\n", S_arg, strerror(errno)); closefd(&sock); free(answer); return (-1); } VCLI_AuthResponse(fd, answer, buf); closefd(&fd); free(answer); cli_write(sock, "auth "); cli_write(sock, buf); cli_write(sock, "\n"); (void)VCLI_ReadResult(sock, &status, &answer, timeout); } if (status != CLIS_OK && status != CLIS_TRUNCATED) { fprintf(stderr, "Rejected %u\n%s\n", status, answer); closefd(&sock); free(answer); return (-1); } free(answer); cli_write(sock, "ping\n"); (void)VCLI_ReadResult(sock, &status, &answer, timeout); if (status != CLIS_OK || strstr(answer, "PONG") == NULL) { fprintf(stderr, "No pong received from server\n"); closefd(&sock); free(answer); return (-1); } free(answer); return (sock); } static unsigned pass_answer(int fd, enum pass_mode_e mode) { unsigned u, status; char *answer = NULL; u = VCLI_ReadResult(fd, &status, &answer, timeout); if (u) { if (status == CLIS_COMMS) { fprintf(stderr, "%s\n", answer); RL_EXIT(2); } if (answer) fprintf(stderr, "%s\n", answer); RL_EXIT(1); } if (p_arg && answer != NULL) { printf("%-3u %-8zu\n%s", status, strlen(answer), answer); } else if (p_arg) { printf("%-3u %-8u\n", status, 0U); } else { if (mode == pass_interactive) printf("%u\n", status); if (answer != NULL) printf("%s\n", answer); if (status == CLIS_TRUNCATED) printf("[response was truncated]\n"); } free(answer); (void)fflush(stdout); return (status); } static void v_noreturn_ do_args(int sock, int argc, char * const *argv) { int i; unsigned status; for (i = 0; i < argc; i++) { /* XXX: We should really CLI-quote these */ if (i > 0) cli_write(sock, " "); cli_write(sock, argv[i]); } cli_write(sock, "\n"); status = pass_answer(sock, pass_script); closefd(&sock); if (status == CLIS_OK || status == 
CLIS_TRUNCATED) exit(0); if (!p_arg) fprintf(stderr, "Command failed with error code %u\n", status); exit(1); } /* Callback for readline, doesn't take a private pointer, so we need * to have a global variable. */ static void v_matchproto_() send_line(char *l) { if (l) { cli_write(line_sock, l); cli_write(line_sock, "\n"); if (*l) add_history(l); rl_callback_handler_install("varnish> ", send_line); } else { RL_EXIT(0); } } static char * command_generator (const char *text, int state) { static struct vjsn *jsn_cmds; static const struct vjsn_val *jv; struct vjsn_val *jv2; unsigned u; char *answer = NULL; const char *err; if (!state) { cli_write(line_sock, "help -j\n"); u = VCLI_ReadResult(line_sock, NULL, &answer, timeout); if (u) { free(answer); return (NULL); } jsn_cmds = vjsn_parse(answer, &err); free(answer); if (err != NULL) return (NULL); AN(jsn_cmds); AN(jsn_cmds->value); assert (vjsn_is_array(jsn_cmds->value)); jv = VTAILQ_FIRST(&jsn_cmds->value->children); assert (vjsn_is_number(jv)); jv = VTAILQ_NEXT(jv, list); assert (vjsn_is_array(jv)); jv = VTAILQ_NEXT(jv, list); assert (vjsn_is_number(jv)); jv = VTAILQ_NEXT(jv, list); } while (jv != NULL) { assert (vjsn_is_object(jv)); jv2 = VTAILQ_FIRST(&jv->children); AN(jv2); jv = VTAILQ_NEXT(jv, list); assert (vjsn_is_string(jv2)); assert (!strcmp(jv2->name, "request")); if (!strncmp(text, jv2->value, strlen(text))) return (strdup(jv2->value)); } vjsn_delete(&jsn_cmds); return (NULL); } static char ** varnishadm_completion (const char *text, int start, int end) { char **matches; (void)end; matches = (char **)NULL; if (start == 0) matches = rl_completion_matches(text, command_generator); return (matches); } /* * No arguments given, simply pass bytes on stdin/stdout and CLI socket * Send a "banner" to varnish, to provoke a welcome message. 
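 * The CLI socket and stdin are multiplexed with poll(2); readline is driven
 * through its callback interface (rl_callback_handler_install() installs
 * send_line() as the line handler and rl_callback_read_char() feeds it), so
 * prompting never blocks the socket.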
*/ static void v_noreturn_ interactive(int sock) { struct pollfd fds[2]; int i; unsigned status; line_sock = sock; rl_already_prompted = 1; rl_callback_handler_install("varnish> ", send_line); rl_attempted_completion_function = varnishadm_completion; fds[0].fd = sock; fds[0].events = POLLIN; fds[1].fd = 0; fds[1].events = POLLIN; cli_write(sock, "banner\n"); while (1) { i = poll(fds, 2, -1); if (i == -1 && errno == EINTR) { continue; } assert(i > 0); if (fds[0].revents & POLLIN) { /* Get rid of the prompt, kinda hackish */ printf("\r \r"); status = pass_answer(fds[0].fd, pass_interactive); rl_forced_update_display(); if (status == CLIS_CLOSE) RL_EXIT(0); } if (fds[1].revents & POLLIN) { rl_callback_read_char(); } } } /* * No arguments given, simply pass bytes on stdin/stdout and CLI socket */ static void v_noreturn_ pass(int sock) { struct pollfd fds[2]; char buf[1024]; int i; ssize_t n; int busy = 0; fds[0].fd = sock; fds[0].events = POLLIN; fds[1].fd = 0; fds[1].events = POLLIN; while (1) { i = poll(fds, 2, -1); if (i == -1 && errno == EINTR) { continue; } assert(i > 0); if (fds[0].revents & POLLIN) { (void)pass_answer(fds[0].fd, pass_script); busy = 0; if (fds[1].fd < 0) RL_EXIT(0); } if (fds[1].revents & POLLIN || fds[1].revents & POLLHUP) { n = read(fds[1].fd, buf, sizeof buf - 1); if (n == 0) { if (!busy) RL_EXIT(0); fds[1].fd = -1; } else if (n < 0) { RL_EXIT(0); } else { busy = 1; buf[n] = '\0'; cli_write(sock, buf); } } } } static void v_noreturn_ usage(int status) { fprintf(stderr, "Usage: varnishadm [-h] [-n ident] [-p] [-S secretfile] " "[-T [address]:port] [-t timeout] [command [...]]\n"); fprintf(stderr, "\t-n is mutually exclusive with -S and -T\n"); exit(status); } static int n_arg_sock(const char *n_arg, const char *t_arg) { char *T_arg, *T_start; char *S_arg; struct vsm *vsm; char *p; int sock; vsm = VSM_New(); AN(vsm); if (VSM_Arg(vsm, 'n', n_arg) < 0 || VSM_Arg(vsm, 't', t_arg) < 0 || VSM_Attach(vsm, STDERR_FILENO) < 0) { fprintf(stderr, "%s\n", VSM_Error(vsm)); VSM_Destroy(&vsm); return (-1); } T_start = T_arg = VSM_Dup(vsm, "Arg", "-T"); S_arg = VSM_Dup(vsm, "Arg", "-S"); VSM_Destroy(&vsm); if (T_arg == NULL) { fprintf(stderr, "No -T in shared memory\n"); return (-1); } sock = -1; while (*T_arg) { p = strchr(T_arg, '\n'); AN(p); *p = '\0'; sock = cli_sock(T_arg, S_arg); if (sock >= 0) break; T_arg = p + 1; } free(T_start); free(S_arg); return (sock); } static int t_arg_timeout(const char *t_arg) { char *p = NULL; AN(t_arg); timeout = strtod(t_arg, &p); if ((p != NULL && *p != '\0') || !isfinite(timeout) || timeout < 0) { fprintf(stderr, "-t: Invalid argument: %s", t_arg); return (-1); } return (1); } #define OPTARG "hn:pS:T:t:" int main(int argc, char * const *argv) { const char *T_arg = NULL; const char *S_arg = NULL; const char *n_arg = NULL; const char *t_arg = NULL; int opt, sock; if (argc == 2 && !strcmp(argv[1], "--optstring")) { printf(OPTARG "\n"); exit(0); } /* * By default linux::getopt(3) mangles the argv order, such that * varnishadm -n bla param.set foo -bar * gets interpreted as * varnishadm -n bla -bar param.set foo * The '+' stops that from happening * See #1496 */ while ((opt = getopt(argc, argv, "+" OPTARG)) != -1) { switch (opt) { case 'h': /* Usage help */ usage(0); case 'n': n_arg = optarg; break; case 'p': p_arg = 1; break; case 'S': S_arg = optarg; break; case 'T': T_arg = optarg; break; case 't': t_arg = optarg; break; default: usage(1); } } argc -= optind; argv += optind; if (T_arg != NULL) { if (n_arg != NULL) usage(1); sock = cli_sock(T_arg, 
S_arg); } else { if (S_arg != NULL) usage(1); sock = n_arg_sock(n_arg, t_arg); } if (sock < 0) exit(2); if (t_arg != NULL && t_arg_timeout(t_arg) < 0) exit(2); if (argc > 0) { VSIG_Arm_int(); VSIG_Arm_term(); do_args(sock, argc, argv); NEEDLESS(exit(0)); } if (isatty(0) && !p_arg) interactive(sock); else pass(sock); NEEDLESS(exit(0)); } varnish-7.5.0/bin/varnishd/000077500000000000000000000000001457605730600156005ustar00rootroot00000000000000varnish-7.5.0/bin/varnishd/Makefile.am000066400000000000000000000141731457605730600176420ustar00rootroot00000000000000# AM_CPPFLAGS = \ -I$(top_srcdir)/include \ -I$(top_srcdir)/lib/libvgz \ -I$(top_builddir)/bin/varnishd \ -I$(top_builddir)/include \ -I$(top_builddir)/lib/libvsc sbin_PROGRAMS = varnishd varnishd_SOURCES = \ cache/cache_acceptor.c \ cache/cache_backend.c \ cache/cache_backend_probe.c \ cache/cache_ban.c \ cache/cache_ban_build.c \ cache/cache_ban_lurker.c \ cache/cache_busyobj.c \ cache/cache_cli.c \ cache/cache_conn_pool.c \ cache/cache_deliver_proc.c \ cache/cache_director.c \ cache/cache_esi_deliver.c \ cache/cache_esi_fetch.c \ cache/cache_esi_parse.c \ cache/cache_expire.c \ cache/cache_fetch.c \ cache/cache_fetch_proc.c \ cache/cache_gzip.c \ cache/cache_hash.c \ cache/cache_http.c \ cache/cache_lck.c \ cache/cache_main.c \ cache/cache_mempool.c \ cache/cache_obj.c \ cache/cache_panic.c \ cache/cache_pool.c \ cache/cache_range.c \ cache/cache_req.c \ cache/cache_req_body.c \ cache/cache_req_fsm.c \ cache/cache_rfc2616.c \ cache/cache_session.c \ cache/cache_shmlog.c \ cache/cache_vary.c \ cache/cache_vcl.c \ cache/cache_vpi.c \ cache/cache_vrt.c \ cache/cache_vrt_filter.c \ cache/cache_vrt_priv.c \ cache/cache_vrt_re.c \ cache/cache_vrt_var.c \ cache/cache_vrt_vcl.c \ cache/cache_vrt_vmod.c \ cache/cache_wrk.c \ cache/cache_ws_common.c \ common/common_vsc.c \ common/common_vsmw.c \ common/common_vext.c \ hash/hash_classic.c \ hash/hash_critbit.c \ hash/hash_simple_list.c \ hash/mgt_hash.c \ hpack/vhp_decode.c \ hpack/vhp_table.c \ http1/cache_http1_deliver.c \ http1/cache_http1_fetch.c \ http1/cache_http1_fsm.c \ http1/cache_http1_line.c \ http1/cache_http1_pipe.c \ http1/cache_http1_proto.c \ http1/cache_http1_vfp.c \ http2/cache_http2_deliver.c \ http2/cache_http2_hpack.c \ http2/cache_http2_panic.c \ http2/cache_http2_proto.c \ http2/cache_http2_send.c \ http2/cache_http2_session.c \ mgt/mgt_acceptor.c \ mgt/mgt_child.c \ mgt/mgt_cli.c \ mgt/mgt_jail.c \ mgt/mgt_jail_solaris.c \ mgt/mgt_jail_solaris_tbl.h \ mgt/mgt_jail_unix.c \ mgt/mgt_main.c \ mgt/mgt_param.c \ mgt/mgt_param_tcp.c \ mgt/mgt_param_tweak.c \ mgt/mgt_shmem.c \ mgt/mgt_symtab.c \ mgt/mgt_util.c \ mgt/mgt_vcc.c \ mgt/mgt_vcl.c \ proxy/cache_proxy_proto.c \ proxy/cache_proxy.h \ storage/mgt_stevedore.c \ storage/stevedore.c \ storage/stevedore_utils.c \ storage/storage_file.c \ storage/storage_lru.c \ storage/storage_malloc.c \ storage/storage_debug.c \ storage/storage_simple.c \ storage/storage_umem.c \ waiter/cache_waiter.c \ waiter/cache_waiter_epoll.c \ waiter/cache_waiter_kqueue.c \ waiter/cache_waiter_poll.c \ waiter/cache_waiter_ports.c \ waiter/mgt_waiter.c if ENABLE_WORKSPACE_EMULATOR varnishd_SOURCES += \ cache/cache_ws_emu.c else varnishd_SOURCES += \ cache/cache_ws.c endif if WITH_PERSISTENT_STORAGE varnishd_SOURCES += \ storage/mgt_storage_persistent.c \ storage/storage_persistent.c \ storage/storage_persistent_silo.c \ storage/storage_persistent_subr.c endif nodist_varnishd_SOURCES = \ builtin_vcl.c noinst_HEADERS = \ cache/cache_ban.h \ 
cache/cache_conn_pool.h \ cache/cache_esi.h \ cache/cache_obj.h \ cache/cache_objhead.h \ cache/cache_pool.h \ cache/cache_transport.h \ cache/cache_vcl.h \ cache/cache_vgz.h \ common/heritage.h \ common/vsmw.h \ hash/hash_slinger.h \ hpack/vhp.h \ http1/cache_http1.h \ http2/cache_http2.h \ mgt/mgt.h \ mgt/mgt_vcl.h \ mgt/mgt_param.h \ storage/storage.h \ storage/storage_persistent.h \ storage/storage_simple.h \ waiter/mgt_waiter.h \ waiter/waiter_priv.h # Headers for use with vmods nobase_pkginclude_HEADERS = \ cache/cache.h \ cache/cache_backend.h \ cache/cache_director.h \ cache/cache_filter.h \ cache/cache_varnishd.h \ common/common_param.h \ waiter/waiter.h vcldir=$(datarootdir)/$(PACKAGE)/vcl varnishd_CFLAGS = \ -DNOT_IN_A_VMOD \ -DVARNISH_STATE_DIR='"${VARNISH_STATE_DIR}"' \ -DVARNISH_VMOD_DIR='"${vmoddir}"' \ -DVARNISH_VCL_DIR='"${pkgsysconfdir}:${vcldir}"' varnishd_LDFLAGS = -export-dynamic varnishd_LDADD = \ $(top_builddir)/lib/libvcc/libvcc.la \ $(top_builddir)/lib/libvarnish/libvarnish.la \ $(top_builddir)/lib/libvsc/libvsc.la \ $(top_builddir)/lib/libvgz/libvgz.la \ @JEMALLOC_LDADD@ \ ${DL_LIBS} ${PTHREAD_LIBS} ${NET_LIBS} ${RT_LIBS} ${LIBM} if WITH_UNWIND varnishd_CFLAGS += -DUNW_LOCAL_ONLY ${LIBUNWIND_CFLAGS} varnishd_LDADD += ${LIBUNWIND_LIBS} endif noinst_PROGRAMS = vhp_gen_hufdec vhp_gen_hufdec_SOURCES = hpack/vhp_gen_hufdec.c vhp_gen_hufdec_LDADD = $(top_builddir)/lib/libvarnish/libvarnish.la noinst_PROGRAMS += vhp_table_test vhp_table_test_SOURCES = hpack/vhp_table.c vhp_table_test_CFLAGS = -DTABLE_TEST_DRIVER vhp_table_test_LDADD = $(top_builddir)/lib/libvarnish/libvarnish.la noinst_PROGRAMS += vhp_decode_test vhp_decode_test_SOURCES = hpack/vhp_decode.c hpack/vhp_table.c vhp_decode_test_CFLAGS = -DDECODE_TEST_DRIVER vhp_decode_test_LDADD = $(top_builddir)/lib/libvarnish/libvarnish.la noinst_PROGRAMS += esi_parse_fuzzer esi_parse_fuzzer_SOURCES = \ cache/cache_ws_emu.c \ cache/cache_ws_common.c \ cache/cache_esi_parse.c \ fuzzers/esi_parse_fuzzer.c esi_parse_fuzzer_CFLAGS = \ -DNOT_IN_A_VMOD -DENABLE_WORKSPACE_EMULATOR esi_parse_fuzzer_LDADD = \ $(top_builddir)/lib/libvarnish/libvarnish.la \ $(top_builddir)/lib/libvgz/libvgz.la if ENABLE_OSS_FUZZ esi_parse_fuzzer_LDFLAGS = $(LIB_FUZZING_ENGINE) else esi_parse_fuzzer_CFLAGS += -DTEST_DRIVER endif TESTS = vhp_table_test vhp_decode_test # # Turn the builtin.vcl file into a C-string we can include in the program. # builtin_vcl.c: builtin.vcl echo '/*' > $@ echo ' * NB: This file is machine generated, DO NOT EDIT!' >> $@ echo ' *' >> $@ echo ' * Edit builtin.vcl instead and run make' >> $@ echo ' *' >> $@ echo ' */' >> $@ echo '#include "config.h"' >> $@ echo '#include "mgt/mgt.h"' >> $@ echo '' >> $@ echo 'const char * const builtin_vcl =' >> $@ sed -e 's/"/\\"/g' \ -e 's/$$/\\n"/' \ -e 's/^/ "/' $(srcdir)/builtin.vcl >> $@ echo ';' >> $@ EXTRA_DIST = builtin.vcl vhp_hufdec.h: vhp_gen_hufdec $(AM_V_GEN) ./vhp_gen_hufdec > vhp_hufdec.h_ mv -f vhp_hufdec.h_ vhp_hufdec.h DISTCLEANFILES = builtin_vcl.c BUILT_SOURCES = vhp_hufdec.h DISTCLEANFILES += vhp_hufdec.h varnish-7.5.0/bin/varnishd/builtin.vcl000066400000000000000000000141071457605730600177570ustar00rootroot00000000000000#- # Copyright (c) 2006 Verdens Gang AS # Copyright (c) 2006-2015 Varnish Software AS # All rights reserved. # # Author: Poul-Henning Kamp # # SPDX-License-Identifier: BSD-2-Clause # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # 1. 
Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE # ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS # OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF # SUCH DAMAGE. # # This is the builtin VCL code vcl 4.0; ####################################################################### # Client side sub vcl_recv { call vcl_builtin_recv; return (hash); } sub vcl_builtin_recv { call vcl_req_host; call vcl_req_method; call vcl_req_authorization; call vcl_req_cookie; } sub vcl_req_host { if (req.http.host ~ "[[:upper:]]") { set req.http.host = req.http.host.lower(); } if (!req.http.host && req.esi_level == 0 && req.proto == "HTTP/1.1") { # In HTTP/1.1, Host is required. return (synth(400)); } } sub vcl_req_method { if (req.method == "PRI") { # This will never happen in properly formed traffic. return (synth(405)); } if (req.method != "GET" && req.method != "HEAD" && req.method != "PUT" && req.method != "POST" && req.method != "TRACE" && req.method != "OPTIONS" && req.method != "DELETE" && req.method != "PATCH") { # Non-RFC2616 or CONNECT which is weird. return (pipe); } if (req.method != "GET" && req.method != "HEAD") { # We only deal with GET and HEAD by default. return (pass); } } sub vcl_req_authorization { if (req.http.Authorization) { # Not cacheable by default. return (pass); } } sub vcl_req_cookie { if (req.http.Cookie) { # Risky to cache by default. return (pass); } } sub vcl_pipe { call vcl_builtin_pipe; # By default "Connection: close" is set on all piped requests, to stop # connection reuse from sending future requests directly to the # (potentially) wrong backend. 
If you do want this to happen, you can # undo it here: # unset bereq.http.connection; return (pipe); } sub vcl_builtin_pipe { } sub vcl_pass { call vcl_builtin_pass; return (fetch); } sub vcl_builtin_pass { } sub vcl_hash { call vcl_builtin_hash; return (lookup); } sub vcl_builtin_hash { hash_data(req.url); if (req.http.host) { hash_data(req.http.host); } else { hash_data(server.ip); } } sub vcl_purge { call vcl_builtin_purge; return (synth(200, "Purged")); } sub vcl_builtin_purge { } sub vcl_hit { call vcl_builtin_hit; return (deliver); } sub vcl_builtin_hit { } sub vcl_miss { call vcl_builtin_miss; return (fetch); } sub vcl_builtin_miss { } sub vcl_deliver { call vcl_builtin_deliver; return (deliver); } sub vcl_builtin_deliver { } # # We can come here "invisibly" with the following errors: 500 & 503 # sub vcl_synth { call vcl_builtin_synth; return (deliver); } sub vcl_builtin_synth { set resp.http.Content-Type = "text/html; charset=utf-8"; set resp.http.Retry-After = "5"; set resp.body = {" "} + resp.status + " " + resp.reason + {"

  </head>
  <body>
    <h1>Error "} + resp.status + " " + resp.reason + {"</h1>
    <p>"} + resp.reason + {"</p>
    <h3>Guru Meditation:</h3>
    <p>XID: "} + req.xid + {"</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>
"}; } ####################################################################### # Backend Fetch sub vcl_backend_fetch { call vcl_builtin_backend_fetch; return (fetch); } sub vcl_builtin_backend_fetch { if (bereq.method == "GET") { unset bereq.body; } } sub vcl_backend_response { call vcl_builtin_backend_response; return (deliver); } sub vcl_builtin_backend_response { if (bereq.uncacheable) { return (deliver); } call vcl_beresp_stale; call vcl_beresp_cookie; call vcl_beresp_control; call vcl_beresp_vary; } sub vcl_beresp_stale { if (beresp.ttl <= 0s) { call vcl_beresp_hitmiss; } } sub vcl_beresp_cookie { if (beresp.http.Set-Cookie) { call vcl_beresp_hitmiss; } } sub vcl_beresp_control { if (beresp.http.Surrogate-control ~ "(?i)no-store" || (!beresp.http.Surrogate-Control && beresp.http.Cache-Control ~ "(?i:no-cache|no-store|private)")) { call vcl_beresp_hitmiss; } } sub vcl_beresp_vary { if (beresp.http.Vary == "*") { call vcl_beresp_hitmiss; } } sub vcl_beresp_hitmiss { set beresp.ttl = 120s; set beresp.uncacheable = true; return (deliver); } sub vcl_backend_error { call vcl_builtin_backend_error; return (deliver); } sub vcl_builtin_backend_error { set beresp.http.Content-Type = "text/html; charset=utf-8"; set beresp.http.Retry-After = "5"; set beresp.body = {" "} + beresp.status + " " + beresp.reason + {"

  </head>
  <body>
    <h1>Error "} + beresp.status + " " + beresp.reason + {"</h1>
    <p>"} + beresp.reason + {"</p>
    <h3>Guru Meditation:</h3>
    <p>XID: "} + bereq.xid + {"</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>
"}; } ####################################################################### # Housekeeping sub vcl_init { return (ok); } sub vcl_fini { return (ok); } varnish-7.5.0/bin/varnishd/cache/000077500000000000000000000000001457605730600166435ustar00rootroot00000000000000varnish-7.5.0/bin/varnishd/cache/cache.h000066400000000000000000000577441457605730600201000ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #ifdef VRT_H_INCLUDED # error "vrt.h included before cache.h - they are exclusive" #endif #ifdef CACHE_H_INCLUDED # error "cache.h included multiple times" #endif #include #include #include #include #include "vdef.h" #include "vrt.h" #define CACHE_H_INCLUDED // After vrt.h include. 
#include "miniobj.h" #include "vas.h" #include "vqueue.h" #include "vtree.h" #include "vapi/vsl_int.h" /*--------------------------------------------------------------------*/ struct vxids { uint64_t vxid; }; typedef struct vxids vxid_t; #define NO_VXID ((struct vxids){0}) #define IS_NO_VXID(x) ((x).vxid == 0) #define VXID_TAG(x) ((uintmax_t)((x).vxid & (VSL_CLIENTMARKER|VSL_BACKENDMARKER))) #define VXID(u) ((uintmax_t)((u.vxid) & VSL_IDENTMASK)) #define IS_SAME_VXID(x, y) ((x).vxid == (y).vxid) /*--------------------------------------------------------------------*/ struct body_status { const char *name; int nbr; int avail; int length_known; }; #define BODYSTATUS(U, l, n, a, k) extern const struct body_status BS_##U[1]; #include "tbl/body_status.h" typedef const struct body_status *body_status_t; typedef const char *hdr_t; /*--------------------------------------------------------------------*/ struct stream_close { unsigned magic; #define STREAM_CLOSE_MAGIC 0xc879c93d int idx; unsigned is_err; const char *name; const char *desc; }; extern const struct stream_close SC_NULL[1]; #define SESS_CLOSE(nm, stat, err, desc) \ extern const struct stream_close SC_##nm[1]; #include "tbl/sess_close.h" /*-------------------------------------------------------------------- * Indices into http->hd[] */ enum { #define SLTH(tag, ind, req, resp, sdesc, ldesc) ind, #include "tbl/vsl_tags_http.h" }; /*--------------------------------------------------------------------*/ struct ban; struct ban_proto; struct cli; struct http_conn; struct listen_sock; struct mempool; struct objcore; struct objhead; struct pool; struct req_step; struct sess; struct transport; struct vcf; struct VSC_lck; struct VSC_main; struct VSC_main_wrk; struct worker; struct worker_priv; #define DIGEST_LEN 32 /*--------------------------------------------------------------------*/ struct lock { void *priv; }; // Opaque /*-------------------------------------------------------------------- * Workspace structure for quick memory allocation. 
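 *
 * A rough sketch of the invariants, assuming the usual usage through the
 * WS_Alloc()/WS_ReserveAll()/WS_Release() interface declared further down:
 * s <= f <= e always, and f <= r <= e only while a reservation is
 * outstanding (r is NULL otherwise).  Allocations and releases move f
 * forward; WS_Reset() winds f back to a previously taken WS_Snapshot().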
*/ #define WS_ID_SIZE 4 struct ws { unsigned magic; #define WS_MAGIC 0x35fac554 char id[WS_ID_SIZE]; /* identity */ char *s; /* (S)tart of buffer */ char *f; /* (F)ree/front pointer */ char *r; /* (R)eserved length */ char *e; /* (E)nd of buffer */ }; /*-------------------------------------------------------------------- * */ struct http { unsigned magic; #define HTTP_MAGIC 0x6428b5c9 uint16_t shd; /* Size of hd space */ txt *hd; unsigned char *hdf; #define HDF_FILTER (1 << 0) /* Filtered by Connection */ /* NB: ->nhd and below zeroed/initialized by http_Teardown */ uint16_t nhd; /* Next free hd */ enum VSL_tag_e logtag; /* Must be SLT_*Method */ struct vsl_log *vsl; struct ws *ws; uint16_t status; uint8_t protover; }; /*--------------------------------------------------------------------*/ struct acct_req { #define ACCT(foo) uint64_t foo; #include "tbl/acct_fields_req.h" }; /*--------------------------------------------------------------------*/ struct acct_bereq { #define ACCT(foo) uint64_t foo; #include "tbl/acct_fields_bereq.h" }; /*--------------------------------------------------------------------*/ struct vsl_log { uint32_t *wlb, *wlp, *wle; unsigned wlr; vxid_t wid; }; /*--------------------------------------------------------------------*/ VRBT_HEAD(vrt_privs, vrt_priv); /* Worker pool stuff -------------------------------------------------*/ typedef void task_func_t(struct worker *wrk, void *priv); struct pool_task { VTAILQ_ENTRY(pool_task) list; task_func_t *func; void *priv; }; /* * tasks are taken off the queues in this order * * TASK_QUEUE_{REQ|STR} are new req's (H1/H2), and subject to queue limit. * * TASK_QUEUE_RUSH is req's returning from waiting list * * NOTE: When changing the number of classes, update places marked with * TASK_QUEUE_RESERVE in params.h */ enum task_prio { TASK_QUEUE_BO, TASK_QUEUE_RUSH, TASK_QUEUE_REQ, TASK_QUEUE_STR, TASK_QUEUE_VCA, TASK_QUEUE_BG, TASK_QUEUE__END }; #define TASK_QUEUE_HIGHEST_PRIORITY TASK_QUEUE_BO #define TASK_QUEUE_RESERVE TASK_QUEUE_BG #define TASK_QUEUE_LIMITED(prio) \ (prio == TASK_QUEUE_REQ || prio == TASK_QUEUE_STR) /*--------------------------------------------------------------------*/ struct worker { unsigned magic; #define WORKER_MAGIC 0x6391adcf int strangelove; struct worker_priv *wpriv; struct pool *pool; struct VSC_main_wrk *stats; struct vsl_log *vsl; // borrowed from req/bo struct pool_task task[1]; vtim_real lastused; struct v1l *v1l; pthread_cond_t cond; struct ws aws[1]; unsigned cur_method; unsigned seen_methods; struct wrk_vpi *vpi; }; /* Stored object ----------------------------------------------------- * This is just to encapsulate the fields owned by the stevedore */ struct storeobj { const struct stevedore *stevedore; void *priv; uint64_t priv2; }; /* Busy Objcore structure -------------------------------------------- * */ /* * The macro-states we expose outside the fetch code */ enum boc_state_e { #define BOC_STATE(U, l) BOS_##U, #include "tbl/boc_state.h" }; struct boc { unsigned magic; #define BOC_MAGIC 0x70c98476 unsigned refcount; struct lock mtx; pthread_cond_t cond; void *stevedore_priv; enum boc_state_e state; uint8_t *vary; uint64_t fetched_so_far; uint64_t delivered_so_far; uint64_t transit_buffer; }; /* Object core structure --------------------------------------------- * Objects have sideways references in the binary heap and the LRU list * and we want to avoid paging in a lot of objects just to move them up * or down the binheap or to move a unrelated object on the LRU list. 
* To avoid this we use a proxy object, objcore, to hold the relevant * housekeeping fields parts of an object. */ enum obj_attr { #define OBJ_FIXATTR(U, l, s) OA_##U, #define OBJ_VARATTR(U, l) OA_##U, #define OBJ_AUXATTR(U, l) OA_##U, #include "tbl/obj_attr.h" OA__MAX, }; enum obj_flags { #define OBJ_FLAG(U, l, v) OF_##U = v, #include "tbl/obj_attr.h" }; enum oc_flags { #define OC_FLAG(U, l, v) OC_F_##U = v, #include "tbl/oc_flags.h" }; enum oc_exp_flags { #define OC_EXP_FLAG(U, l, v) OC_EF_##U = v, #include "tbl/oc_exp_flags.h" }; struct objcore { unsigned magic; #define OBJCORE_MAGIC 0x4d301302 int refcnt; struct storeobj stobj[1]; struct objhead *objhead; struct boc *boc; vtim_real timer_when; VCL_INT hits; vtim_real t_origin; float ttl; float grace; float keep; uint8_t flags; uint8_t exp_flags; uint16_t oa_present; unsigned timer_idx; // XXX 4Gobj limit vtim_real last_lru; VTAILQ_ENTRY(objcore) hsh_list; VTAILQ_ENTRY(objcore) lru_list; VTAILQ_ENTRY(objcore) ban_list; VSTAILQ_ENTRY(objcore) exp_list; struct ban *ban; }; /* Busy Object structure --------------------------------------------- * * The busyobj structure captures the aspects of an object related to, * and while it is being fetched from the backend. * * One of these aspects will be how much has been fetched, which * streaming delivery will make use of. */ enum director_state_e { DIR_S_NULL = 0, DIR_S_HDRS = 1, DIR_S_BODY = 2, }; struct busyobj { unsigned magic; #define BUSYOBJ_MAGIC 0x23b95567 char *end; /* * All fields from retries and down are zeroed when the busyobj * is recycled. */ unsigned retries; struct req *req; struct sess *sp; struct worker *wrk; struct vfp_ctx *vfc; const char *vfp_filter_list; struct ws ws[1]; uintptr_t ws_bo; struct http *bereq0; struct http *bereq; struct http *beresp; struct objcore *bereq_body; struct objcore *stale_oc; struct objcore *fetch_objcore; const char *no_retry; struct http_conn *htc; struct pool_task fetch_task[1]; #define BERESP_FLAG(l, r, w, f, d) unsigned l:1; #define BEREQ_FLAG(l, r, w, d) BERESP_FLAG(l, r, w, 0, d) #include "tbl/bereq_flags.h" #include "tbl/beresp_flags.h" /* Timeouts */ vtim_dur connect_timeout; vtim_dur first_byte_timeout; vtim_dur between_bytes_timeout; vtim_dur task_deadline; /* Timers */ vtim_real t_first; /* First timestamp logged */ vtim_real t_resp; /* response received */ vtim_real t_prev; /* Previous timestamp logged */ /* Acct */ struct acct_bereq acct; const struct stevedore *storage; const struct director *director_req; const struct director *director_resp; enum director_state_e director_state; struct vcl *vcl; struct vsl_log vsl[1]; uint8_t digest[DIGEST_LEN]; struct vrt_privs privs[1]; uint16_t err_code; const char *err_reason; const char *client_identity; }; #define BUSYOBJ_TMO(bo, pfx, tmo) \ (isnan((bo)->tmo) ? 
cache_param->pfx##tmo : (bo)->tmo) /*--------------------------------------------------------------------*/ struct reqtop { unsigned magic; #define REQTOP_MAGIC 0x57fbda52 struct req *topreq; struct vcl *vcl0; struct vrt_privs privs[1]; }; struct req { unsigned magic; #define REQ_MAGIC 0xfb4abf6d body_status_t req_body_status; stream_close_t doclose; unsigned restarts; unsigned esi_level; /* Delivery mode */ unsigned res_mode; #define RES_ESI (1<<4) #define RES_PIPE (1<<7) const struct req_step *req_step; struct reqtop *top; /* esi_level == 0 request */ #define REQ_FLAG(l, r, w, d) unsigned l:1; #include "tbl/req_flags.h" uint16_t err_code; const char *err_reason; struct sess *sp; struct worker *wrk; struct pool_task task[1]; const struct transport *transport; void *transport_priv; VTAILQ_ENTRY(req) w_list; struct objcore *body_oc; /* The busy objhead we sleep on */ struct objhead *hash_objhead; /* Built Vary string == workspace reservation */ uint8_t *vary_b; uint8_t *vary_e; uint8_t digest[DIGEST_LEN]; vtim_dur d_ttl; vtim_dur d_grace; const struct stevedore *storage; const struct director *director_hint; struct vcl *vcl; uintptr_t ws_req; /* WS above request data */ /* Timestamps */ vtim_real t_first; /* First timestamp logged */ vtim_real t_prev; /* Previous timestamp logged */ vtim_real t_req; /* Headers complete */ vtim_real t_resp; /* Entry to last deliver/synth */ struct http_conn *htc; struct vfp_ctx *vfc; const char *client_identity; /* HTTP request */ struct http *http; struct http *http0; /* HTTP response */ struct http *resp; intmax_t resp_len; struct ws ws[1]; struct objcore *objcore; struct objcore *stale_oc; /* Deliver pipeline */ struct vdp_ctx *vdc; const char *vdp_filter_list; /* Transaction VSL buffer */ struct vsl_log vsl[1]; /* Temporary accounting */ struct acct_req acct; struct vrt_privs privs[1]; struct vcf *vcf; }; #define IS_TOPREQ(req) ((req)->top->topreq == (req)) /*-------------------------------------------------------------------- * Struct sess is a high memory-load structure because sessions typically * hang around the waiter for relatively long time. * * The size goal for struct sess + struct memitem is <512 bytes * * Getting down to the next relevant size (<256 bytes because of how malloc * works, is not realistic without a lot of code changes. */ enum sess_attr { #define SESS_ATTR(UP, low, typ, len) SA_##UP, #include "tbl/sess_attr.h" SA_LAST }; struct sess { unsigned magic; #define SESS_MAGIC 0x2c2f9c5a uint16_t sattr[SA_LAST]; struct listen_sock *listen_sock; int refcnt; int fd; vxid_t vxid; struct lock mtx; struct pool *pool; struct ws ws[1]; vtim_real t_open; /* fd accepted */ vtim_real t_idle; /* fd accepted or resp sent */ vtim_dur timeout_idle; vtim_dur timeout_linger; vtim_dur send_timeout; vtim_dur idle_send_timeout; }; #define SESS_TMO(sp, tmo) \ (isnan((sp)->tmo) ? 
cache_param->tmo : (sp)->tmo) /* Prototypes etc ----------------------------------------------------*/ /* cache_ban.c */ /* for constructing bans */ struct ban_proto *BAN_Build(void); const char *BAN_AddTest(struct ban_proto *, const char *, const char *, const char *); const char *BAN_Commit(struct ban_proto *b); void BAN_Abandon(struct ban_proto *b); /* cache_cli.c [CLI] */ extern pthread_t cli_thread; #define IS_CLI() (pthread_equal(pthread_self(), cli_thread)) #define ASSERT_CLI() do {assert(IS_CLI());} while (0) /* cache_http.c */ unsigned HTTP_estimate(unsigned nhttp); void HTTP_Clone(struct http *to, const struct http * const fm); void HTTP_Dup(struct http *to, const struct http * const fm); struct http *HTTP_create(void *p, uint16_t nhttp, unsigned); const char *http_Status2Reason(unsigned, const char **); int http_IsHdr(const txt *hh, hdr_t hdr); unsigned http_EstimateWS(const struct http *fm, unsigned how); void http_PutResponse(struct http *to, const char *proto, uint16_t status, const char *response); void http_FilterReq(struct http *to, const struct http *fm, unsigned how); void HTTP_Encode(const struct http *fm, uint8_t *, unsigned len, unsigned how); int HTTP_Decode(struct http *to, const uint8_t *fm); void http_ForceHeader(struct http *to, hdr_t, const char *val); void http_AppendHeader(struct http *to, hdr_t, const char *val); void http_PrintfHeader(struct http *to, const char *fmt, ...) v_printflike_(2, 3); void http_TimeHeader(struct http *to, const char *fmt, vtim_real now); const char * http_ViaHeader(void); void http_Proto(struct http *to); void http_SetHeader(struct http *to, const char *header); void http_SetH(struct http *to, unsigned n, const char *header); void http_ForceField(struct http *to, unsigned n, const char *t); void HTTP_Setup(struct http *, struct ws *, struct vsl_log *, enum VSL_tag_e); void http_Teardown(struct http *ht); int http_GetHdr(const struct http *hp, hdr_t, const char **ptr); int http_GetHdrToken(const struct http *hp, hdr_t, const char *token, const char **pb, const char **pe); int http_GetHdrField(const struct http *hp, hdr_t, const char *field, const char **ptr); double http_GetHdrQ(const struct http *hp, hdr_t, const char *field); ssize_t http_GetContentLength(const struct http *hp); ssize_t http_GetContentRange(const struct http *hp, ssize_t *lo, ssize_t *hi); const char * http_GetRange(const struct http *hp, ssize_t *lo, ssize_t *hi, ssize_t len); uint16_t http_GetStatus(const struct http *hp); int http_IsStatus(const struct http *hp, int); void http_SetStatus(struct http *to, uint16_t status, const char *reason); const char *http_GetMethod(const struct http *hp); int http_HdrIs(const struct http *hp, hdr_t, const char *val); void http_CopyHome(const struct http *hp); void http_Unset(struct http *hp, hdr_t); unsigned http_CountHdr(const struct http *hp, hdr_t); void http_CollectHdr(struct http *hp, hdr_t); void http_CollectHdrSep(struct http *hp, hdr_t, const char *sep); void http_VSL_log(const struct http *hp); void HTTP_Merge(struct worker *, struct objcore *, struct http *to); uint16_t HTTP_GetStatusPack(struct worker *, struct objcore *oc); int HTTP_IterHdrPack(struct worker *, struct objcore *, const char **); #define HTTP_FOREACH_PACK(wrk, oc, ptr) \ for ((ptr) = NULL; HTTP_IterHdrPack(wrk, oc, &(ptr));) const char *HTTP_GetHdrPack(struct worker *, struct objcore *, hdr_t); stream_close_t http_DoConnection(struct http *hp, stream_close_t sc_close); int http_IsFiltered(const struct http *hp, unsigned u, unsigned how); #define 
HTTPH_R_PASS (1 << 0) /* Request (c->b) in pass mode */ #define HTTPH_R_FETCH (1 << 1) /* Request (c->b) for fetch */ #define HTTPH_A_INS (1 << 2) /* Response (b->o) for insert */ #define HTTPH_A_PASS (1 << 3) /* Response (b->o) for pass */ #define HTTPH_C_SPECIFIC (1 << 4) /* Connection-specific */ #define HTTPH(a, b, c) extern char b[]; #include "tbl/http_headers.h" extern const char H__Status[]; extern const char H__Proto[]; extern const char H__Reason[]; // rfc7233,l,1207,1208 #define http_tok_eq(s1, s2) (!vct_casecmp(s1, s2)) #define http_tok_at(s1, s2, l) (!vct_caselencmp(s1, s2, l)) #define http_ctok_at(s, cs) (!vct_caselencmp(s, cs, sizeof(cs) - 1)) // rfc7230,l,1037,1038 #define http_scheme_at(str, tok) http_ctok_at(str, #tok "://") // rfc7230,l,1144,1144 // rfc7231,l,1156,1158 #define http_method_eq(str, tok) (!strcmp(str, #tok)) // rfc7230,l,1222,1222 // rfc7230,l,2848,2848 // rfc7231,l,3883,3885 // rfc7234,l,1339,1340 // rfc7234,l,1418,1419 #define http_hdr_eq(s1, s2) http_tok_eq(s1, s2) #define http_hdr_at(s1, s2, l) http_tok_at(s1, s2, l) // rfc7230,l,1952,1952 // rfc7231,l,604,604 #define http_coding_eq(str, tok) http_tok_eq(str, #tok) // rfc7231,l,1864,1864 #define http_expect_eq(str, tok) http_tok_eq(str, #tok) // rfc7233,l,1207,1208 #define http_range_at(str, tok, l) http_tok_at(str, #tok, l) /* cache_lck.c */ /* Internal functions, call only through macros below */ void Lck__Lock(struct lock *lck, const char *p, int l); void Lck__Unlock(struct lock *lck, const char *p, int l); int Lck__Trylock(struct lock *lck, const char *p, int l); void Lck__New(struct lock *lck, struct VSC_lck *, const char *); int Lck__Held(const struct lock *lck); int Lck__Owned(const struct lock *lck); extern pthread_mutexattr_t mtxattr_errorcheck; /* public interface: */ void Lck_Delete(struct lock *lck); int Lck_CondWaitUntil(pthread_cond_t *, struct lock *, vtim_real when); int Lck_CondWait(pthread_cond_t *, struct lock *); int Lck_CondWaitTimeout(pthread_cond_t *, struct lock *, vtim_dur timeout); #define Lck_New(a, b) Lck__New(a, b, #b) #define Lck_Lock(a) Lck__Lock(a, __func__, __LINE__) #define Lck_Unlock(a) Lck__Unlock(a, __func__, __LINE__) #define Lck_Trylock(a) Lck__Trylock(a, __func__, __LINE__) #define Lck_AssertHeld(a) \ do { \ assert(Lck__Held(a)); \ assert(Lck__Owned(a)); \ } while (0) struct VSC_lck *Lck_CreateClass(struct vsc_seg **, const char *); void Lck_DestroyClass(struct vsc_seg **); #define LOCK(nam) extern struct VSC_lck *lck_##nam; #include "tbl/locks.h" /* cache_obj.c */ int ObjHasAttr(struct worker *, struct objcore *, enum obj_attr); const void *ObjGetAttr(struct worker *, struct objcore *, enum obj_attr, ssize_t *len); typedef int objiterate_f(void *priv, unsigned flush, const void *ptr, ssize_t len); #define OBJ_ITER_FLUSH 0x01 #define OBJ_ITER_END 0x02 int ObjIterate(struct worker *, struct objcore *, void *priv, objiterate_f *func, int final); vxid_t ObjGetXID(struct worker *, struct objcore *); uint64_t ObjGetLen(struct worker *, struct objcore *); int ObjGetDouble(struct worker *, struct objcore *, enum obj_attr, double *); int ObjGetU64(struct worker *, struct objcore *, enum obj_attr, uint64_t *); int ObjCheckFlag(struct worker *, struct objcore *, enum obj_flags of); /* cache_req_body.c */ ssize_t VRB_Iterate(struct worker *, struct vsl_log *, struct req *, objiterate_f *func, void *priv); /* cache_session.c [SES] */ #define SESS_ATTR(UP, low, typ, len) \ int SES_Get_##low(const struct sess *sp, typ **dst); #include "tbl/sess_attr.h" const char 
*SES_Get_String_Attr(const struct sess *sp, enum sess_attr a); /* cache_shmlog.c */ void VSLv(enum VSL_tag_e tag, vxid_t vxid, const char *fmt, va_list va); void VSL(enum VSL_tag_e tag, vxid_t vxid, const char *fmt, ...) v_printflike_(3, 4); void VSLs(enum VSL_tag_e tag, vxid_t vxid, const struct strands *s); void VSLbv(struct vsl_log *, enum VSL_tag_e tag, const char *fmt, va_list va); void VSLb(struct vsl_log *, enum VSL_tag_e tag, const char *fmt, ...) v_printflike_(3, 4); void VSLbt(struct vsl_log *, enum VSL_tag_e tag, txt t); void VSLbs(struct vsl_log *, enum VSL_tag_e tag, const struct strands *s); void VSLb_ts(struct vsl_log *, const char *event, vtim_real first, vtim_real *pprev, vtim_real now); void VSLb_bin(struct vsl_log *, enum VSL_tag_e, ssize_t, const void*); int VSL_tag_is_masked(enum VSL_tag_e tag); static inline void VSLb_ts_req(struct req *req, const char *event, vtim_real now) { if (isnan(req->t_first) || req->t_first == 0.) req->t_first = req->t_prev = now; VSLb_ts(req->vsl, event, req->t_first, &req->t_prev, now); } static inline void VSLb_ts_busyobj(struct busyobj *bo, const char *event, vtim_real now) { if (isnan(bo->t_first) || bo->t_first == 0.) bo->t_first = bo->t_prev = now; VSLb_ts(bo->vsl, event, bo->t_first, &bo->t_prev, now); } /* cache_vcl.c */ const char *VCL_Name(const struct vcl *); /* cache_wrk.c */ typedef void *bgthread_t(struct worker *, void *priv); void WRK_BgThread(pthread_t *thr, const char *name, bgthread_t *func, void *priv); /* cache_ws.c */ void WS_Init(struct ws *ws, const char *id, void *space, unsigned len); unsigned WS_ReserveSize(struct ws *, unsigned); unsigned WS_ReserveAll(struct ws *); void WS_Release(struct ws *ws, unsigned bytes); void WS_ReleaseP(struct ws *ws, const char *ptr); void WS_Assert(const struct ws *ws); void WS_Reset(struct ws *ws, uintptr_t); void *WS_Alloc(struct ws *ws, unsigned bytes); void *WS_Copy(struct ws *ws, const void *str, int len); uintptr_t WS_Snapshot(struct ws *ws); int WS_Allocated(const struct ws *ws, const void *ptr, ssize_t len); unsigned WS_Dump(const struct ws *ws, char, size_t off, void *buf, size_t len); static inline void * WS_Reservation(const struct ws *ws) { WS_Assert(ws); AN(ws->r); AN(ws->f); return (ws->f); } static inline unsigned WS_ReservationSize(const struct ws *ws) { AN(ws->r); return (ws->r - ws->f); } static inline unsigned WS_ReserveLumps(struct ws *ws, size_t sz) { AN(sz); return (WS_ReserveAll(ws) / sz); } /* cache_ws_common.c */ void WS_MarkOverflow(struct ws *ws); int WS_Overflowed(const struct ws *ws); const char *WS_Printf(struct ws *ws, const char *fmt, ...) v_printflike_(2, 3); void WS_VSB_new(struct vsb *, struct ws *); char *WS_VSB_finish(struct vsb *, struct ws *, size_t *); /* WS utility */ #define WS_TASK_ALLOC_OBJ(ctx, ptr, magic) do { \ ptr = WS_Alloc((ctx)->ws, sizeof *(ptr)); \ if ((ptr) == NULL) \ VRT_fail(ctx, "Out of workspace for " #magic); \ else \ INIT_OBJ(ptr, magic); \ } while(0) /* cache_rfc2616.c */ void RFC2616_Ttl(struct busyobj *, vtim_real now, vtim_real *t_origin, float *ttl, float *grace, float *keep); unsigned RFC2616_Req_Gzip(const struct http *); int RFC2616_Do_Cond(const struct req *sp); void RFC2616_Weaken_Etag(struct http *hp); void RFC2616_Vary_AE(struct http *hp); const char * RFC2616_Strong_LM(const struct http *hp, struct worker *wrk, struct objcore *oc); /* * We want to cache the most recent timestamp in wrk->lastused to avoid * extra timestamps in cache_pool.c. 
Hide this detail with a macro */ #define W_TIM_real(w) ((w)->lastused = VTIM_real()) varnish-7.5.0/bin/varnishd/cache/cache_acceptor.c000066400000000000000000000512101457605730600217310ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * This source file has the various trickery surrounding the accept/listen * sockets. * */ #include "config.h" #include #include #include #include "cache_varnishd.h" #include "cache_transport.h" #include "cache_pool.h" #include "common/heritage.h" #include "vcli_serve.h" #include "vsa.h" #include "vtcp.h" #include "vtim.h" static pthread_t VCA_thread; static vtim_dur vca_pace = 0.0; static struct lock pace_mtx; static unsigned pool_accepting; static pthread_mutex_t shut_mtx = PTHREAD_MUTEX_INITIALIZER; struct wrk_accept { unsigned magic; #define WRK_ACCEPT_MAGIC 0x8c4b4d59 /* Accept stuff */ struct sockaddr_storage acceptaddr; socklen_t acceptaddrlen; int acceptsock; struct listen_sock *acceptlsock; }; struct poolsock { unsigned magic; #define POOLSOCK_MAGIC 0x1b0a2d38 VTAILQ_ENTRY(poolsock) list; struct listen_sock *lsock; struct pool_task task[1]; struct pool *pool; }; /*-------------------------------------------------------------------- * TCP options we want to control */ union sock_arg { struct linger lg; struct timeval tv; int i; }; static struct sock_opt { int level; int optname; const char *strname; unsigned mod; socklen_t sz; union sock_arg arg[1]; } sock_opts[] = { /* Note: Setting the mod counter to something not-zero is needed * to force the setsockopt() calls on startup */ #define SOCK_OPT(lvl, nam, typ) { lvl, nam, #nam, 1, sizeof(typ) }, SOCK_OPT(SOL_SOCKET, SO_LINGER, struct linger) SOCK_OPT(SOL_SOCKET, SO_KEEPALIVE, int) SOCK_OPT(SOL_SOCKET, SO_SNDTIMEO, struct timeval) SOCK_OPT(SOL_SOCKET, SO_RCVTIMEO, struct timeval) SOCK_OPT(IPPROTO_TCP, TCP_NODELAY, int) #if defined(HAVE_TCP_KEEP) SOCK_OPT(IPPROTO_TCP, TCP_KEEPIDLE, int) SOCK_OPT(IPPROTO_TCP, TCP_KEEPCNT, int) SOCK_OPT(IPPROTO_TCP, TCP_KEEPINTVL, int) #elif defined(HAVE_TCP_KEEPALIVE) SOCK_OPT(IPPROTO_TCP, TCP_KEEPALIVE, int) #endif #undef SOCK_OPT }; static const int n_sock_opts = sizeof sock_opts / 
sizeof sock_opts[0]; struct conn_heritage { unsigned sess_set; unsigned listen_mod; }; /*-------------------------------------------------------------------- * We want to get out of any kind of trouble-hit TCP connections as fast * as absolutely possible, so we set them LINGER disabled, so that even if * there are outstanding write data on the socket, a close(2) will return * immediately. */ static const struct linger disable_so_linger = { .l_onoff = 0, }; /* * We turn on keepalives by default to assist in detecting clients that have * hung up on connections returning from waitinglists */ static const unsigned enable_so_keepalive = 1; /* We disable Nagle's algorithm in favor of low latency setups. */ static const unsigned enable_tcp_nodelay = 1; /*-------------------------------------------------------------------- * lacking a better place, we put some generic periodic updates * into the vca_acct() loop which we are running anyway */ static void vca_periodic(vtim_real t0) { vtim_real now; now = VTIM_real(); VSC_C_main->uptime = (uint64_t)(now - t0); VTIM_postel = FEATURE(FEATURE_HTTP_DATE_POSTEL); } /*-------------------------------------------------------------------- * Some kernels have bugs/limitations with respect to which options are * inherited from the accept/listen socket, so we have to keep track of * which, if any, sockopts we have to set on the accepted socket. */ static int vca_sock_opt_init(void) { struct sock_opt *so; union sock_arg tmp; int n, chg = 0; size_t sz; memset(&tmp, 0, sizeof tmp); for (n = 0; n < n_sock_opts; n++) { so = &sock_opts[n]; #define SET_VAL(nm, so, fld, val) \ do { \ if (!strcmp(#nm, so->strname)) { \ assert(so->sz == sizeof so->arg->fld); \ so->arg->fld = (val); \ } \ } while (0) #define NEW_VAL(nm, so, fld, val) \ do { \ if (!strcmp(#nm, so->strname)) { \ sz = sizeof tmp.fld; \ assert(so->sz == sz); \ tmp.fld = (val); \ if (memcmp(&so->arg->fld, &(tmp.fld), sz)) { \ memcpy(&so->arg->fld, &(tmp.fld), sz); \ so->mod++; \ chg = 1; \ } \ } \ } while (0) SET_VAL(SO_LINGER, so, lg, disable_so_linger); SET_VAL(SO_KEEPALIVE, so, i, enable_so_keepalive); NEW_VAL(SO_SNDTIMEO, so, tv, VTIM_timeval_sock(cache_param->idle_send_timeout)); NEW_VAL(SO_RCVTIMEO, so, tv, VTIM_timeval_sock(cache_param->timeout_idle)); SET_VAL(TCP_NODELAY, so, i, enable_tcp_nodelay); #if defined(HAVE_TCP_KEEP) NEW_VAL(TCP_KEEPIDLE, so, i, (int)cache_param->tcp_keepalive_time); NEW_VAL(TCP_KEEPCNT, so, i, (int)cache_param->tcp_keepalive_probes); NEW_VAL(TCP_KEEPINTVL, so, i, (int)cache_param->tcp_keepalive_intvl); #elif defined(HAVE_TCP_KEEPALIVE) NEW_VAL(TCP_KEEPALIVE, so, i, (int)cache_param->tcp_keepalive_time); #endif } return (chg); } static void vca_sock_opt_test(const struct listen_sock *ls, const struct sess *sp) { struct conn_heritage *ch; struct sock_opt *so; union sock_arg tmp; socklen_t l; int i, n; CHECK_OBJ_NOTNULL(ls, LISTEN_SOCK_MAGIC); CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); for (n = 0; n < n_sock_opts; n++) { so = &sock_opts[n]; ch = &ls->conn_heritage[n]; if (ch->sess_set) { VSL(SLT_Debug, sp->vxid, "sockopt: Not testing nonhereditary %s for %s=%s", so->strname, ls->name, ls->endpoint); continue; } if (so->level == IPPROTO_TCP && ls->uds) { VSL(SLT_Debug, sp->vxid, "sockopt: Not testing incompatible %s for %s=%s", so->strname, ls->name, ls->endpoint); continue; } memset(&tmp, 0, sizeof tmp); l = so->sz; i = getsockopt(sp->fd, so->level, so->optname, &tmp, &l); if (i == 0 && memcmp(&tmp, so->arg, so->sz)) { VSL(SLT_Debug, sp->vxid, "sockopt: Test confirmed %s non heredity for 
%s=%s", so->strname, ls->name, ls->endpoint); ch->sess_set = 1; } if (i && errno != ENOPROTOOPT) VTCP_Assert(i); } } static void vca_sock_opt_set(const struct listen_sock *ls, const struct sess *sp) { struct conn_heritage *ch; struct sock_opt *so; vxid_t vxid; int n, sock; CHECK_OBJ_NOTNULL(ls, LISTEN_SOCK_MAGIC); if (sp != NULL) { CHECK_OBJ(sp, SESS_MAGIC); sock = sp->fd; vxid = sp->vxid; } else { sock = ls->sock; vxid = NO_VXID; } for (n = 0; n < n_sock_opts; n++) { so = &sock_opts[n]; ch = &ls->conn_heritage[n]; if (so->level == IPPROTO_TCP && ls->uds) { VSL(SLT_Debug, vxid, "sockopt: Not setting incompatible %s for %s=%s", so->strname, ls->name, ls->endpoint); continue; } if (sp == NULL && ch->listen_mod == so->mod) { VSL(SLT_Debug, vxid, "sockopt: Not setting unmodified %s for %s=%s", so->strname, ls->name, ls->endpoint); continue; } if (sp != NULL && !ch->sess_set) { VSL(SLT_Debug, sp->vxid, "sockopt: %s may be inherited for %s=%s", so->strname, ls->name, ls->endpoint); continue; } VSL(SLT_Debug, vxid, "sockopt: Setting %s for %s=%s", so->strname, ls->name, ls->endpoint); VTCP_Assert(setsockopt(sock, so->level, so->optname, so->arg, so->sz)); if (sp == NULL) ch->listen_mod = so->mod; } } /*-------------------------------------------------------------------- * If accept(2)'ing fails, we pace ourselves to relive any resource * shortage if possible. */ static void vca_pace_check(void) { vtim_dur p; if (vca_pace == 0.0) return; Lck_Lock(&pace_mtx); p = vca_pace; Lck_Unlock(&pace_mtx); if (p > 0.0) VTIM_sleep(p); } static void vca_pace_bad(void) { Lck_Lock(&pace_mtx); vca_pace += cache_param->acceptor_sleep_incr; if (vca_pace > cache_param->acceptor_sleep_max) vca_pace = cache_param->acceptor_sleep_max; Lck_Unlock(&pace_mtx); } static void vca_pace_good(void) { if (vca_pace == 0.0) return; Lck_Lock(&pace_mtx); vca_pace *= cache_param->acceptor_sleep_decay; if (vca_pace < cache_param->acceptor_sleep_incr) vca_pace = 0.0; Lck_Unlock(&pace_mtx); } /*-------------------------------------------------------------------- * The pool-task for a newly accepted session * * Called from assigned worker thread */ static void vca_mk_tcp(const struct wrk_accept *wa, struct sess *sp, char *laddr, char *lport, char *raddr, char *rport) { struct suckaddr *sa = NULL; ssize_t sz; AN(SES_Reserve_remote_addr(sp, &sa, &sz)); AN(sa); assert(sz == vsa_suckaddr_len); AN(VSA_Build(sa, &wa->acceptaddr, wa->acceptaddrlen)); sp->sattr[SA_CLIENT_ADDR] = sp->sattr[SA_REMOTE_ADDR]; VTCP_name(sa, raddr, VTCP_ADDRBUFSIZE, rport, VTCP_PORTBUFSIZE); AN(SES_Set_String_Attr(sp, SA_CLIENT_IP, raddr)); AN(SES_Set_String_Attr(sp, SA_CLIENT_PORT, rport)); AN(SES_Reserve_local_addr(sp, &sa, &sz)); AN(VSA_getsockname(sp->fd, sa, sz)); sp->sattr[SA_SERVER_ADDR] = sp->sattr[SA_LOCAL_ADDR]; VTCP_name(sa, laddr, VTCP_ADDRBUFSIZE, lport, VTCP_PORTBUFSIZE); } static void vca_mk_uds(struct wrk_accept *wa, struct sess *sp, char *laddr, char *lport, char *raddr, char *rport) { struct suckaddr *sa = NULL; ssize_t sz; (void) wa; AN(SES_Reserve_remote_addr(sp, &sa, &sz)); AN(sa); assert(sz == vsa_suckaddr_len); AZ(SES_Set_remote_addr(sp, bogo_ip)); sp->sattr[SA_CLIENT_ADDR] = sp->sattr[SA_REMOTE_ADDR]; sp->sattr[SA_LOCAL_ADDR] = sp->sattr[SA_REMOTE_ADDR]; sp->sattr[SA_SERVER_ADDR] = sp->sattr[SA_REMOTE_ADDR]; AN(SES_Set_String_Attr(sp, SA_CLIENT_IP, "0.0.0.0")); AN(SES_Set_String_Attr(sp, SA_CLIENT_PORT, "0")); strcpy(laddr, "0.0.0.0"); strcpy(raddr, "0.0.0.0"); strcpy(lport, "0"); strcpy(rport, "0"); } static void v_matchproto_(task_func_t) 
vca_make_session(struct worker *wrk, void *arg) { struct sess *sp; struct req *req; struct wrk_accept *wa; char laddr[VTCP_ADDRBUFSIZE]; char lport[VTCP_PORTBUFSIZE]; char raddr[VTCP_ADDRBUFSIZE]; char rport[VTCP_PORTBUFSIZE]; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CAST_OBJ_NOTNULL(wa, arg, WRK_ACCEPT_MAGIC); VTCP_blocking(wa->acceptsock); /* Turn accepted socket into a session */ AN(WS_Reservation(wrk->aws)); sp = SES_New(wrk->pool); CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); wrk->stats->s_sess++; sp->t_open = VTIM_real(); sp->t_idle = sp->t_open; sp->vxid = VXID_Get(wrk, VSL_CLIENTMARKER); sp->fd = wa->acceptsock; wa->acceptsock = -1; sp->listen_sock = wa->acceptlsock; assert((size_t)wa->acceptaddrlen <= vsa_suckaddr_len); if (wa->acceptlsock->uds) vca_mk_uds(wa, sp, laddr, lport, raddr, rport); else vca_mk_tcp(wa, sp, laddr, lport, raddr, rport); AN(wa->acceptlsock->name); VSL(SLT_Begin, sp->vxid, "sess 0 %s", wa->acceptlsock->transport->name); VSL(SLT_SessOpen, sp->vxid, "%s %s %s %s %s %.6f %d", raddr, rport, wa->acceptlsock->name, laddr, lport, sp->t_open, sp->fd); vca_pace_good(); wrk->stats->sess_conn++; if (wa->acceptlsock->test_heritage) { vca_sock_opt_test(wa->acceptlsock, sp); wa->acceptlsock->test_heritage = 0; } vca_sock_opt_set(wa->acceptlsock, sp); req = Req_New(sp); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); req->htc->rfd = &sp->fd; SES_SetTransport(wrk, sp, req, wa->acceptlsock->transport); WS_Release(wrk->aws, 0); } /*-------------------------------------------------------------------- * This function accepts on a single socket for a single thread pool. * * As long as we can stick the accepted connection to another thread * we do so, otherwise we put the socket back on the "BACK" pool * and handle the new connection ourselves. */ static void v_matchproto_(task_func_t) vca_accept_task(struct worker *wrk, void *arg) { struct wrk_accept wa; struct poolsock *ps; struct listen_sock *ls; int i; char laddr[VTCP_ADDRBUFSIZE]; char lport[VTCP_PORTBUFSIZE]; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CAST_OBJ_NOTNULL(ps, arg, POOLSOCK_MAGIC); ls = ps->lsock; CHECK_OBJ_NOTNULL(ls, LISTEN_SOCK_MAGIC); while (!pool_accepting) VTIM_sleep(.1); /* Dont hold on to (possibly) discarded VCLs */ if (wrk->wpriv->vcl != NULL) VCL_Rel(&wrk->wpriv->vcl); while (!ps->pool->die) { INIT_OBJ(&wa, WRK_ACCEPT_MAGIC); wa.acceptlsock = ls; vca_pace_check(); wa.acceptaddrlen = sizeof wa.acceptaddr; do { i = accept(ls->sock, (void*)&wa.acceptaddr, &wa.acceptaddrlen); } while (i < 0 && errno == EAGAIN && !ps->pool->die); if (i < 0 && ps->pool->die) break; if (i < 0 && ls->sock == -2) { /* Shut down in progress */ sleep(2); continue; } if (i < 0) { switch (errno) { case ECONNABORTED: wrk->stats->sess_fail_econnaborted++; break; case EINTR: wrk->stats->sess_fail_eintr++; break; case EMFILE: wrk->stats->sess_fail_emfile++; vca_pace_bad(); break; case EBADF: wrk->stats->sess_fail_ebadf++; vca_pace_bad(); break; case ENOBUFS: case ENOMEM: wrk->stats->sess_fail_enomem++; vca_pace_bad(); break; default: wrk->stats->sess_fail_other++; vca_pace_bad(); break; } i = errno; wrk->stats->sess_fail++; if (wa.acceptlsock->uds) { bstrcpy(laddr, "0.0.0.0"); bstrcpy(lport, "0"); } else { VTCP_myname(ls->sock, laddr, VTCP_ADDRBUFSIZE, lport, VTCP_PORTBUFSIZE); } VSL(SLT_SessError, NO_VXID, "%s %s %s %d %d \"%s\"", wa.acceptlsock->name, laddr, lport, ls->sock, i, VAS_errtxt(i)); (void)Pool_TrySumstat(wrk); continue; } wa.acceptsock = i; if (!Pool_Task_Arg(wrk, TASK_QUEUE_REQ, vca_make_session, &wa, sizeof wa)) { /* * We couldn't get another thread, 
so we will handle * the request in this worker thread, but first we * must reschedule the listening task so it will be * taken up by another thread again. */ if (!ps->pool->die) { AZ(Pool_Task(wrk->pool, ps->task, TASK_QUEUE_VCA)); return; } } if (!ps->pool->die && DO_DEBUG(DBG_SLOW_ACCEPTOR)) VTIM_sleep(2.0); } VSL(SLT_Debug, NO_VXID, "XXX Accept thread dies %p", ps); FREE_OBJ(ps); } /*-------------------------------------------------------------------- * Called when a worker and attached thread pool is created, to * allocate the tasks which will listen to sockets for that pool. */ void VCA_NewPool(struct pool *pp) { struct listen_sock *ls; struct poolsock *ps; VTAILQ_FOREACH(ls, &heritage.socks, list) { ALLOC_OBJ(ps, POOLSOCK_MAGIC); AN(ps); ps->lsock = ls; ps->task->func = vca_accept_task; ps->task->priv = ps; ps->pool = pp; VTAILQ_INSERT_TAIL(&pp->poolsocks, ps, list); AZ(Pool_Task(pp, ps->task, TASK_QUEUE_VCA)); } } void VCA_DestroyPool(struct pool *pp) { struct poolsock *ps; while (!VTAILQ_EMPTY(&pp->poolsocks)) { ps = VTAILQ_FIRST(&pp->poolsocks); VTAILQ_REMOVE(&pp->poolsocks, ps, list); } } /*--------------------------------------------------------------------*/ static void * v_matchproto_() vca_acct(void *arg) { struct listen_sock *ls; vtim_real t0; // XXX Actually a mis-nomer now because the accept happens in a pool // thread. Rename to accept-nanny or so? THR_SetName("cache-acceptor"); THR_Init(); (void)arg; t0 = VTIM_real(); vca_periodic(t0); pool_accepting = 1; while (1) { (void)sleep(1); if (vca_sock_opt_init()) { PTOK(pthread_mutex_lock(&shut_mtx)); VTAILQ_FOREACH(ls, &heritage.socks, list) { if (ls->sock == -2) continue; // VCA_Shutdown assert (ls->sock > 0); vca_sock_opt_set(ls, NULL); /* If one of the options on a socket has * changed, also force a retest of whether * the values are inherited to the * accepted sockets. This should then * catch any false positives from previous * tests that could happen if the set * value of an option happened to just be * the OS default for that value, and * wasn't actually inherited from the * listening socket. 
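				 *
				 * A concrete sketch of such a false positive:
				 * if cache_param->tcp_keepalive_time happens
				 * to match the kernel's default TCP_KEEPIDLE,
				 * the first test cannot tell inheritance from
				 * the OS default; only after the parameter is
				 * changed can heredity actually be observed
				 * on newly accepted sockets.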
*/ ls->test_heritage = 1; } PTOK(pthread_mutex_unlock(&shut_mtx)); } vca_periodic(t0); } NEEDLESS(return (NULL)); } /*--------------------------------------------------------------------*/ static void v_matchproto_(cli_func_t) ccf_start(struct cli *cli, const char * const *av, void *priv) { struct listen_sock *ls; (void)cli; (void)av; (void)priv; (void)vca_sock_opt_init(); VTAILQ_FOREACH(ls, &heritage.socks, list) { CHECK_OBJ_NOTNULL(ls->transport, TRANSPORT_MAGIC); assert (ls->sock > 0); // We know where stdin is if (cache_param->tcp_fastopen && VTCP_fastopen(ls->sock, cache_param->listen_depth)) VSL(SLT_Error, NO_VXID, "Kernel TCP Fast Open: sock=%d, errno=%d %s", ls->sock, errno, VAS_errtxt(errno)); if (listen(ls->sock, cache_param->listen_depth)) { VCLI_SetResult(cli, CLIS_CANT); VCLI_Out(cli, "Listen failed on socket '%s': %s", ls->endpoint, VAS_errtxt(errno)); return; } AZ(ls->conn_heritage); ls->conn_heritage = calloc(n_sock_opts, sizeof *ls->conn_heritage); AN(ls->conn_heritage); ls->test_heritage = 1; vca_sock_opt_set(ls, NULL); if (cache_param->accept_filter && VTCP_filter_http(ls->sock)) VSL(SLT_Error, NO_VXID, "Kernel filtering: sock=%d, errno=%d %s", ls->sock, errno, VAS_errtxt(errno)); } PTOK(pthread_create(&VCA_thread, NULL, vca_acct, NULL)); } /*--------------------------------------------------------------------*/ static void v_matchproto_(cli_func_t) ccf_listen_address(struct cli *cli, const char * const *av, void *priv) { struct listen_sock *ls; char h[VTCP_ADDRBUFSIZE], p[VTCP_PORTBUFSIZE]; (void)cli; (void)av; (void)priv; /* * This CLI command is primarily used by varnishtest. Don't * respond until listen(2) has been called, in order to avoid * a race where varnishtest::client would attempt to connect(2) * before listen(2) has been called. 
*/ while (!pool_accepting) VTIM_sleep(.1); PTOK(pthread_mutex_lock(&shut_mtx)); VTAILQ_FOREACH(ls, &heritage.socks, list) { if (!ls->uds) { VTCP_myname(ls->sock, h, sizeof h, p, sizeof p); VCLI_Out(cli, "%s %s %s\n", ls->name, h, p); } else VCLI_Out(cli, "%s %s -\n", ls->name, ls->endpoint); } PTOK(pthread_mutex_unlock(&shut_mtx)); } /*--------------------------------------------------------------------*/ static struct cli_proto vca_cmds[] = { { CLICMD_SERVER_START, "", ccf_start }, { CLICMD_DEBUG_LISTEN_ADDRESS, "d", ccf_listen_address }, { NULL } }; void VCA_Init(void) { CLI_AddFuncs(vca_cmds); Lck_New(&pace_mtx, lck_vcapace); } void VCA_Shutdown(void) { struct listen_sock *ls; int i; PTOK(pthread_mutex_lock(&shut_mtx)); VTAILQ_FOREACH(ls, &heritage.socks, list) { i = ls->sock; ls->sock = -2; (void)close(i); } PTOK(pthread_mutex_unlock(&shut_mtx)); } /*-------------------------------------------------------------------- * Transport protocol registration * */ static VTAILQ_HEAD(,transport) transports = VTAILQ_HEAD_INITIALIZER(transports); static uint16_t next_xport; static void XPORT_Register(struct transport *xp) { CHECK_OBJ_NOTNULL(xp, TRANSPORT_MAGIC); AZ(xp->number); xp->number = ++next_xport; VTAILQ_INSERT_TAIL(&transports, xp, list); } void XPORT_Init(void) { ASSERT_MGT(); #define TRANSPORT_MACRO(name) XPORT_Register(&name##_transport); TRANSPORTS #undef TRANSPORT_MACRO } const struct transport * XPORT_Find(const char *name) { const struct transport *xp; ASSERT_MGT(); VTAILQ_FOREACH(xp, &transports, list) if (xp->proto_ident != NULL && !strcasecmp(xp->proto_ident, name)) return (xp); return (NULL); } const struct transport * XPORT_ByNumber(uint16_t no) { const struct transport *xp; VTAILQ_FOREACH(xp, &transports, list) if (xp->number == no) return (xp); return (NULL); } varnish-7.5.0/bin/varnishd/cache/cache_backend.c000066400000000000000000000470751457605730600215360ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * The director implementation for VCL backends. 
* */ #include "config.h" #include #include "cache_varnishd.h" #include "cache_director.h" #include "vtcp.h" #include "vtim.h" #include "vsa.h" #include "cache_backend.h" #include "cache_conn_pool.h" #include "cache_transport.h" #include "cache_vcl.h" #include "http1/cache_http1.h" #include "proxy/cache_proxy.h" #include "VSC_vbe.h" /*--------------------------------------------------------------------*/ static const char * const vbe_proto_ident = "HTTP Backend"; static struct lock backends_mtx; /*--------------------------------------------------------------------*/ void VBE_Connect_Error(struct VSC_vbe *vsc, int err) { switch(err) { case 0: /* * This is kind of brittle, but zero is the only * value of errno we can trust to have no meaning. */ vsc->helddown++; break; case EACCES: case EPERM: vsc->fail_eacces++; break; case EADDRNOTAVAIL: vsc->fail_eaddrnotavail++; break; case ECONNREFUSED: vsc->fail_econnrefused++; break; case ENETUNREACH: vsc->fail_enetunreach++; break; case ETIMEDOUT: vsc->fail_etimedout++; break; default: vsc->fail_other++; } } /*--------------------------------------------------------------------*/ #define FIND_TMO(tmx, dst, bo, be) \ do { \ CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); \ dst = bo->tmx; \ if (isnan(dst) && be->tmx >= 0.0) \ dst = be->tmx; \ if (isnan(dst)) \ dst = cache_param->tmx; \ } while (0) /*-------------------------------------------------------------------- * Get a connection to the backend * * note: wrk is a separate argument because it differs for pipe vs. fetch */ static struct pfd * vbe_dir_getfd(VRT_CTX, struct worker *wrk, VCL_BACKEND dir, struct backend *bp, unsigned force_fresh) { struct busyobj *bo; struct pfd *pfd; int *fdp, err; vtim_dur tmod; char abuf1[VTCP_ADDRBUFSIZE], abuf2[VTCP_ADDRBUFSIZE]; char pbuf1[VTCP_PORTBUFSIZE], pbuf2[VTCP_PORTBUFSIZE]; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); bo = ctx->bo; CHECK_OBJ_NOTNULL(bp, BACKEND_MAGIC); AN(bp->vsc); if (!VRT_Healthy(ctx, dir, NULL)) { VSLb(bo->vsl, SLT_FetchError, "backend %s: unhealthy", VRT_BACKEND_string(dir)); bp->vsc->unhealthy++; VSC_C_main->backend_unhealthy++; return (NULL); } if (bp->max_connections > 0 && bp->n_conn >= bp->max_connections) { VSLb(bo->vsl, SLT_FetchError, "backend %s: busy", VRT_BACKEND_string(dir)); bp->vsc->busy++; VSC_C_main->backend_busy++; return (NULL); } AZ(bo->htc); bo->htc = WS_Alloc(bo->ws, sizeof *bo->htc); if (bo->htc == NULL) { VSLb(bo->vsl, SLT_FetchError, "out of workspace"); /* XXX: counter ? */ return (NULL); } bo->htc->doclose = SC_NULL; CHECK_OBJ_NOTNULL(bo->htc->doclose, STREAM_CLOSE_MAGIC); FIND_TMO(connect_timeout, tmod, bo, bp); pfd = VCP_Get(bp->conn_pool, tmod, wrk, force_fresh, &err); if (pfd == NULL) { Lck_Lock(bp->director->mtx); VBE_Connect_Error(bp->vsc, err); Lck_Unlock(bp->director->mtx); VSLb(bo->vsl, SLT_FetchError, "backend %s: fail errno %d (%s)", VRT_BACKEND_string(dir), err, VAS_errtxt(err)); VSC_C_main->backend_fail++; bo->htc = NULL; return (NULL); } VSLb_ts_busyobj(bo, "Connected", W_TIM_real(wrk)); fdp = PFD_Fd(pfd); AN(fdp); assert(*fdp >= 0); Lck_Lock(bp->director->mtx); bp->n_conn++; bp->vsc->conn++; bp->vsc->req++; Lck_Unlock(bp->director->mtx); CHECK_OBJ_NOTNULL(bo->htc->doclose, STREAM_CLOSE_MAGIC); err = 0; if (bp->proxy_header != 0) err += VPX_Send_Proxy(*fdp, bp->proxy_header, bo->sp); if (err < 0) { VSLb(bo->vsl, SLT_FetchError, "backend %s: proxy write errno %d (%s)", VRT_BACKEND_string(dir), errno, VAS_errtxt(errno)); // account as if connect failed - good idea? 
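		/* The PROXY preamble could not be written; count it as a
		 * connect failure and undo the accounting done above:
		 * close the pfd and drop the per-backend conn/req
		 * counters before reporting failure to the caller. */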
VSC_C_main->backend_fail++; bo->htc = NULL; VCP_Close(&pfd); AZ(pfd); Lck_Lock(bp->director->mtx); bp->n_conn--; bp->vsc->conn--; bp->vsc->req--; Lck_Unlock(bp->director->mtx); return (NULL); } bo->acct.bereq_hdrbytes += err; PFD_LocalName(pfd, abuf1, sizeof abuf1, pbuf1, sizeof pbuf1); PFD_RemoteName(pfd, abuf2, sizeof abuf2, pbuf2, sizeof pbuf2); VSLb(bo->vsl, SLT_BackendOpen, "%d %s %s %s %s %s %s", *fdp, VRT_BACKEND_string(dir), abuf2, pbuf2, abuf1, pbuf1, PFD_State(pfd) == PFD_STATE_STOLEN ? "reuse" : "connect"); INIT_OBJ(bo->htc, HTTP_CONN_MAGIC); bo->htc->priv = pfd; bo->htc->rfd = fdp; bo->htc->doclose = SC_NULL; FIND_TMO(first_byte_timeout, bo->htc->first_byte_timeout, bo, bp); FIND_TMO(between_bytes_timeout, bo->htc->between_bytes_timeout, bo, bp); return (pfd); } static void v_matchproto_(vdi_finish_f) vbe_dir_finish(VRT_CTX, VCL_BACKEND d) { struct backend *bp; struct busyobj *bo; struct pfd *pfd; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); bo = ctx->bo; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); CAST_OBJ_NOTNULL(bp, d->priv, BACKEND_MAGIC); CHECK_OBJ_NOTNULL(bo->htc, HTTP_CONN_MAGIC); CHECK_OBJ_NOTNULL(bo->htc->doclose, STREAM_CLOSE_MAGIC); pfd = bo->htc->priv; bo->htc->priv = NULL; if (bo->htc->doclose != SC_NULL || bp->proxy_header != 0) { VSLb(bo->vsl, SLT_BackendClose, "%d %s close %s", *PFD_Fd(pfd), VRT_BACKEND_string(d), bo->htc->doclose->name); VCP_Close(&pfd); AZ(pfd); Lck_Lock(bp->director->mtx); } else { assert (PFD_State(pfd) == PFD_STATE_USED); VSLb(bo->vsl, SLT_BackendClose, "%d %s recycle", *PFD_Fd(pfd), VRT_BACKEND_string(d)); Lck_Lock(bp->director->mtx); VSC_C_main->backend_recycle++; VCP_Recycle(bo->wrk, &pfd); } assert(bp->n_conn > 0); bp->n_conn--; AN(bp->vsc); bp->vsc->conn--; #define ACCT(foo) bp->vsc->foo += bo->acct.foo; #include "tbl/acct_fields_bereq.h" Lck_Unlock(bp->director->mtx); bo->htc = NULL; } static int v_matchproto_(vdi_gethdrs_f) vbe_dir_gethdrs(VRT_CTX, VCL_BACKEND d) { int i, extrachance = 1; struct backend *bp; struct pfd *pfd; struct busyobj *bo; struct worker *wrk; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); bo = ctx->bo; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); CHECK_OBJ_NOTNULL(bo->bereq, HTTP_MAGIC); if (bo->htc != NULL) CHECK_OBJ_NOTNULL(bo->htc->doclose, STREAM_CLOSE_MAGIC); wrk = ctx->bo->wrk; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CAST_OBJ_NOTNULL(bp, d->priv, BACKEND_MAGIC); /* * Now that we know our backend, we can set a default Host: * header if one is necessary. This cannot be done in the VCL * because the backend may be chosen by a director. */ if (!http_GetHdr(bo->bereq, H_Host, NULL) && bp->hosthdr != NULL) http_PrintfHeader(bo->bereq, "Host: %s", bp->hosthdr); do { if (bo->htc != NULL) CHECK_OBJ_NOTNULL(bo->htc->doclose, STREAM_CLOSE_MAGIC); pfd = vbe_dir_getfd(ctx, wrk, d, bp, extrachance == 0 ? 
1 : 0); if (pfd == NULL) return (-1); AN(bo->htc); CHECK_OBJ_NOTNULL(bo->htc->doclose, STREAM_CLOSE_MAGIC); if (PFD_State(pfd) != PFD_STATE_STOLEN) extrachance = 0; i = V1F_SendReq(wrk, bo, &bo->acct.bereq_hdrbytes, &bo->acct.bereq_bodybytes); if (i == 0 && PFD_State(pfd) != PFD_STATE_USED) { if (VCP_Wait(wrk, pfd, VTIM_real() + bo->htc->first_byte_timeout) != 0) { bo->htc->doclose = SC_RX_TIMEOUT; VSLb(bo->vsl, SLT_FetchError, "first byte timeout (reused connection)"); extrachance = 0; } } if (bo->htc->doclose == SC_NULL) { assert(PFD_State(pfd) == PFD_STATE_USED); if (i == 0) i = V1F_FetchRespHdr(bo); if (i == 0) { AN(bo->htc->priv); http_VSL_log(bo->beresp); return (0); } } CHECK_OBJ_NOTNULL(bo->htc->doclose, STREAM_CLOSE_MAGIC); /* * If we recycled a backend connection, there is a finite chance * that the backend closed it before we got the bereq to it. * In that case do a single automatic retry if req.body allows. */ vbe_dir_finish(ctx, d); AZ(bo->htc); if (i < 0 || extrachance == 0) break; if (bo->no_retry != NULL) break; VSC_C_main->backend_retry++; } while (extrachance--); return (-1); } static VCL_IP v_matchproto_(vdi_getip_f) vbe_dir_getip(VRT_CTX, VCL_BACKEND d) { struct pfd *pfd; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo->htc, HTTP_CONN_MAGIC); pfd = ctx->bo->htc->priv; return (VCP_GetIp(pfd)); } /*--------------------------------------------------------------------*/ static stream_close_t v_matchproto_(vdi_http1pipe_f) vbe_dir_http1pipe(VRT_CTX, VCL_BACKEND d) { int i; stream_close_t retval; struct backend *bp; struct v1p_acct v1a; struct pfd *pfd; vtim_real deadline; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); CAST_OBJ_NOTNULL(bp, d->priv, BACKEND_MAGIC); memset(&v1a, 0, sizeof v1a); /* This is hackish... */ v1a.req = ctx->req->acct.req_hdrbytes; ctx->req->acct.req_hdrbytes = 0; ctx->req->res_mode = RES_PIPE; retval = SC_TX_ERROR; pfd = vbe_dir_getfd(ctx, ctx->req->wrk, d, bp, 0); if (pfd != NULL) { CHECK_OBJ_NOTNULL(ctx->bo->htc, HTTP_CONN_MAGIC); i = V1F_SendReq(ctx->req->wrk, ctx->bo, &v1a.bereq, &v1a.out); VSLb_ts_req(ctx->req, "Pipe", W_TIM_real(ctx->req->wrk)); if (i == 0) { deadline = ctx->bo->task_deadline; if (isnan(deadline)) deadline = cache_param->pipe_task_deadline; if (deadline > 0.) 
deadline += ctx->req->sp->t_idle; retval = V1P_Process(ctx->req, *PFD_Fd(pfd), &v1a, deadline); } VSLb_ts_req(ctx->req, "PipeSess", W_TIM_real(ctx->req->wrk)); ctx->bo->htc->doclose = retval; vbe_dir_finish(ctx, d); } V1P_Charge(ctx->req, &v1a, bp->vsc); CHECK_OBJ_NOTNULL(retval, STREAM_CLOSE_MAGIC); return (retval); } /*--------------------------------------------------------------------*/ static void vbe_dir_event(const struct director *d, enum vcl_event_e ev) { struct backend *bp; CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); CAST_OBJ_NOTNULL(bp, d->priv, BACKEND_MAGIC); if (ev == VCL_EVENT_WARM) { VRT_VSC_Reveal(bp->vsc_seg); if (bp->probe != NULL) VBP_Control(bp, 1); } else if (ev == VCL_EVENT_COLD) { if (bp->probe != NULL) VBP_Control(bp, 0); VRT_VSC_Hide(bp->vsc_seg); } else if (ev == VCL_EVENT_DISCARD) { VRT_DelDirector(&bp->director); } } /*---------------------------------------------------------------------*/ static void vbe_free(struct backend *be) { CHECK_OBJ_NOTNULL(be, BACKEND_MAGIC); if (be->probe != NULL) VBP_Remove(be); VSC_vbe_Destroy(&be->vsc_seg); Lck_Lock(&backends_mtx); VSC_C_main->n_backend--; Lck_Unlock(&backends_mtx); VCP_Rel(&be->conn_pool); #define DA(x) do { if (be->x != NULL) free(be->x); } while (0) #define DN(x) /**/ VRT_BACKEND_HANDLE(); #undef DA #undef DN free(be->endpoint); FREE_OBJ(be); } static void v_matchproto_(vdi_destroy_f) vbe_destroy(const struct director *d) { struct backend *be; CAST_OBJ_NOTNULL(be, d->priv, BACKEND_MAGIC); vbe_free(be); } /*--------------------------------------------------------------------*/ static void vbe_panic(const struct director *d, struct vsb *vsb) { struct backend *bp; CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); CAST_OBJ_NOTNULL(bp, d->priv, BACKEND_MAGIC); VCP_Panic(vsb, bp->conn_pool); VSB_printf(vsb, "hosthdr = %s,\n", bp->hosthdr); VSB_printf(vsb, "n_conn = %u,\n", bp->n_conn); } /*-------------------------------------------------------------------- */ static void v_matchproto_(vdi_list_f) vbe_list(VRT_CTX, const struct director *d, struct vsb *vsb, int pflag, int jflag) { struct backend *bp; (void)ctx; CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); CAST_OBJ_NOTNULL(bp, d->priv, BACKEND_MAGIC); if (bp->probe != NULL) VBP_Status(vsb, bp, pflag, jflag); else if (jflag && pflag) VSB_cat(vsb, "{},\n"); else if (jflag) VSB_cat(vsb, "[0, 0, \"healthy\"]"); else if (pflag) return; else VSB_cat(vsb, "0/0\thealthy"); } /*-------------------------------------------------------------------- */ static VCL_BOOL v_matchproto_(vdi_healthy_f) vbe_healthy(VRT_CTX, VCL_BACKEND d, VCL_TIME *t) { struct backend *bp; (void)ctx; CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); CAST_OBJ_NOTNULL(bp, d->priv, BACKEND_MAGIC); if (t != NULL) *t = bp->changed; return (!bp->sick); } /*-------------------------------------------------------------------- */ static const struct vdi_methods vbe_methods[1] = {{ .magic = VDI_METHODS_MAGIC, .type = "backend", .http1pipe = vbe_dir_http1pipe, .gethdrs = vbe_dir_gethdrs, .getip = vbe_dir_getip, .finish = vbe_dir_finish, .event = vbe_dir_event, .destroy = vbe_destroy, .panic = vbe_panic, .list = vbe_list, .healthy = vbe_healthy }}; static const struct vdi_methods vbe_methods_noprobe[1] = {{ .magic = VDI_METHODS_MAGIC, .type = "backend", .http1pipe = vbe_dir_http1pipe, .gethdrs = vbe_dir_gethdrs, .getip = vbe_dir_getip, .finish = vbe_dir_finish, .event = vbe_dir_event, .destroy = vbe_destroy, .panic = vbe_panic, .list = vbe_list }}; /*-------------------------------------------------------------------- * Create a new static or dynamic 
director::backend instance. */ size_t VRT_backend_vsm_need(VRT_CTX) { (void)ctx; return (VRT_VSC_Overhead(VSC_vbe_size)); } /* * The new_backend via parameter is a VCL_BACKEND, but we need a (struct * backend) * * For now, we resolve it when creating the backend, which imples no redundancy * / load balancing across the via director if it is more than a simple backend. */ static const struct backend * via_resolve(VRT_CTX, const struct vrt_endpoint *vep, VCL_BACKEND via) { const struct backend *viabe = NULL; CHECK_OBJ_NOTNULL(vep, VRT_ENDPOINT_MAGIC); CHECK_OBJ_NOTNULL(via, DIRECTOR_MAGIC); if (vep->uds_path) { VRT_fail(ctx, "Via is only supported for IP addresses"); return (NULL); } via = VRT_DirectorResolve(ctx, via); if (via == NULL) { VRT_fail(ctx, "Via resolution failed"); return (NULL); } CHECK_OBJ(via, DIRECTOR_MAGIC); CHECK_OBJ_NOTNULL(via->vdir, VCLDIR_MAGIC); if (via->vdir->methods == vbe_methods || via->vdir->methods == vbe_methods_noprobe) CAST_OBJ_NOTNULL(viabe, via->priv, BACKEND_MAGIC); if (viabe == NULL) VRT_fail(ctx, "Via does not resolve to a backend"); return (viabe); } /* * construct a new endpoint identical to vep with sa in a proxy header */ static struct vrt_endpoint * via_endpoint(const struct vrt_endpoint *vep, const struct suckaddr *sa, const char *auth) { struct vsb *preamble; struct vrt_blob blob[1]; struct vrt_endpoint *nvep, *ret; const struct suckaddr *client_bogo; CHECK_OBJ_NOTNULL(vep, VRT_ENDPOINT_MAGIC); AN(sa); nvep = VRT_Endpoint_Clone(vep); CHECK_OBJ_NOTNULL(nvep, VRT_ENDPOINT_MAGIC); if (VSA_Get_Proto(sa) == AF_INET6) client_bogo = bogo_ip6; else client_bogo = bogo_ip; preamble = VSB_new_auto(); AN(preamble); VPX_Format_Proxy(preamble, 2, client_bogo, sa, auth); blob->blob = VSB_data(preamble); blob->len = VSB_len(preamble); nvep->preamble = blob; ret = VRT_Endpoint_Clone(nvep); CHECK_OBJ_NOTNULL(ret, VRT_ENDPOINT_MAGIC); VSB_destroy(&preamble); FREE_OBJ(nvep); return (ret); } VCL_BACKEND VRT_new_backend_clustered(VRT_CTX, struct vsmw_cluster *vc, const struct vrt_backend *vrt, VCL_BACKEND via) { struct backend *be; struct vcl *vcl; const struct vrt_backend_probe *vbp; const struct vrt_endpoint *vep; const struct vdi_methods *m; const struct suckaddr *sa = NULL; char abuf[VTCP_ADDRBUFSIZE]; const struct backend *viabe = NULL; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(vrt, VRT_BACKEND_MAGIC); vep = vrt->endpoint; CHECK_OBJ_NOTNULL(vep, VRT_ENDPOINT_MAGIC); if (vep->uds_path == NULL) { if (vep->ipv4 == NULL && vep->ipv6 == NULL) { VRT_fail(ctx, "%s: Illegal IP", __func__); return (NULL); } } else { assert(vep->ipv4== NULL && vep->ipv6== NULL); } if (via != NULL) { viabe = via_resolve(ctx, vep, via); if (viabe == NULL) return (NULL); } vcl = ctx->vcl; AN(vcl); AN(vrt->vcl_name); /* Create new backend */ ALLOC_OBJ(be, BACKEND_MAGIC); if (be == NULL) return (NULL); #define DA(x) do { if (vrt->x != NULL) REPLACE((be->x), (vrt->x)); } while (0) #define DN(x) do { be->x = vrt->x; } while (0) VRT_BACKEND_HANDLE(); #undef DA #undef DN if (viabe || be->hosthdr == NULL) { if (vrt->endpoint->uds_path != NULL) sa = bogo_ip; else if (cache_param->prefer_ipv6 && vep->ipv6 != NULL) sa = vep->ipv6; else if (vep->ipv4!= NULL) sa = vep->ipv4; else sa = vep->ipv6; if (be->hosthdr == NULL) { VTCP_name(sa, abuf, sizeof abuf, NULL, 0); REPLACE(be->hosthdr, abuf); } } be->vsc = VSC_vbe_New(vc, &be->vsc_seg, "%s.%s", VCL_Name(ctx->vcl), vrt->vcl_name); AN(be->vsc); if (! 
vcl->temp->is_warm) VRT_VSC_Hide(be->vsc_seg); if (viabe) vep = be->endpoint = via_endpoint(viabe->endpoint, sa, be->authority); else vep = be->endpoint = VRT_Endpoint_Clone(vep); AN(vep); be->conn_pool = VCP_Ref(vep, vbe_proto_ident); AN(be->conn_pool); vbp = vrt->probe; if (vbp == NULL) vbp = VCL_DefaultProbe(vcl); if (vbp != NULL) { VBP_Insert(be, vbp, be->conn_pool); m = vbe_methods; } else { be->sick = 0; m = vbe_methods_noprobe; } Lck_Lock(&backends_mtx); VSC_C_main->n_backend++; Lck_Unlock(&backends_mtx); be->director = VRT_AddDirector(ctx, m, be, "%s", vrt->vcl_name); if (be->director == NULL) { vbe_free(be); return (NULL); } /* for cold VCL, update initial director state */ if (be->probe != NULL) VBP_Update_Backend(be->probe); return (be->director); } VCL_BACKEND VRT_new_backend(VRT_CTX, const struct vrt_backend *vrt, VCL_BACKEND via) { CHECK_OBJ_NOTNULL(vrt, VRT_BACKEND_MAGIC); CHECK_OBJ_NOTNULL(vrt->endpoint, VRT_ENDPOINT_MAGIC); return (VRT_new_backend_clustered(ctx, NULL, vrt, via)); } /*-------------------------------------------------------------------- * Delete a dynamic director::backend instance. Undeleted dynamic and * static instances are GC'ed when the VCL is discarded (in cache_vcl.c) */ void VRT_delete_backend(VRT_CTX, VCL_BACKEND *dp) { (void)ctx; CHECK_OBJ_NOTNULL(*dp, DIRECTOR_MAGIC); VRT_DisableDirector(*dp); VRT_Assign_Backend(dp, NULL); } /*---------------------------------------------------------------------*/ void VBE_InitCfg(void) { Lck_New(&backends_mtx, lck_vbe); } varnish-7.5.0/bin/varnishd/cache/cache_backend.h000066400000000000000000000054561457605730600215400ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Backend and APIs * * A backend ("VBE") is a director which talks HTTP over TCP. * * As you'll notice the terminology is a bit muddled here, but we try to * keep it clean on the user-facing side, where a "director" is always * a "pick a backend/director" functionality, and a "backend" is whatever * satisfies the actual request in the end. 
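 * Each struct backend (below) is wrapped in a director (its "director"
 * member), which is the handle VCL code and the rest of the cache hold.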
* */ struct vbp_target; struct vrt_ctx; struct vrt_backend_probe; struct conn_pool; /*-------------------------------------------------------------------- * An instance of a backend from a VCL program. */ struct backend { unsigned magic; #define BACKEND_MAGIC 0x64c4c7c6 unsigned n_conn; struct vrt_endpoint *endpoint; VRT_BACKEND_FIELDS() unsigned sick; vtim_real changed; struct vbp_target *probe; struct vsc_seg *vsc_seg; struct VSC_vbe *vsc; struct conn_pool *conn_pool; VCL_BACKEND director; }; /*--------------------------------------------------------------------- * Prototypes */ /* cache_backend_probe.c */ void VBP_Update_Backend(struct vbp_target *vt); void VBP_Insert(struct backend *b, struct vrt_backend_probe const *p, struct conn_pool *); void VBP_Remove(struct backend *b); void VBP_Control(const struct backend *b, int stop); void VBP_Status(struct vsb *, const struct backend *, int details, int json); void VBE_Connect_Error(struct VSC_vbe *, int err); varnish-7.5.0/bin/varnishd/cache/cache_backend_probe.c000066400000000000000000000415361457605730600227210ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Poll backends for collection of health statistics * * We co-opt threads from the worker pool for probing the backends, * but we want to avoid a potentially messy cleanup operation when we * retire the backend, so the thread owns the health information, which * the backend references, rather than the other way around. 
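 * Concretely: vbp_target entries sit in a binary heap ordered by their
 * next due time; the "backend-poller" thread pops due targets and hands
 * each poke to the worker pool as a vbp_task.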
* */ #include "config.h" #include #include #include #include "cache_varnishd.h" #include "vbh.h" #include "vcli_serve.h" #include "vsa.h" #include "vtcp.h" #include "vtim.h" #include "cache_backend.h" #include "cache_conn_pool.h" #include "VSC_vbe.h" /* Default averaging rate, we want something pretty responsive */ #define AVG_RATE 4 struct vbp_target { unsigned magic; #define VBP_TARGET_MAGIC 0x6b7cb656 VRT_BACKEND_PROBE_FIELDS() struct backend *backend; struct conn_pool *conn_pool; char *req; int req_len; char resp_buf[128]; unsigned good; /* Collected statistics */ #define BITMAP(n, c, t, b) uintmax_t n; #include "tbl/backend_poll.h" vtim_dur last; vtim_dur avg; double rate; vtim_real due; int running; int heap_idx; struct pool_task task[1]; }; static struct lock vbp_mtx; static pthread_cond_t vbp_cond; static struct vbh *vbp_heap; static const unsigned char vbp_proxy_local[] = { 0x0d, 0x0a, 0x0d, 0x0a, 0x00, 0x0d, 0x0a, 0x51, 0x55, 0x49, 0x54, 0x0a, 0x20, 0x00, 0x00, 0x00, }; /*--------------------------------------------------------------------*/ static void vbp_delete(struct vbp_target *vt) { CHECK_OBJ_NOTNULL(vt, VBP_TARGET_MAGIC); #define DN(x) /**/ VRT_BACKEND_PROBE_HANDLE(); #undef DN VCP_Rel(&vt->conn_pool); free(vt->req); FREE_OBJ(vt); } /*-------------------------------------------------------------------- * Record pokings... */ static void vbp_start_poke(struct vbp_target *vt) { CHECK_OBJ_NOTNULL(vt, VBP_TARGET_MAGIC); #define BITMAP(n, c, t, b) \ vt->n <<= 1; #include "tbl/backend_poll.h" vt->last = 0; vt->resp_buf[0] = '\0'; } static void vbp_has_poked(struct vbp_target *vt) { unsigned i, j; uint64_t u; CHECK_OBJ_NOTNULL(vt, VBP_TARGET_MAGIC); /* Calculate exponential average */ if (vt->happy & 1) { if (vt->rate < AVG_RATE) vt->rate += 1.0; vt->avg += (vt->last - vt->avg) / vt->rate; } u = vt->happy; for (i = j = 0; i < vt->window; i++) { if (u & 1) j++; u >>= 1; } vt->good = j; } void VBP_Update_Backend(struct vbp_target *vt) { unsigned i = 0, chg; char bits[10]; CHECK_OBJ_NOTNULL(vt, VBP_TARGET_MAGIC); #define BITMAP(n, c, t, b) \ bits[i++] = (vt->n & 1) ? c : '-'; #include "tbl/backend_poll.h" bits[i] = '\0'; assert(i < sizeof bits); Lck_Lock(&vbp_mtx); if (vt->backend == NULL) { Lck_Unlock(&vbp_mtx); return; } i = (vt->good < vt->threshold); chg = (i != vt->backend->sick); vt->backend->sick = i; AN(vt->backend->vcl_name); VSL(SLT_Backend_health, NO_VXID, "%s %s %s %s %u %u %u %.6f %.6f \"%s\"", vt->backend->vcl_name, chg ? "Went" : "Still", i ? "sick" : "healthy", bits, vt->good, vt->threshold, vt->window, vt->last, vt->avg, vt->resp_buf); vt->backend->vsc->happy = vt->happy; if (chg) vt->backend->changed = VTIM_real(); Lck_Unlock(&vbp_mtx); } static void vbp_reset(struct vbp_target *vt) { unsigned u; CHECK_OBJ_NOTNULL(vt, VBP_TARGET_MAGIC); vt->avg = 0.0; vt->rate = 0.0; #define BITMAP(n, c, t, b) \ vt->n = 0; #include "tbl/backend_poll.h" for (u = 0; u < vt->initial; u++) { vbp_start_poke(vt); vt->happy |= 1; vbp_has_poked(vt); } } /*-------------------------------------------------------------------- * Poke one backend, once, but possibly at both IPv4 and IPv6 addresses. * * We do deliberately not use the stuff in cache_backend.c, because we * want to measure the backends response without local distractions. 
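 * The helpers below (vbp_write, vbp_write_proxy_v1, vbp_poke) therefore
 * talk to the backend directly with write(2) and poll(2), and only the
 * HTTP status line of the response is checked against the expected status.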
*/ static int vbp_write(struct vbp_target *vt, int *sock, const void *buf, size_t len) { int i; i = write(*sock, buf, len); VTCP_Assert(i); if (i != len) { if (i < 0) { vt->err_xmit |= 1; bprintf(vt->resp_buf, "Write error %d (%s)", errno, VAS_errtxt(errno)); } else { bprintf(vt->resp_buf, "Short write (%d/%zu) error %d (%s)", i, len, errno, VAS_errtxt(errno)); } VTCP_close(sock); return (-1); } return (0); } static int vbp_write_proxy_v1(struct vbp_target *vt, int *sock) { char buf[105]; /* maximum size for a TCP6 PROXY line with null char */ char addr[VTCP_ADDRBUFSIZE]; char port[VTCP_PORTBUFSIZE]; char vsabuf[vsa_suckaddr_len]; const struct suckaddr *sua; int proto; struct vsb vsb; sua = VSA_getsockname(*sock, vsabuf, sizeof vsabuf); AN(sua); AN(VSB_init(&vsb, buf, sizeof buf)); proto = VSA_Get_Proto(sua); if (proto == AF_INET || proto == AF_INET6) { VTCP_name(sua, addr, sizeof addr, port, sizeof port); VSB_printf(&vsb, "PROXY %s %s %s %s %s\r\n", proto == AF_INET ? "TCP4" : "TCP6", addr, addr, port, port); } else { VSB_cat(&vsb, "PROXY UNKNOWN\r\n"); } AZ(VSB_finish(&vsb)); VSB_fini(&vsb); return (vbp_write(vt, sock, buf, strlen(buf))); } static void vbp_poke(struct vbp_target *vt) { int s, i, proxy_header, err; vtim_real t_start, t_now, t_end; vtim_dur tmo; unsigned rlen, resp; char buf[8192], *p; struct pollfd pfda[1], *pfd = pfda; const struct suckaddr *sa; t_start = t_now = VTIM_real(); t_end = t_start + vt->timeout; s = VCP_Open(vt->conn_pool, t_end - t_now, &sa, &err); if (s < 0) { bprintf(vt->resp_buf, "Open error %d (%s)", err, VAS_errtxt(err)); Lck_Lock(&vbp_mtx); if (vt->backend) VBE_Connect_Error(vt->backend->vsc, err); Lck_Unlock(&vbp_mtx); return; } i = VSA_Get_Proto(sa); if (VSA_Compare(sa, bogo_ip) == 0) vt->good_unix |= 1; else if (i == AF_INET) vt->good_ipv4 |= 1; else if (i == AF_INET6) vt->good_ipv6 |= 1; else WRONG("Wrong probe protocol family"); t_now = VTIM_real(); tmo = t_end - t_now; if (tmo <= 0) { bprintf(vt->resp_buf, "Open timeout %.3fs exceeded by %.3fs", vt->timeout, -tmo); VTCP_close(&s); return; } Lck_Lock(&vbp_mtx); if (vt->backend != NULL) proxy_header = vt->backend->proxy_header; else proxy_header = -1; Lck_Unlock(&vbp_mtx); if (proxy_header < 0) { bprintf(vt->resp_buf, "%s", "No backend"); VTCP_close(&s); return; } /* Send the PROXY header */ assert(proxy_header <= 2); if (proxy_header == 1) { if (vbp_write_proxy_v1(vt, &s) != 0) return; } else if (proxy_header == 2 && vbp_write(vt, &s, vbp_proxy_local, sizeof vbp_proxy_local) != 0) return; /* Send the request */ if (vbp_write(vt, &s, vt->req, vt->req_len) != 0) return; vt->good_xmit |= 1; pfd->fd = s; rlen = 0; while (1) { pfd->events = POLLIN; pfd->revents = 0; t_now = VTIM_real(); tmo = t_end - t_now; if (tmo <= 0) { bprintf(vt->resp_buf, "Poll timeout %.3fs exceeded by %.3fs", vt->timeout, -tmo); i = -1; break; } i = poll(pfd, 1, VTIM_poll_tmo(tmo)); if (i <= 0) { if (!i) { if (!vt->exp_close) break; errno = ETIMEDOUT; } bprintf(vt->resp_buf, "Poll error %d (%s)", errno, VAS_errtxt(errno)); i = -1; break; } if (rlen < sizeof vt->resp_buf) i = read(s, vt->resp_buf + rlen, sizeof vt->resp_buf - rlen); else i = read(s, buf, sizeof buf); VTCP_Assert(i); if (i <= 0) { if (i < 0) bprintf(vt->resp_buf, "Read error %d (%s)", errno, VAS_errtxt(errno)); break; } rlen += i; } VTCP_close(&s); if (i < 0) { /* errno reported above */ vt->err_recv |= 1; return; } if (rlen == 0) { bprintf(vt->resp_buf, "%s", "Empty response"); return; } /* So we have a good receive ... 
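	 * at least one byte of response was read; record the total poke
	 * latency and check the status line below.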
*/ t_now = VTIM_real(); vt->last = t_now - t_start; vt->good_recv |= 1; /* Now find out if we like the response */ vt->resp_buf[sizeof vt->resp_buf - 1] = '\0'; p = strchr(vt->resp_buf, '\r'); if (p != NULL) *p = '\0'; p = strchr(vt->resp_buf, '\n'); if (p != NULL) *p = '\0'; i = sscanf(vt->resp_buf, "HTTP/%*f %u ", &resp); if (i == 1 && resp == vt->exp_status) vt->happy |= 1; } /*-------------------------------------------------------------------- */ static void vbp_heap_insert(struct vbp_target *vt) { // Lck_AssertHeld(&vbp_mtx); VBH_insert(vbp_heap, vt); if (VBH_root(vbp_heap) == vt) PTOK(pthread_cond_signal(&vbp_cond)); } /*-------------------------------------------------------------------- */ static void v_matchproto_(task_func_t) vbp_task(struct worker *wrk, void *priv) { struct vbp_target *vt; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CAST_OBJ_NOTNULL(vt, priv, VBP_TARGET_MAGIC); AN(vt->running); AN(vt->req); assert(vt->req_len > 0); vbp_start_poke(vt); vbp_poke(vt); vbp_has_poked(vt); VBP_Update_Backend(vt); Lck_Lock(&vbp_mtx); if (vt->running < 0) { assert(vt->heap_idx == VBH_NOIDX); vbp_delete(vt); } else { vt->running = 0; if (vt->heap_idx != VBH_NOIDX) { vt->due = VTIM_real() + vt->interval; VBH_delete(vbp_heap, vt->heap_idx); vbp_heap_insert(vt); } } Lck_Unlock(&vbp_mtx); } /*-------------------------------------------------------------------- */ static void * v_matchproto_(bgthread_t) vbp_thread(struct worker *wrk, void *priv) { vtim_real now, nxt; struct vbp_target *vt; int r; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AZ(priv); Lck_Lock(&vbp_mtx); while (1) { now = VTIM_real(); vt = VBH_root(vbp_heap); if (vt == NULL) { nxt = 8.192 + now; (void)Lck_CondWaitUntil(&vbp_cond, &vbp_mtx, nxt); } else if (vt->due > now) { nxt = vt->due; vt = NULL; (void)Lck_CondWaitUntil(&vbp_cond, &vbp_mtx, nxt); } else { VBH_delete(vbp_heap, vt->heap_idx); vt->due = now + vt->interval; VBH_insert(vbp_heap, vt); if (!vt->running) { vt->running = 1; vt->task->func = vbp_task; vt->task->priv = vt; Lck_Unlock(&vbp_mtx); r = Pool_Task_Any(vt->task, TASK_QUEUE_REQ); Lck_Lock(&vbp_mtx); if (r) vt->running = 0; } } } NEEDLESS(Lck_Unlock(&vbp_mtx)); NEEDLESS(return (NULL)); } /*-------------------------------------------------------------------- * Cli functions */ static void vbp_bitmap(struct vsb *vsb, char c, uint64_t map, const char *lbl) { int i; uint64_t u = (1ULL << 63); VSB_cat(vsb, " "); for (i = 0; i < 64; i++) { if (map & u) VSB_putc(vsb, c); else VSB_putc(vsb, '-'); map <<= 1; } VSB_printf(vsb, " %s\n", lbl); } /*lint -e{506} constant value boolean */ /*lint -e{774} constant value boolean */ void VBP_Status(struct vsb *vsb, const struct backend *be, int details, int json) { struct vbp_target *vt; CHECK_OBJ_NOTNULL(be, BACKEND_MAGIC); vt = be->probe; CHECK_OBJ_NOTNULL(vt, VBP_TARGET_MAGIC); if (!details) { if (json) VSB_printf(vsb, "[%u, %u, \"%s\"]", vt->good, vt->window, vt->backend->sick ? "sick" : "healthy"); else VSB_printf(vsb, "%u/%u\t%s", vt->good, vt->window, vt->backend->sick ? 
"sick" : "healthy"); return; } if (json) { VSB_cat(vsb, "{\n"); VSB_indent(vsb, 2); #define BITMAP(nn, cc, tt, bb) \ VSB_printf(vsb, "\"bits_%c\": %ju,\n", cc, vt->nn); #include "tbl/backend_poll.h" VSB_printf(vsb, "\"good\": %u,\n", vt->good); VSB_printf(vsb, "\"threshold\": %u,\n", vt->threshold); VSB_printf(vsb, "\"window\": %u", vt->window); VSB_indent(vsb, -2); VSB_cat(vsb, "\n"); VSB_cat(vsb, "},\n"); return; } VSB_printf(vsb, "\n Current states good: %2u threshold: %2u window: %2u\n", vt->good, vt->threshold, vt->window); VSB_printf(vsb, " Average response time of good probes: %.6f\n", vt->avg); VSB_cat(vsb, " Oldest ======================" "============================ Newest\n"); #define BITMAP(n, c, t, b) \ if ((vt->n != 0) || (b)) \ vbp_bitmap(vsb, (c), vt->n, (t)); #include "tbl/backend_poll.h" } /*-------------------------------------------------------------------- * Build request from probe spec */ static void vbp_build_req(struct vbp_target *vt, const struct vrt_backend_probe *vbp, const struct backend *be) { struct vsb *vsb; vsb = VSB_new_auto(); AN(vsb); VSB_clear(vsb); if (vbp->request != NULL) { VSB_cat(vsb, vbp->request); } else { AN(be->hosthdr); VSB_printf(vsb, "GET %s HTTP/1.1\r\n" "Host: %s\r\n" "Connection: close\r\n" "\r\n", vbp->url != NULL ? vbp->url : "/", be->hosthdr); } AZ(VSB_finish(vsb)); vt->req = strdup(VSB_data(vsb)); AN(vt->req); vt->req_len = VSB_len(vsb); VSB_destroy(&vsb); } /*-------------------------------------------------------------------- * Sanitize and set defaults * XXX: we could make these defaults parameters */ static void vbp_set_defaults(struct vbp_target *vt, const struct vrt_backend_probe *vp) { #define DN(x) do { vt->x = vp->x; } while (0) VRT_BACKEND_PROBE_HANDLE(); #undef DN if (vt->timeout == 0.0) vt->timeout = 2.0; if (vt->interval == 0.0) vt->interval = 5.0; if (vt->window == 0) vt->window = 8; if (vt->threshold == 0) vt->threshold = 3; if (vt->exp_status == 0) vt->exp_status = 200; if (vt->initial == ~0U) vt->initial = vt->threshold - 1; vt->initial = vmin(vt->initial, vt->threshold); } /*-------------------------------------------------------------------- */ void VBP_Control(const struct backend *be, int enable) { struct vbp_target *vt; CHECK_OBJ_NOTNULL(be, BACKEND_MAGIC); vt = be->probe; CHECK_OBJ_NOTNULL(vt, VBP_TARGET_MAGIC); vbp_reset(vt); VBP_Update_Backend(vt); Lck_Lock(&vbp_mtx); if (enable) { assert(vt->heap_idx == VBH_NOIDX); vt->due = VTIM_real(); vbp_heap_insert(vt); } else { assert(vt->heap_idx != VBH_NOIDX); VBH_delete(vbp_heap, vt->heap_idx); } Lck_Unlock(&vbp_mtx); } /*-------------------------------------------------------------------- * Insert/Remove/Use called from cache_backend.c */ void VBP_Insert(struct backend *b, const struct vrt_backend_probe *vp, struct conn_pool *tp) { struct vbp_target *vt; CHECK_OBJ_NOTNULL(b, BACKEND_MAGIC); CHECK_OBJ_NOTNULL(vp, VRT_BACKEND_PROBE_MAGIC); AZ(b->probe); ALLOC_OBJ(vt, VBP_TARGET_MAGIC); XXXAN(vt); vt->conn_pool = tp; VCP_AddRef(vt->conn_pool); vt->backend = b; b->probe = vt; vbp_set_defaults(vt, vp); vbp_build_req(vt, vp, b); vbp_reset(vt); } void VBP_Remove(struct backend *be) { struct vbp_target *vt; CHECK_OBJ_NOTNULL(be, BACKEND_MAGIC); vt = be->probe; CHECK_OBJ_NOTNULL(vt, VBP_TARGET_MAGIC); Lck_Lock(&vbp_mtx); be->sick = 1; be->probe = NULL; vt->backend = NULL; if (vt->running) { // task scheduled, it calls vbp_delete() vt->running = -1; vt = NULL; } else if (vt->heap_idx != VBH_NOIDX) { // task done, not yet rescheduled VBH_delete(vbp_heap, vt->heap_idx); } 
Lck_Unlock(&vbp_mtx); if (vt != NULL) { assert(vt->heap_idx == VBH_NOIDX); vbp_delete(vt); } } /*-------------------------------------------------------------------*/ static int v_matchproto_(vbh_cmp_t) vbp_cmp(void *priv, const void *a, const void *b) { const struct vbp_target *aa, *bb; int ar, br; AZ(priv); CAST_OBJ_NOTNULL(aa, a, VBP_TARGET_MAGIC); CAST_OBJ_NOTNULL(bb, b, VBP_TARGET_MAGIC); ar = aa->running == 0; br = bb->running == 0; if (ar != br) return (ar); return (aa->due < bb->due); } static void v_matchproto_(vbh_update_t) vbp_update(void *priv, void *p, unsigned u) { struct vbp_target *vt; AZ(priv); CAST_OBJ_NOTNULL(vt, p, VBP_TARGET_MAGIC); vt->heap_idx = u; } /*-------------------------------------------------------------------*/ void VBP_Init(void) { pthread_t thr; Lck_New(&vbp_mtx, lck_probe); vbp_heap = VBH_new(NULL, vbp_cmp, vbp_update); AN(vbp_heap); PTOK(pthread_cond_init(&vbp_cond, NULL)); WRK_BgThread(&thr, "backend-poller", vbp_thread, NULL); } varnish-7.5.0/bin/varnishd/cache/cache_ban.c000066400000000000000000000552621457605730600207040ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* */ #include "config.h" #include #include #include "cache_varnishd.h" #include "cache_ban.h" #include "cache_objhead.h" #include "vcli_serve.h" #include "vend.h" #include "vmb.h" /* cache_ban_build.c */ void BAN_Build_Init(void); void BAN_Build_Fini(void); struct lock ban_mtx; int ban_shutdown; struct banhead_s ban_head = VTAILQ_HEAD_INITIALIZER(ban_head); struct ban * volatile ban_start; static pthread_t ban_thread; static int ban_holds; uint64_t bans_persisted_bytes; uint64_t bans_persisted_fragmentation; struct ban_test { uint8_t oper; uint8_t arg1; const char *arg1_spec; const char *arg2; double arg2_double; const void *arg2_spec; }; static const char * const arg_name[BAN_ARGARRSZ + 1] = { #define PVAR(a, b, c) [BAN_ARGIDX(c)] = (a), #include "tbl/ban_vars.h" [BAN_ARGARRSZ] = NULL }; /*-------------------------------------------------------------------- * Storage handling of bans */ static struct ban * ban_alloc(void) { struct ban *b; ALLOC_OBJ(b, BAN_MAGIC); if (b != NULL) VTAILQ_INIT(&b->objcore); return (b); } void BAN_Free(struct ban *b) { CHECK_OBJ_NOTNULL(b, BAN_MAGIC); AZ(b->refcount); assert(VTAILQ_EMPTY(&b->objcore)); if (b->spec != NULL) free(b->spec); FREE_OBJ(b); } /*-------------------------------------------------------------------- * Get/release holds which prevent the ban_lurker from starting. * Holds are held while stevedores load zombie objects. */ void BAN_Hold(void) { Lck_Lock(&ban_mtx); /* Once holds are released, we allow no more */ assert(ban_holds > 0); ban_holds++; Lck_Unlock(&ban_mtx); } void BAN_Release(void) { Lck_Lock(&ban_mtx); assert(ban_holds > 0); ban_holds--; Lck_Unlock(&ban_mtx); if (ban_holds == 0) WRK_BgThread(&ban_thread, "ban-lurker", ban_lurker, NULL); } /*-------------------------------------------------------------------- * Extract time and length from ban-spec */ vtim_real ban_time(const uint8_t *banspec) { vtim_real t; uint64_t u; assert(sizeof t == sizeof u); assert(sizeof t == (BANS_LENGTH - BANS_TIMESTAMP)); u = vbe64dec(banspec + BANS_TIMESTAMP); memcpy(&t, &u, sizeof t); return (t); } unsigned ban_len(const uint8_t *banspec) { unsigned u; u = vbe32dec(banspec + BANS_LENGTH); return (u); } int ban_equal(const uint8_t *bs1, const uint8_t *bs2) { unsigned u; /* * Compare two ban-strings. 
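	 * Two ban specs are equal when they have the same length and are
	 * byte-identical after the timestamp; specs flagged NODEDUP never
	 * compare equal.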
*/ u = ban_len(bs1); if (u != ban_len(bs2)) return (0); if (bs1[BANS_FLAGS] & BANS_FLAG_NODEDUP) return (0); return (!memcmp(bs1 + BANS_LENGTH, bs2 + BANS_LENGTH, u - BANS_LENGTH)); } void ban_mark_completed(struct ban *b) { unsigned ln; CHECK_OBJ_NOTNULL(b, BAN_MAGIC); Lck_AssertHeld(&ban_mtx); AN(b->spec); if (!(b->flags & BANS_FLAG_COMPLETED)) { ln = ban_len(b->spec); b->flags |= BANS_FLAG_COMPLETED; b->spec[BANS_FLAGS] |= BANS_FLAG_COMPLETED; VWMB(); vbe32enc(b->spec + BANS_LENGTH, BANS_HEAD_LEN); VSC_C_main->bans_completed++; bans_persisted_fragmentation += ln - ban_len(b->spec); VSC_C_main->bans_persisted_fragmentation = bans_persisted_fragmentation; } } /*-------------------------------------------------------------------- * Access a lump of bytes in a ban test spec */ static const void * ban_get_lump(const uint8_t **bs) { const void *r; unsigned ln; while (**bs == 0xff) *bs += 1; ln = vbe32dec(*bs); *bs += PRNDUP(sizeof(uint32_t)); assert(PAOK(*bs)); r = (const void*)*bs; *bs += ln; return (r); } /*-------------------------------------------------------------------- * Pick a test apart from a spec string */ static void ban_iter(const uint8_t **bs, struct ban_test *bt) { uint64_t dtmp; memset(bt, 0, sizeof *bt); bt->arg2_double = nan(""); bt->arg1 = *(*bs)++; if (BANS_HAS_ARG1_SPEC(bt->arg1)) { bt->arg1_spec = (const char *)*bs; (*bs) += (*bs)[0] + 2; } if (BANS_HAS_ARG2_DOUBLE(bt->arg1)) { dtmp = vbe64dec(ban_get_lump(bs)); bt->oper = *(*bs)++; memcpy(&bt->arg2_double, &dtmp, sizeof dtmp); return; } bt->arg2 = ban_get_lump(bs); bt->oper = *(*bs)++; if (BANS_HAS_ARG2_SPEC(bt->oper)) bt->arg2_spec = ban_get_lump(bs); } /*-------------------------------------------------------------------- * A new object is created, grab a reference to the newest ban */ void BAN_NewObjCore(struct objcore *oc) { CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AZ(oc->ban); AN(oc->objhead); Lck_Lock(&ban_mtx); oc->ban = ban_start; ban_start->refcount++; VTAILQ_INSERT_TAIL(&ban_start->objcore, oc, ban_list); Lck_Unlock(&ban_mtx); } /*-------------------------------------------------------------------- * An object is destroyed, release its ban reference */ void BAN_DestroyObj(struct objcore *oc) { CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); if (oc->ban == NULL) return; Lck_Lock(&ban_mtx); CHECK_OBJ_ORNULL(oc->ban, BAN_MAGIC); if (oc->ban != NULL) { assert(oc->ban->refcount > 0); oc->ban->refcount--; VTAILQ_REMOVE(&oc->ban->objcore, oc, ban_list); oc->ban = NULL; } Lck_Unlock(&ban_mtx); } /*-------------------------------------------------------------------- * Find a ban based on a timestamp. * Assume we have a BAN_Hold, so list traversal is safe. */ struct ban * BAN_FindBan(vtim_real t0) { struct ban *b; vtim_real t1; assert(ban_holds > 0); VTAILQ_FOREACH(b, &ban_head, list) { t1 = ban_time(b->spec); if (t1 == t0) return (b); if (t1 < t0) break; } return (NULL); } /*-------------------------------------------------------------------- * Grab a reference to a ban and associate the objcore with that ban. * Assume we have a BAN_Hold, so list traversal is safe. */ void BAN_RefBan(struct objcore *oc, struct ban *b) { Lck_Lock(&ban_mtx); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AZ(oc->ban); CHECK_OBJ_NOTNULL(b, BAN_MAGIC); assert(ban_holds > 0); b->refcount++; VTAILQ_INSERT_TAIL(&b->objcore, oc, ban_list); oc->ban = b; Lck_Unlock(&ban_mtx); } /*-------------------------------------------------------------------- * Compile a full ban list and export this area to the stevedores for * persistence. 
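 * ban_export() concatenates every ban spec into a single buffer, hands it
 * to STV_BanExport() and resets the persisted fragmentation counter.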
*/ static void ban_export(void) { struct ban *b; struct vsb *vsb; unsigned ln; Lck_AssertHeld(&ban_mtx); ln = bans_persisted_bytes - bans_persisted_fragmentation; vsb = VSB_new_auto(); AN(vsb); VTAILQ_FOREACH_REVERSE(b, &ban_head, banhead_s, list) AZ(VSB_bcat(vsb, b->spec, ban_len(b->spec))); AZ(VSB_finish(vsb)); assert(VSB_len(vsb) == ln); STV_BanExport((const uint8_t *)VSB_data(vsb), VSB_len(vsb)); VSB_destroy(&vsb); VSC_C_main->bans_persisted_bytes = bans_persisted_bytes = ln; VSC_C_main->bans_persisted_fragmentation = bans_persisted_fragmentation = 0; } /* * For both of these we do a full export on info failure to remove * holes in the exported list. * XXX: we should keep track of the size of holes in the last exported list * XXX: check if the ban_export should be batched in ban_cleantail */ void ban_info_new(const uint8_t *ban, unsigned len) { /* XXX martin pls review if ban_mtx needs to be held */ Lck_AssertHeld(&ban_mtx); if (STV_BanInfoNew(ban, len)) ban_export(); } void ban_info_drop(const uint8_t *ban, unsigned len) { /* XXX martin pls review if ban_mtx needs to be held */ Lck_AssertHeld(&ban_mtx); if (STV_BanInfoDrop(ban, len)) ban_export(); } /*-------------------------------------------------------------------- * Put a skeleton ban in the list, unless there is an identical, * time & condition, ban already in place. * * If a newer ban has same condition, mark the inserted ban COMPLETED, * also mark any older bans, with the same condition COMPLETED. */ static void ban_reload(const uint8_t *ban, unsigned len) { struct ban *b, *b2; int duplicate = 0; vtim_real t0, t1, t2 = 9e99; ASSERT_CLI(); Lck_AssertHeld(&ban_mtx); t0 = ban_time(ban); assert(len == ban_len(ban)); VTAILQ_FOREACH(b, &ban_head, list) { t1 = ban_time(b->spec); assert(t1 < t2); t2 = t1; if (t1 == t0) return; if (t1 < t0) break; if (ban_equal(b->spec, ban)) duplicate = 1; } VSC_C_main->bans++; VSC_C_main->bans_added++; b2 = ban_alloc(); AN(b2); b2->spec = malloc(len); AN(b2->spec); memcpy(b2->spec, ban, len); if (ban[BANS_FLAGS] & BANS_FLAG_REQ) { VSC_C_main->bans_req++; b2->flags |= BANS_FLAG_REQ; } if (duplicate) VSC_C_main->bans_dups++; if (duplicate || (ban[BANS_FLAGS] & BANS_FLAG_COMPLETED)) ban_mark_completed(b2); if (b == NULL) VTAILQ_INSERT_TAIL(&ban_head, b2, list); else VTAILQ_INSERT_BEFORE(b, b2, list); bans_persisted_bytes += len; VSC_C_main->bans_persisted_bytes = bans_persisted_bytes; /* Hunt down older duplicates */ for (b = VTAILQ_NEXT(b2, list); b != NULL; b = VTAILQ_NEXT(b, list)) { if (b->flags & BANS_FLAG_COMPLETED) continue; if (ban_equal(b->spec, ban)) { ban_mark_completed(b); VSC_C_main->bans_dups++; } } } /*-------------------------------------------------------------------- * Reload a series of persisted ban specs */ void BAN_Reload(const uint8_t *ptr, unsigned len) { const uint8_t *pe; unsigned l; AZ(ban_shutdown); pe = ptr + len; Lck_Lock(&ban_mtx); while (ptr < pe) { /* XXX: This can be optimized by traversing the live * ban list together with the reload list (combining * the loops in BAN_Reload and ban_reload). 
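		 * For now each persisted spec is simply handed to
		 * ban_reload(), which rescans the live list from its head.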
*/ l = ban_len(ptr); assert(ptr + l <= pe); ban_reload(ptr, l); ptr += l; } Lck_Unlock(&ban_mtx); } /*-------------------------------------------------------------------- * Get a bans timestamp */ vtim_real BAN_Time(const struct ban *b) { if (b == NULL) return (0.0); CHECK_OBJ_NOTNULL(b, BAN_MAGIC); return (ban_time(b->spec)); } /*-------------------------------------------------------------------- * Evaluate ban-spec */ int ban_evaluate(struct worker *wrk, const uint8_t *bsarg, struct objcore *oc, const struct http *reqhttp, unsigned *tests) { struct ban_test bt; const uint8_t *bs, *be; const char *p; const char *arg1; double darg1, darg2; int rv; /* * for ttl and age, fix the point in time such that banning refers to * the same point in time when the ban is evaluated * * for grace/keep, we assume that the absolute values are pola and that * users will most likely also specify a ttl criterion if they want to * fix a point in time (such as "obj.ttl > 5h && obj.keep > 3h") */ bs = bsarg; be = bs + ban_len(bs); bs += BANS_HEAD_LEN; while (bs < be) { (*tests)++; ban_iter(&bs, &bt); arg1 = NULL; darg1 = darg2 = nan(""); switch (bt.arg1) { case BANS_ARG_URL: AN(reqhttp); arg1 = reqhttp->hd[HTTP_HDR_URL].b; break; case BANS_ARG_REQHTTP: AN(reqhttp); (void)http_GetHdr(reqhttp, bt.arg1_spec, &p); arg1 = p; break; case BANS_ARG_OBJHTTP: arg1 = HTTP_GetHdrPack(wrk, oc, bt.arg1_spec); break; case BANS_ARG_OBJSTATUS: arg1 = HTTP_GetHdrPack(wrk, oc, H__Status); break; case BANS_ARG_OBJTTL: darg1 = oc->ttl + oc->t_origin; darg2 = bt.arg2_double + ban_time(bsarg); break; case BANS_ARG_OBJAGE: darg1 = 0.0 - oc->t_origin; darg2 = 0.0 - (ban_time(bsarg) - bt.arg2_double); break; case BANS_ARG_OBJGRACE: darg1 = oc->grace; darg2 = bt.arg2_double; break; case BANS_ARG_OBJKEEP: darg1 = oc->keep; darg2 = bt.arg2_double; break; default: WRONG("Wrong BAN_ARG code"); } switch (bt.oper) { case BANS_OPER_EQ: if (arg1 == NULL) { if (isnan(darg1) || darg1 != darg2) return (0); } else if (strcmp(arg1, bt.arg2)) { return (0); } break; case BANS_OPER_NEQ: if (arg1 == NULL) { if (! isnan(darg1) && darg1 == darg2) return (0); } else if (!strcmp(arg1, bt.arg2)) { return (0); } break; case BANS_OPER_MATCH: if (arg1 == NULL) return (0); rv = VRE_match(bt.arg2_spec, arg1, 0, 0, NULL); xxxassert(rv >= -1); if (rv < 0) return (0); break; case BANS_OPER_NMATCH: if (arg1 == NULL) return (0); rv = VRE_match(bt.arg2_spec, arg1, 0, 0, NULL); xxxassert(rv >= -1); if (rv >= 0) return (0); break; case BANS_OPER_GT: AZ(arg1); assert(! isnan(darg1)); if (!(darg1 > darg2)) return (0); break; case BANS_OPER_GTE: AZ(arg1); assert(! isnan(darg1)); if (!(darg1 >= darg2)) return (0); break; case BANS_OPER_LT: AZ(arg1); assert(! isnan(darg1)); if (!(darg1 < darg2)) return (0); break; case BANS_OPER_LTE: AZ(arg1); assert(! isnan(darg1)); if (!(darg1 <= darg2)) return (0); break; default: WRONG("Wrong BAN_OPER code"); } } return (1); } /*-------------------------------------------------------------------- * Check an object against all applicable bans * * Return: * -1 not all bans checked, but none of the checked matched * Only if !has_req * 0 No bans matched, object moved to ban_start. * 1 Ban matched, object removed from ban list. 
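 * The object is tested against the bans newer than the one it currently
 * references; if none match, it is moved up to reference the newest ban.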
*/ int BAN_CheckObject(struct worker *wrk, struct objcore *oc, struct req *req) { struct ban *b; struct vsl_log *vsl; struct ban *b0, *bn; unsigned tests; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); Lck_AssertHeld(&oc->objhead->mtx); assert(oc->refcnt > 0); vsl = req->vsl; CHECK_OBJ_NOTNULL(oc->ban, BAN_MAGIC); /* First do an optimistic unlocked check */ b0 = ban_start; CHECK_OBJ_NOTNULL(b0, BAN_MAGIC); if (b0 == oc->ban) return (0); /* If that fails, make a safe check */ Lck_Lock(&ban_mtx); b0 = ban_start; bn = oc->ban; if (b0 != bn) bn->refcount++; Lck_Unlock(&ban_mtx); AN(bn); if (b0 == bn) return (0); AN(b0); AN(bn); /* * This loop is safe without locks, because we know we hold * a refcount on a ban somewhere in the list and we do not * inspect the list past that ban. */ tests = 0; for (b = b0; b != bn; b = VTAILQ_NEXT(b, list)) { CHECK_OBJ_NOTNULL(b, BAN_MAGIC); if (b->flags & BANS_FLAG_COMPLETED) continue; if (ban_evaluate(wrk, b->spec, oc, req->http, &tests)) break; } Lck_Lock(&ban_mtx); bn->refcount--; VSC_C_main->bans_tested++; VSC_C_main->bans_tests_tested += tests; if (b == bn) { /* not banned */ oc->ban->refcount--; VTAILQ_REMOVE(&oc->ban->objcore, oc, ban_list); VTAILQ_INSERT_TAIL(&b0->objcore, oc, ban_list); b0->refcount++; oc->ban = b0; b = NULL; } if (b != NULL) VSC_C_main->bans_obj_killed++; if (VTAILQ_LAST(&ban_head, banhead_s)->refcount == 0) ban_kick_lurker(); Lck_Unlock(&ban_mtx); if (b == NULL) { /* not banned */ ObjSendEvent(wrk, oc, OEV_BANCHG); return (0); } else { VSLb(vsl, SLT_ExpBan, "%ju banned lookup", VXID(ObjGetXID(wrk, oc))); return (1); } } /*-------------------------------------------------------------------- * CLI functions to add bans */ static void v_matchproto_(cli_func_t) ccf_ban(struct cli *cli, const char * const *av, void *priv) { int narg, i; struct ban_proto *bp; const char *err = NULL; (void)priv; /* First do some cheap checks on the arguments */ for (narg = 0; av[narg + 2] != NULL; narg++) continue; if ((narg % 4) != 3) { VCLI_Out(cli, "Wrong number of arguments"); VCLI_SetResult(cli, CLIS_PARAM); return; } for (i = 3; i < narg; i += 4) { if (strcmp(av[i + 2], "&&")) { VCLI_Out(cli, "Found \"%s\" expected &&", av[i + 2]); VCLI_SetResult(cli, CLIS_PARAM); return; } } bp = BAN_Build(); if (bp == NULL) { VCLI_Out(cli, "Out of Memory"); VCLI_SetResult(cli, CLIS_CANT); return; } for (i = 0; i < narg; i += 4) { err = BAN_AddTest(bp, av[i + 2], av[i + 3], av[i + 4]); if (err) break; } if (err == NULL) { // XXX racy - grab wstat lock? err = BAN_Commit(bp); } if (err != NULL) { VCLI_Out(cli, "%s", err); BAN_Abandon(bp); VCLI_SetResult(cli, CLIS_PARAM); } } #define Ms 60 #define Hs (Ms * 60) #define Ds (Hs * 24) #define Ws (Ds * 7) #define Ys (Ds * 365) #define Xfmt(buf, var, s, unit) \ ((var) >= s && (var) % s == 0) \ bprintf((buf), "%ju" unit, (var) / s) // XXX move to VTIM? 
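/*
 * Render a duration for ban.list output using the largest exact unit,
 * e.g. 0.25 -> "250ms", 90 -> "90s", 3600 -> "1h", 86400 -> "1d".
 */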
#define vdur_render(buf, dur) do { \ uintmax_t dec = (uintmax_t)floor(dur); \ uintmax_t frac = (uintmax_t)floor((dur) * 1e3) % UINTMAX_C(1000); \ if (dec == 0 && frac == 0) \ (void) strncpy(buf, "0s", sizeof(buf)); \ else if (dec == 0) \ bprintf((buf), "%jums", frac); \ else if (frac != 0) \ bprintf((buf), "%ju.%03jus", dec, frac); \ else if Xfmt(buf, dec, Ys, "y"); \ else if Xfmt(buf, dec, Ws, "w"); \ else if Xfmt(buf, dec, Ds, "d"); \ else if Xfmt(buf, dec, Hs, "h"); \ else if Xfmt(buf, dec, Ms, "m"); \ else \ bprintf((buf), "%jus", dec); \ } while (0) static void ban_render(struct cli *cli, const uint8_t *bs, int quote) { struct ban_test bt; const uint8_t *be; char buf[64]; be = bs + ban_len(bs); bs += BANS_HEAD_LEN; while (bs < be) { ban_iter(&bs, &bt); ASSERT_BAN_ARG(bt.arg1); ASSERT_BAN_OPER(bt.oper); if (BANS_HAS_ARG1_SPEC(bt.arg1)) VCLI_Out(cli, "%s%.*s", arg_name[BAN_ARGIDX(bt.arg1)], bt.arg1_spec[0] - 1, bt.arg1_spec + 1); else VCLI_Out(cli, "%s", arg_name[BAN_ARGIDX(bt.arg1)]); VCLI_Out(cli, " %s ", ban_oper[BAN_OPERIDX(bt.oper)]); if (BANS_HAS_ARG2_DOUBLE(bt.arg1)) { vdur_render(buf, bt.arg2_double); VCLI_Out(cli, "%s", buf); } else if (quote) { VCLI_Quote(cli, bt.arg2); } else { VCLI_Out(cli, "%s", bt.arg2); } if (bs < be) VCLI_Out(cli, " && "); } } static void ban_list(struct cli *cli, struct ban *bl) { struct ban *b; int64_t o; VCLI_Out(cli, "Present bans:\n"); VTAILQ_FOREACH(b, &ban_head, list) { o = bl == b ? 1 : 0; VCLI_Out(cli, "%10.6f %5ju %s", ban_time(b->spec), (intmax_t)(b->refcount - o), b->flags & BANS_FLAG_COMPLETED ? "C" : "-"); if (DO_DEBUG(DBG_LURKER)) { VCLI_Out(cli, "%s%s %p ", b->flags & BANS_FLAG_REQ ? "R" : "-", b->flags & BANS_FLAG_OBJ ? "O" : "-", b); } VCLI_Out(cli, " "); ban_render(cli, b->spec, 0); VCLI_Out(cli, "\n"); if (VCLI_Overflow(cli)) break; if (DO_DEBUG(DBG_LURKER)) { Lck_Lock(&ban_mtx); struct objcore *oc; VTAILQ_FOREACH(oc, &b->objcore, ban_list) VCLI_Out(cli, " oc = %p\n", oc); Lck_Unlock(&ban_mtx); } } } static void ban_list_json(struct cli *cli, const char * const *av, struct ban *bl) { struct ban *b; int64_t o; int n = 0; int ocs; VCLI_JSON_begin(cli, 2, av); VCLI_Out(cli, ",\n"); VTAILQ_FOREACH(b, &ban_head, list) { o = bl == b ? 1 : 0; VCLI_Out(cli, "%s", n ? ",\n" : ""); n++; VCLI_Out(cli, "{\n"); VSB_indent(cli->sb, 2); VCLI_Out(cli, "\"time\": %.6f,\n", ban_time(b->spec)); VCLI_Out(cli, "\"refs\": %ju,\n", (intmax_t)(b->refcount - o)); VCLI_Out(cli, "\"completed\": %s,\n", b->flags & BANS_FLAG_COMPLETED ? "true" : "false"); VCLI_Out(cli, "\"spec\": \""); ban_render(cli, b->spec, 1); VCLI_Out(cli, "\""); if (DO_DEBUG(DBG_LURKER)) { VCLI_Out(cli, ",\n"); VCLI_Out(cli, "\"req_tests\": %s,\n", b->flags & BANS_FLAG_REQ ? "true" : "false"); VCLI_Out(cli, "\"obj_tests\": %s,\n", b->flags & BANS_FLAG_OBJ ? 
"true" : "false"); VCLI_Out(cli, "\"pointer\": \"%p\",\n", b); if (VCLI_Overflow(cli)) break; ocs = 0; VCLI_Out(cli, "\"objcores\": [\n"); VSB_indent(cli->sb, 2); Lck_Lock(&ban_mtx); struct objcore *oc; VTAILQ_FOREACH(oc, &b->objcore, ban_list) { if (ocs) VCLI_Out(cli, ",\n"); VCLI_Out(cli, "%p", oc); ocs++; } Lck_Unlock(&ban_mtx); VSB_indent(cli->sb, -2); VCLI_Out(cli, "\n]"); } VSB_indent(cli->sb, -2); VCLI_Out(cli, "\n}"); } VCLI_JSON_end(cli); } static void v_matchproto_(cli_func_t) ccf_ban_list(struct cli *cli, const char * const *av, void *priv) { struct ban *bl; (void)priv; /* Get a reference so we are safe to traverse the list */ Lck_Lock(&ban_mtx); bl = VTAILQ_LAST(&ban_head, banhead_s); bl->refcount++; Lck_Unlock(&ban_mtx); if (av[2] != NULL && strcmp(av[2], "-j") == 0) ban_list_json(cli, av, bl); else ban_list(cli, bl); Lck_Lock(&ban_mtx); bl->refcount--; ban_kick_lurker(); // XXX: Mostly for testcase b00009.vtc Lck_Unlock(&ban_mtx); } static struct cli_proto ban_cmds[] = { { CLICMD_BAN, "", ccf_ban }, { CLICMD_BAN_LIST, "", ccf_ban_list, ccf_ban_list }, { NULL } }; /*-------------------------------------------------------------------- */ void BAN_Compile(void) { struct ban *b; /* * All bans have been read from all persistent stevedores. Export * the compiled list */ ASSERT_CLI(); AZ(ban_shutdown); Lck_Lock(&ban_mtx); /* Report the place-holder ban */ b = VTAILQ_FIRST(&ban_head); ban_info_new(b->spec, ban_len(b->spec)); ban_export(); Lck_Unlock(&ban_mtx); ban_start = VTAILQ_FIRST(&ban_head); BAN_Release(); } void BAN_Init(void) { struct ban_proto *bp; BAN_Build_Init(); Lck_New(&ban_mtx, lck_ban); CLI_AddFuncs(ban_cmds); ban_holds = 1; /* Add a placeholder ban */ bp = BAN_Build(); AN(bp); PTOK(pthread_cond_init(&ban_lurker_cond, NULL)); AZ(BAN_Commit(bp)); Lck_Lock(&ban_mtx); ban_mark_completed(VTAILQ_FIRST(&ban_head)); Lck_Unlock(&ban_mtx); } /*-------------------------------------------------------------------- * Shutdown of the ban system. * * When this function returns, no new bans will be accepted, and no * bans will be dropped (ban lurker thread stopped), so that no * STV_BanInfo calls will be executed. */ void BAN_Shutdown(void) { void *status; Lck_Lock(&ban_mtx); ban_shutdown = 1; ban_kick_lurker(); Lck_Unlock(&ban_mtx); PTOK(pthread_join(ban_thread, &status)); AZ(status); Lck_Lock(&ban_mtx); /* Export the ban list to compact it */ ban_export(); Lck_Unlock(&ban_mtx); BAN_Build_Fini(); } varnish-7.5.0/bin/varnishd/cache/cache_ban.h000066400000000000000000000123261457605730600207030ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Ban processing * * A ban consists of a number of conditions (or tests), all of which must be * satisfied. Here are some potential bans we could support: * * req.url == "/foo" * req.url ~ ".iso" && obj.size > 10MB * req.http.host ~ "web1.com" && obj.http.set-cookie ~ "USER=29293" * * We make the "&&" mandatory from the start, leaving the syntax space * for latter handling of "||" as well. * * Bans are compiled into bytestrings as follows: * 8 bytes - double: timestamp XXX: Byteorder ? * 4 bytes - be32: length * 1 byte - flags: 0x01: BAN_F_{REQ|OBJ|COMPLETED} * N tests * A test have this form: * 1 byte - arg (see ban_vars.h col 3 "BANS_ARG_XXX") * (n bytes) - http header name, canonical encoding * lump - comparison arg * 1 byte - operation (BANS_OPER_) * (lump) - compiled regexp * A lump is: * 4 bytes - be32: length * n bytes - content * */ /*-------------------------------------------------------------------- * BAN string defines & magic markers */ #define BANS_TIMESTAMP 0 #define BANS_LENGTH 8 #define BANS_FLAGS 12 #define BANS_HEAD_LEN 16 #define BANS_FLAG_REQ (1<<0) #define BANS_FLAG_OBJ (1<<1) #define BANS_FLAG_COMPLETED (1<<2) #define BANS_FLAG_HTTP (1<<3) #define BANS_FLAG_DURATION (1<<4) #define BANS_FLAG_NODEDUP (1<<5) #define BANS_OPER_EQ 0x10 #define BANS_OPER_OFF_ BANS_OPER_EQ #define BANS_OPER_NEQ 0x11 #define BANS_OPER_MATCH 0x12 #define BANS_OPER_NMATCH 0x13 #define BANS_OPER_GT 0x14 #define BANS_OPER_GTE 0x15 #define BANS_OPER_LT 0x16 #define BANS_OPER_LTE 0x17 #define BANS_OPER_LIM_ (BANS_OPER_LTE + 1) #define BAN_OPERIDX(x) ((x) - BANS_OPER_OFF_) #define BAN_OPERARRSZ (BANS_OPER_LIM_ - BANS_OPER_OFF_) #define ASSERT_BAN_OPER(x) assert((x) >= BANS_OPER_OFF_ && (x) < BANS_OPER_LIM_) #define BANS_ARG_URL 0x18 #define BANS_ARG_OFF_ BANS_ARG_URL #define BANS_ARG_REQHTTP 0x19 #define BANS_ARG_OBJHTTP 0x1a #define BANS_ARG_OBJSTATUS 0x1b #define BANS_ARG_OBJTTL 0x1c #define BANS_ARG_OBJAGE 0x1d #define BANS_ARG_OBJGRACE 0x1e #define BANS_ARG_OBJKEEP 0x1f #define BANS_ARG_LIM (BANS_ARG_OBJKEEP + 1) #define BAN_ARGIDX(x) ((x) - BANS_ARG_OFF_) #define BAN_ARGARRSZ (BANS_ARG_LIM - BANS_ARG_OFF_) #define ASSERT_BAN_ARG(x) assert((x) >= BANS_ARG_OFF_ && (x) < BANS_ARG_LIM) // has an arg1_spec (BANS_FLAG_HTTP at build time) #define BANS_HAS_ARG1_SPEC(arg) \ ((arg) == BANS_ARG_REQHTTP || \ (arg) == BANS_ARG_OBJHTTP) // has an arg2_spec (regex) #define BANS_HAS_ARG2_SPEC(oper) \ ((oper) == BANS_OPER_MATCH || \ (oper) == BANS_OPER_NMATCH) // has an arg2_double (BANS_FLAG_DURATION at build time) #define BANS_HAS_ARG2_DOUBLE(arg) \ ((arg) >= BANS_ARG_OBJTTL && \ (arg) <= BANS_ARG_OBJKEEP) /*--------------------------------------------------------------------*/ struct ban { unsigned magic; #define BAN_MAGIC 0x700b08ea unsigned flags; /* BANS_FLAG_* */ VTAILQ_ENTRY(ban) list; VTAILQ_ENTRY(ban) l_list; int64_t refcount; VTAILQ_HEAD(,objcore) objcore; uint8_t *spec; }; VTAILQ_HEAD(banhead_s,ban); bgthread_t ban_lurker; extern struct lock ban_mtx; extern int ban_shutdown; extern struct 
banhead_s ban_head; extern struct ban * volatile ban_start; extern pthread_cond_t ban_lurker_cond; extern uint64_t bans_persisted_bytes; extern uint64_t bans_persisted_fragmentation; extern const char * const ban_oper[BAN_OPERARRSZ + 1]; void ban_mark_completed(struct ban *); unsigned ban_len(const uint8_t *banspec); void ban_info_new(const uint8_t *ban, unsigned len); void ban_info_drop(const uint8_t *ban, unsigned len); int ban_evaluate(struct worker *wrk, const uint8_t *bs, struct objcore *oc, const struct http *reqhttp, unsigned *tests); vtim_real ban_time(const uint8_t *banspec); int ban_equal(const uint8_t *bs1, const uint8_t *bs2); void BAN_Free(struct ban *b); void ban_kick_lurker(void); varnish-7.5.0/bin/varnishd/cache/cache_ban_build.c000066400000000000000000000247671457605730600220710ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* */ #include "config.h" #include #include "cache_varnishd.h" #include "cache_ban.h" #include "vend.h" #include "vtim.h" #include "vnum.h" void BAN_Build_Init(void); void BAN_Build_Fini(void); struct ban_proto { unsigned magic; #define BAN_PROTO_MAGIC 0xd8adc494 unsigned flags; /* BANS_FLAG_* */ struct vsb *vsb; char *err; }; /*-------------------------------------------------------------------- * Variables we can ban on */ static const struct pvar { const char *name; unsigned flag; uint8_t tag; } pvars[] = { #define PVAR(a, b, c) { (a), (b), (c) }, #include "tbl/ban_vars.h" { 0, 0, 0} }; /* operators allowed per argument (pvar.tag) */ static const unsigned arg_opervalid[BAN_ARGARRSZ + 1] = { #define ARGOPER(arg, mask) [BAN_ARGIDX(arg)] = (mask), #include "tbl/ban_arg_oper.h" [BAN_ARGARRSZ] = 0 }; // init'ed in _Init static const char *arg_operhelp[BAN_ARGARRSZ + 1]; // operators const char * const ban_oper[BAN_OPERARRSZ + 1] = { #define OPER(op, str) [BAN_OPERIDX(op)] = (str), #include "tbl/ban_oper.h" [BAN_OPERARRSZ] = NULL }; /*-------------------------------------------------------------------- */ static char ban_build_err_no_mem[] = "No Memory"; /*-------------------------------------------------------------------- */ struct ban_proto * BAN_Build(void) { struct ban_proto *bp; ALLOC_OBJ(bp, BAN_PROTO_MAGIC); if (bp == NULL) return (bp); bp->vsb = VSB_new_auto(); if (bp->vsb == NULL) { FREE_OBJ(bp); return (NULL); } return (bp); } // TODO: change to (struct ban_proto **) void BAN_Abandon(struct ban_proto *bp) { CHECK_OBJ_NOTNULL(bp, BAN_PROTO_MAGIC); VSB_destroy(&bp->vsb); FREE_OBJ(bp); } /*-------------------------------------------------------------------- */ static void ban_add_lump(const struct ban_proto *bp, const void *p, uint32_t len) { uint8_t buf[PRNDUP(sizeof len)] = { 0xff }; while (VSB_len(bp->vsb) & PALGN) VSB_putc(bp->vsb, buf[0]); vbe32enc(buf, len); VSB_bcat(bp->vsb, buf, sizeof buf); VSB_bcat(bp->vsb, p, len); } /*-------------------------------------------------------------------- */ static const char * ban_error(struct ban_proto *bp, const char *fmt, ...) 
{ va_list ap; CHECK_OBJ_NOTNULL(bp, BAN_PROTO_MAGIC); AN(bp->vsb); /* First error is sticky */ if (bp->err == NULL) { if (fmt == ban_build_err_no_mem) { bp->err = ban_build_err_no_mem; } else { /* Record the error message in the vsb */ VSB_clear(bp->vsb); va_start(ap, fmt); VSB_vprintf(bp->vsb, fmt, ap); va_end(ap); AZ(VSB_finish(bp->vsb)); bp->err = VSB_data(bp->vsb); } } return (bp->err); } /*-------------------------------------------------------------------- * Parse and add a http argument specification * Output something which HTTP_GetHdr understands */ static void ban_parse_http(const struct ban_proto *bp, const char *a1) { int l; l = strlen(a1) + 1; assert(l <= 127); VSB_putc(bp->vsb, (char)l); VSB_cat(bp->vsb, a1); VSB_putc(bp->vsb, ':'); VSB_putc(bp->vsb, '\0'); } /*-------------------------------------------------------------------- * Parse and add a ban test specification */ static const char * ban_parse_regexp(struct ban_proto *bp, const char *a3) { struct vsb vsb[1]; char errbuf[VRE_ERROR_LEN]; int errorcode, erroroffset; size_t sz; vre_t *re, *rex; re = VRE_compile(a3, 0, &errorcode, &erroroffset, 0); if (re == NULL) { AN(VSB_init(vsb, errbuf, sizeof errbuf)); AZ(VRE_error(vsb, errorcode)); AZ(VSB_finish(vsb)); VSB_fini(vsb); return (ban_error(bp, "Regex compile error: %s", errbuf)); } rex = VRE_export(re, &sz); AN(rex); ban_add_lump(bp, rex, sz); VRE_free(&rex); VRE_free(&re); return (0); } static int ban_parse_oper(const char *p) { int i; for (i = 0; i < BAN_OPERARRSZ; i++) { if (!strcmp(p, ban_oper[i])) return (BANS_OPER_OFF_ + i); } return (-1); } /*-------------------------------------------------------------------- * Add a (and'ed) test-condition to a ban */ const char * BAN_AddTest(struct ban_proto *bp, const char *a1, const char *a2, const char *a3) { const struct pvar *pv; double darg; uint64_t dtmp; uint8_t denc[sizeof darg]; int op; CHECK_OBJ_NOTNULL(bp, BAN_PROTO_MAGIC); AN(bp->vsb); AN(a1); AN(a2); AN(a3); if (bp->err != NULL) return (bp->err); for (pv = pvars; pv->name != NULL; pv++) { if (!(pv->flag & BANS_FLAG_HTTP) && !strcmp(a1, pv->name)) break; if ((pv->flag & BANS_FLAG_HTTP) && !strncmp(a1, pv->name, strlen(pv->name))) break; } if (pv->name == NULL) return (ban_error(bp, "Unknown or unsupported field \"%s\"", a1)); bp->flags |= pv->flag; VSB_putc(bp->vsb, pv->tag); if (pv->flag & BANS_FLAG_HTTP) { if (strlen(a1 + strlen(pv->name)) < 1) return (ban_error(bp, "Missing header name: \"%s\"", pv->name)); assert(BANS_HAS_ARG1_SPEC(pv->tag)); ban_parse_http(bp, a1 + strlen(pv->name)); } op = ban_parse_oper(a2); if (op < BANS_OPER_OFF_ || ((1U << BAN_OPERIDX(op)) & arg_opervalid[BAN_ARGIDX(pv->tag)]) == 0) return (ban_error(bp, "expected conditional (%s) got \"%s\"", arg_operhelp[BAN_ARGIDX(pv->tag)], a2)); if ((pv->flag & BANS_FLAG_DURATION) == 0) { assert(! BANS_HAS_ARG2_DOUBLE(pv->tag)); ban_add_lump(bp, a3, strlen(a3) + 1); VSB_putc(bp->vsb, op); if (! 
BANS_HAS_ARG2_SPEC(op)) return (NULL); return (ban_parse_regexp(bp, a3)); } assert(pv->flag & BANS_FLAG_DURATION); assert(BANS_HAS_ARG2_DOUBLE(pv->tag)); darg = VNUM_duration(a3); if (isnan(darg)) { return (ban_error(bp, "expected duration [ms|s|m|h|d|w|y] got \"%s\"", a3)); } assert(sizeof darg == sizeof dtmp); assert(sizeof dtmp == sizeof denc); memcpy(&dtmp, &darg, sizeof dtmp); vbe64enc(denc, dtmp); ban_add_lump(bp, denc, sizeof denc); VSB_putc(bp->vsb, op); return (NULL); } /*-------------------------------------------------------------------- * We maintain ban_start as a pointer to the first element of the list * as a separate variable from the VTAILQ, to avoid depending on the * internals of the VTAILQ macros. We tacitly assume that a pointer * write is always atomic in doing so. * * Returns: * 0: Ban successfully inserted * -1: Ban not inserted due to shutdown in progress. The ban has been * deleted. */ const char * BAN_Commit(struct ban_proto *bp) { struct ban *b, *bi; ssize_t ln; vtim_real t0; uint64_t u; CHECK_OBJ_NOTNULL(bp, BAN_PROTO_MAGIC); AN(bp->vsb); assert(sizeof u == sizeof t0); if (ban_shutdown) return (ban_error(bp, "Shutting down")); AZ(VSB_finish(bp->vsb)); ln = VSB_len(bp->vsb); assert(ln >= 0); ALLOC_OBJ(b, BAN_MAGIC); if (b == NULL) return (ban_error(bp, ban_build_err_no_mem)); VTAILQ_INIT(&b->objcore); b->spec = malloc(ln + BANS_HEAD_LEN); if (b->spec == NULL) { free(b); return (ban_error(bp, ban_build_err_no_mem)); } b->flags = bp->flags; memset(b->spec, 0, BANS_HEAD_LEN); t0 = VTIM_real(); memcpy(&u, &t0, sizeof u); vbe64enc(b->spec + BANS_TIMESTAMP, u); b->spec[BANS_FLAGS] = b->flags & 0xff; memcpy(b->spec + BANS_HEAD_LEN, VSB_data(bp->vsb), ln); ln += BANS_HEAD_LEN; vbe32enc(b->spec + BANS_LENGTH, ln); Lck_Lock(&ban_mtx); if (ban_shutdown) { /* We could have raced a shutdown */ Lck_Unlock(&ban_mtx); BAN_Free(b); return (ban_error(bp, "Shutting down")); } bi = VTAILQ_FIRST(&ban_head); VTAILQ_INSERT_HEAD(&ban_head, b, list); ban_start = b; VSC_C_main->bans++; VSC_C_main->bans_added++; bans_persisted_bytes += ln; VSC_C_main->bans_persisted_bytes = bans_persisted_bytes; if (b->flags & BANS_FLAG_OBJ) VSC_C_main->bans_obj++; if (b->flags & BANS_FLAG_REQ) VSC_C_main->bans_req++; if (bi != NULL) ban_info_new(b->spec, ln); /* Notify stevedores */ if (cache_param->ban_dups) { /* Hunt down duplicates, and mark them as completed */ for (bi = VTAILQ_NEXT(b, list); bi != NULL; bi = VTAILQ_NEXT(bi, list)) { if (!(bi->flags & BANS_FLAG_COMPLETED) && ban_equal(b->spec, bi->spec)) { ban_mark_completed(bi); VSC_C_main->bans_dups++; } } } if (!(b->flags & BANS_FLAG_REQ)) ban_kick_lurker(); Lck_Unlock(&ban_mtx); BAN_Abandon(bp); return (NULL); } static void ban_build_arg_operhelp(struct vsb *vsb, int arg) { unsigned mask; const char *p = NULL, *n = NULL; int i; ASSERT_BAN_ARG(arg); mask = arg_opervalid[BAN_ARGIDX(arg)]; for (i = 0; i < BAN_OPERARRSZ; i++) { if ((mask & (1U << i)) == 0) continue; if (p == NULL) p = ban_oper[i]; else if (n == NULL) n = ban_oper[i]; else { VSB_cat(vsb, p); VSB_cat(vsb, ", "); p = n; n = ban_oper[i]; } } if (n) { AN(p); VSB_cat(vsb, p); VSB_cat(vsb, " or "); VSB_cat(vsb, n); return; } AN(p); VSB_cat(vsb, p); } void BAN_Build_Init(void) { struct vsb *vsb; int i; vsb = VSB_new_auto(); AN(vsb); for (i = BANS_ARG_OFF_; i < BANS_ARG_LIM; i ++) { VSB_clear(vsb); ban_build_arg_operhelp(vsb, i); AZ(VSB_finish(vsb)); arg_operhelp[BAN_ARGIDX(i)] = strdup(VSB_data(vsb)); AN(arg_operhelp[BAN_ARGIDX(i)]); } arg_operhelp[BAN_ARGIDX(i)] = NULL; VSB_destroy(&vsb); } 
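/*--------------------------------------------------------------------
 * Minimal usage sketch of the ban_proto builder above (assuming a
 * CLI-like caller; the actual call sites live elsewhere).  Ownership
 * follows the code above: BAN_Commit() consumes the ban_proto on
 * success, on any error the caller still owns it and must abandon it.
 *
 *	struct ban_proto *bp;
 *	const char *err;
 *
 *	bp = BAN_Build();
 *	if (bp == NULL)
 *		return;				// allocation failure
 *	err = BAN_AddTest(bp, "req.url", "~", "\\.png$");
 *	if (err == NULL)
 *		err = BAN_Commit(bp);		// bp consumed on success
 *	if (err != NULL)
 *		BAN_Abandon(bp);		// report err, drop bp
 */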
void BAN_Build_Fini(void) { int i; for (i = 0; i < BAN_ARGARRSZ; i++) free(TRUST_ME(arg_operhelp[i])); } varnish-7.5.0/bin/varnishd/cache/cache_ban_lurker.c000066400000000000000000000264371457605730600222720ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include "cache_varnishd.h" #include "cache_ban.h" #include "cache_objhead.h" #include "vtim.h" static struct objcore oc_mark_cnt = { .magic = OBJCORE_MAGIC, }; static struct objcore oc_mark_end = { .magic = OBJCORE_MAGIC, }; static unsigned ban_batch; static unsigned ban_generation; pthread_cond_t ban_lurker_cond; void ban_kick_lurker(void) { Lck_AssertHeld(&ban_mtx); ban_generation++; PTOK(pthread_cond_signal(&ban_lurker_cond)); } /* * ban_cleantail: clean the tail of the ban list up to the first ban which is * still referenced. 
For already completed bans, we update statistics * accordingly, but otherwise just skip the completion step and remove directly * * if an obans list is passed, we clean its tail as well */ static void ban_cleantail(struct banhead_s *obans) { struct ban *b, *bt; struct banhead_s freelist = VTAILQ_HEAD_INITIALIZER(freelist); /* handle the zero-length tail unprotected */ if (VTAILQ_LAST(&ban_head, banhead_s) == VTAILQ_FIRST(&ban_head)) return; Lck_Lock(&ban_mtx); do { b = VTAILQ_LAST(&ban_head, banhead_s); if (b != VTAILQ_FIRST(&ban_head) && b->refcount == 0) { assert(VTAILQ_EMPTY(&b->objcore)); if (b->flags & BANS_FLAG_COMPLETED) VSC_C_main->bans_completed--; if (b->flags & BANS_FLAG_OBJ) VSC_C_main->bans_obj--; if (b->flags & BANS_FLAG_REQ) VSC_C_main->bans_req--; VSC_C_main->bans--; VSC_C_main->bans_deleted++; VTAILQ_REMOVE(&ban_head, b, list); VTAILQ_INSERT_TAIL(&freelist, b, list); bans_persisted_fragmentation += ban_len(b->spec); VSC_C_main->bans_persisted_fragmentation = bans_persisted_fragmentation; ban_info_drop(b->spec, ban_len(b->spec)); } else { b = NULL; } } while (b != NULL); Lck_Unlock(&ban_mtx); /* oban order is head to tail, freelist tail to head */ if (obans != NULL) bt = VTAILQ_LAST(obans, banhead_s); else bt = NULL; if (bt != NULL) { AN(obans); VTAILQ_FOREACH(b, &freelist, list) { if (b != bt) continue; VTAILQ_REMOVE(obans, b, l_list); bt = VTAILQ_LAST(obans, banhead_s); if (bt == NULL) break; } } VTAILQ_FOREACH_SAFE(b, &freelist, list, bt) BAN_Free(b); return; } /*-------------------------------------------------------------------- * Our task here is somewhat tricky: The canonical locking order is * objhead->mtx first, then ban_mtx, because that is the order which * makes most sense in HSH_Lookup(), but we come the other way. * We optimistically try to get them the other way, and get out of * the way if that fails, and retry again later. * * To avoid hammering on contested ocs, we first move those behind a marker * once. When we only have contested ocs left, we stop moving them around and * re-try them in order. */ static struct objcore * ban_lurker_getfirst(struct vsl_log *vsl, struct ban *bt) { struct objhead *oh; struct objcore *oc, *noc; int move_oc = 1; Lck_Lock(&ban_mtx); oc = VTAILQ_FIRST(&bt->objcore); while (1) { CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); if (oc == &oc_mark_cnt) { if (VTAILQ_NEXT(oc, ban_list) == &oc_mark_end) { /* done with this ban's oc list */ VTAILQ_REMOVE(&bt->objcore, &oc_mark_cnt, ban_list); VTAILQ_REMOVE(&bt->objcore, &oc_mark_end, ban_list); oc = NULL; break; } oc = VTAILQ_NEXT(oc, ban_list); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); move_oc = 0; } else if (oc == &oc_mark_end) { assert(move_oc == 0); /* hold off to give lookup a chance and reiterate */ VSC_C_main->bans_lurker_contention++; Lck_Unlock(&ban_mtx); VSL_Flush(vsl, 0); VTIM_sleep(cache_param->ban_lurker_holdoff); Lck_Lock(&ban_mtx); oc = VTAILQ_FIRST(&bt->objcore); assert(oc == &oc_mark_cnt); continue; } assert(oc != &oc_mark_cnt); assert(oc != &oc_mark_end); oh = oc->objhead; CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); if (!Lck_Trylock(&oh->mtx)) { if (oc->flags & OC_F_BUSY) { Lck_Unlock(&oh->mtx); } else if (oc->refcnt == 0 || oc->flags & (OC_F_DYING | OC_F_FAILED)) { /* * We seize the opportunity to remove * the object completely off the ban * list, now that we have both the oh * and ban mutexes. 
*/ noc = VTAILQ_NEXT(oc, ban_list); VTAILQ_REMOVE(&bt->objcore, oc, ban_list); oc->ban = NULL; bt->refcount--; Lck_Unlock(&oh->mtx); oc = noc; continue; } else { /* * We got the lock, and the oc is not being * dismantled under our feet - grab a ref */ AZ(oc->flags & OC_F_BUSY); oc->refcnt += 1; VTAILQ_REMOVE(&bt->objcore, oc, ban_list); VTAILQ_INSERT_TAIL(&bt->objcore, oc, ban_list); Lck_Unlock(&oh->mtx); break; } } noc = VTAILQ_NEXT(oc, ban_list); if (move_oc) { /* contested ocs go between the two markers */ VTAILQ_REMOVE(&bt->objcore, oc, ban_list); VTAILQ_INSERT_BEFORE(&oc_mark_end, oc, ban_list); } oc = noc; } Lck_Unlock(&ban_mtx); return (oc); } static void ban_lurker_test_ban(struct worker *wrk, struct vsl_log *vsl, struct ban *bt, struct banhead_s *obans, struct ban *bd, int kill) { struct ban *bl, *bln; struct objcore *oc; unsigned tests; int i; uint64_t tested = 0, tested_tests = 0, lok = 0, lokc = 0; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); /* * First see if there is anything to do, and if so, insert markers */ Lck_Lock(&ban_mtx); oc = VTAILQ_FIRST(&bt->objcore); if (oc != NULL) { VTAILQ_INSERT_TAIL(&bt->objcore, &oc_mark_cnt, ban_list); VTAILQ_INSERT_TAIL(&bt->objcore, &oc_mark_end, ban_list); } Lck_Unlock(&ban_mtx); if (oc == NULL) return; while (1) { if (++ban_batch > cache_param->ban_lurker_batch) { VTIM_sleep(cache_param->ban_lurker_sleep); ban_batch = 0; } oc = ban_lurker_getfirst(vsl, bt); if (oc == NULL) { if (tested == 0 && lokc == 0) { AZ(tested_tests); AZ(lok); return; } Lck_Lock(&ban_mtx); VSC_C_main->bans_lurker_tested += tested; VSC_C_main->bans_lurker_tests_tested += tested_tests; VSC_C_main->bans_lurker_obj_killed += lok; VSC_C_main->bans_lurker_obj_killed_cutoff += lokc; Lck_Unlock(&ban_mtx); return; } i = 0; VTAILQ_FOREACH_REVERSE_SAFE(bl, obans, banhead_s, l_list, bln) { if (oc->ban != bt) { /* * HSH_Lookup() grabbed this oc, killed * it or tested it to top. We're done. */ break; } if (bl->flags & BANS_FLAG_COMPLETED) { /* Ban was overtaken by new (dup) ban */ VTAILQ_REMOVE(obans, bl, l_list); continue; } if (kill == 1) i = 1; else { AZ(bl->flags & BANS_FLAG_REQ); tests = 0; i = ban_evaluate(wrk, bl->spec, oc, NULL, &tests); tested++; tested_tests += tests; } if (i) { if (kill) { VSLb(vsl, SLT_ExpBan, "%ju killed for lurker cutoff", VXID(ObjGetXID(wrk, oc))); lokc++; } else { VSLb(vsl, SLT_ExpBan, "%ju banned by lurker", VXID(ObjGetXID(wrk, oc))); lok++; } HSH_Kill(oc); break; } } if (i == 0 && oc->ban == bt) { Lck_Lock(&ban_mtx); VSC_C_main->bans_lurker_tested += tested; VSC_C_main->bans_lurker_tests_tested += tested_tests; VSC_C_main->bans_lurker_obj_killed += lok; VSC_C_main->bans_lurker_obj_killed_cutoff += lokc; tested = tested_tests = lok = lokc = 0; if (oc->ban == bt && bt != bd) { bt->refcount--; VTAILQ_REMOVE(&bt->objcore, oc, ban_list); oc->ban = bd; bd->refcount++; VTAILQ_INSERT_TAIL(&bd->objcore, oc, ban_list); i = 1; } Lck_Unlock(&ban_mtx); if (i) ObjSendEvent(wrk, oc, OEV_BANCHG); } (void)HSH_DerefObjCore(wrk, &oc, 0); } } /*-------------------------------------------------------------------- * Ban lurker thread: * * try to move ocs as far up the ban list as possible (to bd) * * BANS_FLAG_REQ bans act as barriers, for bans further down, ocs get moved to * them. But still all bans up to the initial bd get checked and marked * completed. 
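 *
 * Orientation sketch (informal restatement of the above): BAN_Commit()
 * inserts new bans at the head of ban_head, so "up the ban list" means
 * towards newer bans; ban_cleantail() trims from the tail, the oldest end.
 *
 *	ban_start/head (newest) -> ... -> bd -> ... -> tail (oldest)
 *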
*/ static vtim_dur ban_lurker_work(struct worker *wrk, struct vsl_log *vsl) { struct ban *b, *bd; struct banhead_s obans; vtim_real d; vtim_dur dt, n; unsigned count = 0, cutoff = UINT_MAX; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); dt = 49.62; // Random, non-magic if (cache_param->ban_lurker_sleep == 0) { ban_cleantail(NULL); return (dt); } if (cache_param->ban_cutoff > 0) cutoff = cache_param->ban_cutoff; Lck_Lock(&ban_mtx); b = ban_start; Lck_Unlock(&ban_mtx); d = VTIM_real() - cache_param->ban_lurker_age; bd = NULL; VTAILQ_INIT(&obans); for (; b != NULL; b = VTAILQ_NEXT(b, list), count++) { if (bd != NULL) ban_lurker_test_ban(wrk, vsl, b, &obans, bd, count > cutoff ? 1 : 0); if (b->flags & BANS_FLAG_COMPLETED) continue; if (b->flags & BANS_FLAG_REQ && count <= cutoff) { if (bd != NULL) bd = VTAILQ_NEXT(b, list); continue; } n = ban_time(b->spec) - d; if (n < 0) { VTAILQ_INSERT_TAIL(&obans, b, l_list); if (bd == NULL) bd = b; } else if (n < dt) { dt = n; } } /* * conceptually, all obans are now completed. Remove the tail. * If any bans to be completed remain after the tail is cut, * mark them completed */ ban_cleantail(&obans); if (VTAILQ_FIRST(&obans) == NULL) return (dt); Lck_Lock(&ban_mtx); VTAILQ_FOREACH(b, &obans, l_list) ban_mark_completed(b); Lck_Unlock(&ban_mtx); return (dt); } void * v_matchproto_(bgthread_t) ban_lurker(struct worker *wrk, void *priv) { struct vsl_log vsl; vtim_dur dt; unsigned gen = ban_generation + 1; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AZ(priv); VSL_Setup(&vsl, NULL, 0); while (!ban_shutdown) { dt = ban_lurker_work(wrk, &vsl); if (DO_DEBUG(DBG_LURKER)) VSLb(&vsl, SLT_Debug, "lurker: sleep = %lf", dt); Lck_Lock(&ban_mtx); if (gen == ban_generation) { Pool_Sumstat(wrk); (void)Lck_CondWaitTimeout( &ban_lurker_cond, &ban_mtx, dt); ban_batch = 0; } gen = ban_generation; Lck_Unlock(&ban_mtx); } pthread_exit(0); NEEDLESS(return (NULL)); } varnish-7.5.0/bin/varnishd/cache/cache_busyobj.c000066400000000000000000000115371457605730600216160ustar00rootroot00000000000000/*- * Copyright (c) 2013-2015 Varnish Software AS * All rights reserved. * * Author: Martin Blix Grydeland * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Handle backend connections and backend request structures. 
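 *
 * Rough layout of a busyobj allocation as carved out by VBO_GetBusyObj()
 * below (informal sketch; actual sizes depend on runtime parameters such
 * as http_max_hdr and vsl_buffer):
 *
 *	[struct busyobj][bereq0][bereq][beresp][vsl buf][vfc][ws "bo" .. end]
 *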
* */ #include "config.h" #include #include "cache_varnishd.h" #include "cache_filter.h" #include "cache_objhead.h" static struct mempool *vbopool; /*-------------------------------------------------------------------- */ void VBO_Init(void) { vbopool = MPL_New("busyobj", &cache_param->pool_vbo, &cache_param->workspace_backend); AN(vbopool); } /*-------------------------------------------------------------------- * BusyObj handling */ static struct busyobj * vbo_New(void) { struct busyobj *bo; unsigned sz; bo = MPL_Get(vbopool, &sz); XXXAN(bo); bo->magic = BUSYOBJ_MAGIC; bo->end = (char *)bo + sz; return (bo); } static void vbo_Free(struct busyobj **bop) { struct busyobj *bo; TAKE_OBJ_NOTNULL(bo, bop, BUSYOBJ_MAGIC); AZ(bo->htc); MPL_Free(vbopool, bo); } struct busyobj * VBO_GetBusyObj(const struct worker *wrk, const struct req *req) { struct busyobj *bo; uint16_t nhttp; unsigned sz; char *p; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); bo = vbo_New(); CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); p = (void*)(bo + 1); p = (void*)PRNDUP(p); assert(p < bo->end); nhttp = (uint16_t)cache_param->http_max_hdr; sz = HTTP_estimate(nhttp); bo->bereq0 = HTTP_create(p, nhttp, sz); p += sz; p = (void*)PRNDUP(p); assert(p < bo->end); bo->bereq = HTTP_create(p, nhttp, sz); p += sz; p = (void*)PRNDUP(p); assert(p < bo->end); bo->beresp = HTTP_create(p, nhttp, sz); p += sz; p = (void*)PRNDUP(p); assert(p < bo->end); sz = cache_param->vsl_buffer; VSL_Setup(bo->vsl, p, sz); bo->vsl->wid = VXID_Get(wrk, VSL_BACKENDMARKER); p += sz; p = (void*)PRNDUP(p); assert(p < bo->end); bo->vfc = (void*)p; p += sizeof (*bo->vfc); p = (void*)PRNDUP(p); INIT_OBJ(bo->vfc, VFP_CTX_MAGIC); WS_Init(bo->ws, "bo", p, bo->end - p); bo->do_stream = 1; if (req->client_identity != NULL) { bo->client_identity = WS_Copy(bo->ws, req->client_identity, -1); XXXAN(bo->client_identity); } VRT_Assign_Backend(&bo->director_req, req->director_hint); bo->vcl = req->vcl; VCL_Ref(bo->vcl); bo->t_first = bo->t_prev = NAN; bo->connect_timeout = NAN; bo->first_byte_timeout = NAN; bo->between_bytes_timeout = NAN; memcpy(bo->digest, req->digest, sizeof bo->digest); return (bo); } void VBO_ReleaseBusyObj(struct worker *wrk, struct busyobj **pbo) { struct busyobj *bo; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); TAKE_OBJ_NOTNULL(bo, pbo, BUSYOBJ_MAGIC); CHECK_OBJ_ORNULL(bo->fetch_objcore, OBJCORE_MAGIC); AZ(bo->htc); AZ(bo->stale_oc); VSLb(bo->vsl, SLT_BereqAcct, "%ju %ju %ju %ju %ju %ju", (uintmax_t)bo->acct.bereq_hdrbytes, (uintmax_t)bo->acct.bereq_bodybytes, (uintmax_t)(bo->acct.bereq_hdrbytes + bo->acct.bereq_bodybytes), (uintmax_t)bo->acct.beresp_hdrbytes, (uintmax_t)bo->acct.beresp_bodybytes, (uintmax_t)(bo->acct.beresp_hdrbytes + bo->acct.beresp_bodybytes)); VSL_End(bo->vsl); AZ(bo->htc); if (WS_Overflowed(bo->ws)) wrk->stats->ws_backend_overflow++; if (bo->fetch_objcore != NULL) { (void)HSH_DerefObjCore(wrk, &bo->fetch_objcore, HSH_RUSH_POLICY); } VRT_Assign_Backend(&bo->director_req, NULL); VRT_Assign_Backend(&bo->director_resp, NULL); VCL_Rel(&bo->vcl); #ifdef ENABLE_WORKSPACE_EMULATOR WS_Rollback(bo->ws, 0); #endif memset(&bo->retries, 0, sizeof *bo - offsetof(struct busyobj, retries)); vbo_Free(&bo); } varnish-7.5.0/bin/varnishd/cache/cache_cli.c000066400000000000000000000076371457605730600207160ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. 
* * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Caching process CLI handling. * * We only have one CLI source, the stdin/stdout pipes from the manager * process, but we complicate things by having undocumented commands that * we do not want to show in a plain help, and by having commands that the * manager has already shown in help before asking us. */ #include "config.h" #include "cache_varnishd.h" #include "common/heritage.h" #include "vcli_serve.h" pthread_t cli_thread; static struct lock cli_mtx; static int add_check; static struct VCLS *cache_cls; /* * The CLI commandlist is split in three: * - Commands we get from/share with the manager, we don't show these * in help, as the manager already did that. 
* - Cache process commands, show in help * - Undocumented debug commands, show in undocumented "help -d" */ /*-------------------------------------------------------------------- * Add CLI functions to the appropriate command set */ void CLI_AddFuncs(struct cli_proto *p) { AZ(add_check); Lck_Lock(&cli_mtx); VCLS_AddFunc(cache_cls, 0, p); Lck_Unlock(&cli_mtx); } static void cli_cb_before(const struct cli *cli) { ASSERT_CLI(); VSL(SLT_CLI, NO_VXID, "Rd %s", VSB_data(cli->cmd)); Lck_Lock(&cli_mtx); VCL_Poll(); } static void cli_cb_after(const struct cli *cli) { ASSERT_CLI(); Lck_Unlock(&cli_mtx); VSL(SLT_CLI, NO_VXID, "Wr %03u %zd %s", cli->result, VSB_len(cli->sb), VSB_data(cli->sb)); } void CLI_Run(void) { int i; struct cli *cli; add_check = 1; /* Tell waiting MGT that we are ready to speak CLI */ AZ(VCLI_WriteResult(heritage.cli_fd, CLIS_OK, "Ready")); cli = VCLS_AddFd(cache_cls, heritage.cli_fd, heritage.cli_fd, NULL, NULL); AN(cli); cli->auth = 255; // Non-zero to disable paranoia in vcli_serve do { i = VCLS_Poll(cache_cls, cli, -1); } while (i == 0); VSL(SLT_CLI, NO_VXID, "EOF on CLI connection, worker stops"); } /*--------------------------------------------------------------------*/ static struct cli_proto cli_cmds[] = { { CLICMD_PING, "i", VCLS_func_ping, VCLS_func_ping_json }, { CLICMD_HELP, "i", VCLS_func_help, VCLS_func_help_json }, { NULL } }; /*-------------------------------------------------------------------- * Initialize the CLI subsystem */ void CLI_Init(void) { Lck_New(&cli_mtx, lck_cli); cli_thread = pthread_self(); cache_cls = VCLS_New(heritage.cls); AN(cache_cls); VCLS_SetLimit(cache_cls, &cache_param->cli_limit); VCLS_SetHooks(cache_cls, cli_cb_before, cli_cb_after); CLI_AddFuncs(cli_cmds); } varnish-7.5.0/bin/varnishd/cache/cache_conn_pool.c000066400000000000000000000424431457605730600221270ustar00rootroot00000000000000/*- * Copyright (c) 2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * (TCP|UDS) connection pools. 
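 *
 * Rough life-cycle of a pooled connection (struct pfd), as can be read
 * from the code below (informal; see PFD_STATE_* in cache_conn_pool.h):
 *
 *	USED    - handed out to a worker (fresh from VCP_Open() or recycled)
 *	AVAIL   - idle on the pool's connlist, parked with the waiter
 *	STOLEN  - an idle connection grabbed by VCP_Get() before the waiter
 *	          has released it; VCP_Wait() waits for it to become USED
 *	CLEANUP - scheduled for closing by the waiter
 *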
* */ #include "config.h" #include #include "cache_varnishd.h" #include "vsa.h" #include "vsha256.h" #include "vtcp.h" #include "vus.h" #include "vtim.h" #include "waiter/waiter.h" #include "cache_conn_pool.h" #include "cache_pool.h" struct conn_pool; /*-------------------------------------------------------------------- */ struct pfd { unsigned magic; #define PFD_MAGIC 0x0c5e6593 int fd; VTAILQ_ENTRY(pfd) list; VCL_IP addr; uint8_t state; struct waited waited[1]; struct conn_pool *conn_pool; pthread_cond_t *cond; }; /*-------------------------------------------------------------------- */ typedef int cp_open_f(const struct conn_pool *, vtim_dur tmo, VCL_IP *ap); typedef void cp_close_f(struct pfd *); typedef void cp_name_f(const struct pfd *, char *, unsigned, char *, unsigned); struct cp_methods { cp_open_f *open; cp_close_f *close; cp_name_f *local_name; cp_name_f *remote_name; }; struct conn_pool { unsigned magic; #define CONN_POOL_MAGIC 0x85099bc3 const struct cp_methods *methods; struct vrt_endpoint *endpoint; char ident[VSHA256_DIGEST_LENGTH]; VTAILQ_ENTRY(conn_pool) list; int refcnt; struct lock mtx; VTAILQ_HEAD(, pfd) connlist; int n_conn; int n_kill; int n_used; vtim_mono holddown; int holddown_errno; }; static struct lock conn_pools_mtx; static VTAILQ_HEAD(, conn_pool) conn_pools = VTAILQ_HEAD_INITIALIZER(conn_pools); /*-------------------------------------------------------------------- */ unsigned PFD_State(const struct pfd *p) { CHECK_OBJ_NOTNULL(p, PFD_MAGIC); return (p->state); } int * PFD_Fd(struct pfd *p) { CHECK_OBJ_NOTNULL(p, PFD_MAGIC); return (&(p->fd)); } void PFD_LocalName(const struct pfd *p, char *abuf, unsigned alen, char *pbuf, unsigned plen) { CHECK_OBJ_NOTNULL(p, PFD_MAGIC); CHECK_OBJ_NOTNULL(p->conn_pool, CONN_POOL_MAGIC); p->conn_pool->methods->local_name(p, abuf, alen, pbuf, plen); } void PFD_RemoteName(const struct pfd *p, char *abuf, unsigned alen, char *pbuf, unsigned plen) { CHECK_OBJ_NOTNULL(p, PFD_MAGIC); CHECK_OBJ_NOTNULL(p->conn_pool, CONN_POOL_MAGIC); p->conn_pool->methods->remote_name(p, abuf, alen, pbuf, plen); } /*-------------------------------------------------------------------- * Waiter-handler */ static void v_matchproto_(waiter_handle_f) vcp_handle(struct waited *w, enum wait_event ev, vtim_real now) { struct pfd *pfd; struct conn_pool *cp; CHECK_OBJ_NOTNULL(w, WAITED_MAGIC); CAST_OBJ_NOTNULL(pfd, w->priv1, PFD_MAGIC); (void)ev; (void)now; CHECK_OBJ_NOTNULL(pfd->conn_pool, CONN_POOL_MAGIC); cp = pfd->conn_pool; Lck_Lock(&cp->mtx); switch (pfd->state) { case PFD_STATE_STOLEN: pfd->state = PFD_STATE_USED; VTAILQ_REMOVE(&cp->connlist, pfd, list); AN(pfd->cond); PTOK(pthread_cond_signal(pfd->cond)); break; case PFD_STATE_AVAIL: cp->methods->close(pfd); VTAILQ_REMOVE(&cp->connlist, pfd, list); cp->n_conn--; FREE_OBJ(pfd); break; case PFD_STATE_CLEANUP: cp->methods->close(pfd); cp->n_kill--; memset(pfd, 0x11, sizeof *pfd); free(pfd); break; default: WRONG("Wrong pfd state"); } Lck_Unlock(&cp->mtx); } /*-------------------------------------------------------------------- */ void VCP_AddRef(struct conn_pool *cp) { CHECK_OBJ_NOTNULL(cp, CONN_POOL_MAGIC); Lck_Lock(&conn_pools_mtx); assert(cp->refcnt > 0); cp->refcnt++; Lck_Unlock(&conn_pools_mtx); } /*-------------------------------------------------------------------- * Release Conn pool, destroy if last reference. 
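 *
 * Informal pairing sketch (endpoint setup assumed):
 *
 *	struct conn_pool *cp = VCP_Ref(vep, ident);	// take a reference
 *	...
 *	VCP_Rel(&cp);		// drop it; cp is cleared by TAKE_OBJ_NOTNULL
 *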
*/ void VCP_Rel(struct conn_pool **cpp) { struct conn_pool *cp; struct pfd *pfd, *pfd2; TAKE_OBJ_NOTNULL(cp, cpp, CONN_POOL_MAGIC); Lck_Lock(&conn_pools_mtx); assert(cp->refcnt > 0); if (--cp->refcnt > 0) { Lck_Unlock(&conn_pools_mtx); return; } AZ(cp->n_used); VTAILQ_REMOVE(&conn_pools, cp, list); Lck_Unlock(&conn_pools_mtx); Lck_Lock(&cp->mtx); VTAILQ_FOREACH_SAFE(pfd, &cp->connlist, list, pfd2) { VTAILQ_REMOVE(&cp->connlist, pfd, list); cp->n_conn--; assert(pfd->state == PFD_STATE_AVAIL); pfd->state = PFD_STATE_CLEANUP; (void)shutdown(pfd->fd, SHUT_RDWR); cp->n_kill++; } while (cp->n_kill) { Lck_Unlock(&cp->mtx); (void)usleep(20000); Lck_Lock(&cp->mtx); } Lck_Unlock(&cp->mtx); Lck_Delete(&cp->mtx); AZ(cp->n_conn); AZ(cp->n_kill); free(cp->endpoint); FREE_OBJ(cp); } /*-------------------------------------------------------------------- * Recycle a connection. */ void VCP_Recycle(const struct worker *wrk, struct pfd **pfdp) { struct pfd *pfd; struct conn_pool *cp; int i = 0; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); TAKE_OBJ_NOTNULL(pfd, pfdp, PFD_MAGIC); cp = pfd->conn_pool; CHECK_OBJ_NOTNULL(cp, CONN_POOL_MAGIC); assert(pfd->state == PFD_STATE_USED); assert(pfd->fd > 0); Lck_Lock(&cp->mtx); cp->n_used--; pfd->waited->priv1 = pfd; pfd->waited->fd = pfd->fd; pfd->waited->idle = VTIM_real(); pfd->state = PFD_STATE_AVAIL; pfd->waited->func = vcp_handle; pfd->waited->tmo = cache_param->backend_idle_timeout; if (Wait_Enter(wrk->pool->waiter, pfd->waited)) { cp->methods->close(pfd); memset(pfd, 0x33, sizeof *pfd); free(pfd); // XXX: stats pfd = NULL; } else { VTAILQ_INSERT_HEAD(&cp->connlist, pfd, list); i++; } if (pfd != NULL) cp->n_conn++; Lck_Unlock(&cp->mtx); if (i && DO_DEBUG(DBG_VTC_MODE)) { /* * In varnishtest we do not have the luxury of using * multiple backend connections, so whenever we end up * in the "pending" case, take a short nap to let the * waiter catch up and put the pfd back into circulations. * * In particular ESI:include related tests suffer random * failures without this. * * In normal operation, the only effect is that we will * have N+1 backend connections rather than N, which is * entirely harmless. */ (void)usleep(10000); } } /*-------------------------------------------------------------------- * Open a new connection from pool. 
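 *
 * Minimal caller sketch (timeout and error handling assumed):
 *
 *	VCL_IP ap;
 *	int err;
 *	int fd = VCP_Open(cp, tmo, &ap, &err);
 *	if (fd < 0)
 *		...;	// err holds errno, or 0 while the pool is in holddown
 *	else
 *		...;	// connected; ap is the endpoint address actually used
 *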
*/ int VCP_Open(struct conn_pool *cp, vtim_dur tmo, VCL_IP *ap, int *err) { int r; vtim_mono h; CHECK_OBJ_NOTNULL(cp, CONN_POOL_MAGIC); AN(err); while (cp->holddown > 0) { Lck_Lock(&cp->mtx); if (cp->holddown == 0) { Lck_Unlock(&cp->mtx); break; } if (VTIM_mono() >= cp->holddown) { cp->holddown = 0; Lck_Unlock(&cp->mtx); break; } *err = 0; errno = cp->holddown_errno; Lck_Unlock(&cp->mtx); return (-1); } *err = errno = 0; r = cp->methods->open(cp, tmo, ap); if (r >= 0 && errno == 0 && cp->endpoint->preamble != NULL && cp->endpoint->preamble->len > 0) { if (write(r, cp->endpoint->preamble->blob, cp->endpoint->preamble->len) != cp->endpoint->preamble->len) { *err = errno; closefd(&r); } } else { *err = errno; } if (r >= 0) return (r); h = 0; switch (errno) { case EACCES: case EPERM: h = cache_param->backend_local_error_holddown; break; case EADDRNOTAVAIL: h = cache_param->backend_local_error_holddown; break; case ECONNREFUSED: h = cache_param->backend_remote_error_holddown; break; case ENETUNREACH: h = cache_param->backend_remote_error_holddown; break; default: break; } if (h == 0) return (r); Lck_Lock(&cp->mtx); h += VTIM_mono(); if (cp->holddown == 0 || h < cp->holddown) { cp->holddown = h; cp->holddown_errno = errno; } Lck_Unlock(&cp->mtx); return (r); } /*-------------------------------------------------------------------- * Close a connection. */ void VCP_Close(struct pfd **pfdp) { struct pfd *pfd; struct conn_pool *cp; TAKE_OBJ_NOTNULL(pfd, pfdp, PFD_MAGIC); cp = pfd->conn_pool; CHECK_OBJ_NOTNULL(cp, CONN_POOL_MAGIC); assert(pfd->fd > 0); Lck_Lock(&cp->mtx); assert(pfd->state == PFD_STATE_USED || pfd->state == PFD_STATE_STOLEN); cp->n_used--; if (pfd->state == PFD_STATE_STOLEN) { (void)shutdown(pfd->fd, SHUT_RDWR); VTAILQ_REMOVE(&cp->connlist, pfd, list); pfd->state = PFD_STATE_CLEANUP; cp->n_kill++; } else { assert(pfd->state == PFD_STATE_USED); cp->methods->close(pfd); memset(pfd, 0x44, sizeof *pfd); free(pfd); } Lck_Unlock(&cp->mtx); } /*-------------------------------------------------------------------- * Get a connection, possibly recycled */ struct pfd * VCP_Get(struct conn_pool *cp, vtim_dur tmo, struct worker *wrk, unsigned force_fresh, int *err) { struct pfd *pfd; CHECK_OBJ_NOTNULL(cp, CONN_POOL_MAGIC); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AN(err); *err = 0; Lck_Lock(&cp->mtx); pfd = VTAILQ_FIRST(&cp->connlist); CHECK_OBJ_ORNULL(pfd, PFD_MAGIC); if (force_fresh || pfd == NULL || pfd->state == PFD_STATE_STOLEN) { pfd = NULL; } else { assert(pfd->conn_pool == cp); assert(pfd->state == PFD_STATE_AVAIL); VTAILQ_REMOVE(&cp->connlist, pfd, list); VTAILQ_INSERT_TAIL(&cp->connlist, pfd, list); cp->n_conn--; VSC_C_main->backend_reuse++; pfd->state = PFD_STATE_STOLEN; pfd->cond = &wrk->cond; } cp->n_used++; // Opening mostly works Lck_Unlock(&cp->mtx); if (pfd != NULL) return (pfd); ALLOC_OBJ(pfd, PFD_MAGIC); AN(pfd); INIT_OBJ(pfd->waited, WAITED_MAGIC); pfd->state = PFD_STATE_USED; pfd->conn_pool = cp; pfd->fd = VCP_Open(cp, tmo, &pfd->addr, err); if (pfd->fd < 0) { FREE_OBJ(pfd); Lck_Lock(&cp->mtx); cp->n_used--; // Nope, didn't work after all. 
Lck_Unlock(&cp->mtx); } else VSC_C_main->backend_conn++; return (pfd); } /*-------------------------------------------------------------------- */ int VCP_Wait(struct worker *wrk, struct pfd *pfd, vtim_real when) { struct conn_pool *cp; int r; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(pfd, PFD_MAGIC); cp = pfd->conn_pool; CHECK_OBJ_NOTNULL(cp, CONN_POOL_MAGIC); assert(pfd->cond == &wrk->cond); Lck_Lock(&cp->mtx); while (pfd->state == PFD_STATE_STOLEN) { r = Lck_CondWaitUntil(&wrk->cond, &cp->mtx, when); if (r != 0) { if (r == EINTR) continue; assert(r == ETIMEDOUT); Lck_Unlock(&cp->mtx); return (1); } } assert(pfd->state == PFD_STATE_USED); pfd->cond = NULL; Lck_Unlock(&cp->mtx); return (0); } /*-------------------------------------------------------------------- */ VCL_IP VCP_GetIp(struct pfd *pfd) { CHECK_OBJ_NOTNULL(pfd, PFD_MAGIC); return (pfd->addr); } /*--------------------------------------------------------------------*/ static void vcp_panic_endpoint(struct vsb *vsb, const struct vrt_endpoint *vep) { char h[VTCP_ADDRBUFSIZE]; char p[VTCP_PORTBUFSIZE]; if (PAN_dump_struct(vsb, vep, VRT_ENDPOINT_MAGIC, "vrt_endpoint")) return; if (vep->uds_path) VSB_printf(vsb, "uds_path = %s,\n", vep->uds_path); if (vep->ipv4 && VSA_Sane(vep->ipv4)) { VTCP_name(vep->ipv4, h, sizeof h, p, sizeof p); VSB_printf(vsb, "ipv4 = %s, ", h); VSB_printf(vsb, "port = %s,\n", p); } if (vep->ipv6 && VSA_Sane(vep->ipv6)) { VTCP_name(vep->ipv6, h, sizeof h, p, sizeof p); VSB_printf(vsb, "ipv6 = %s, ", h); VSB_printf(vsb, "port = %s,\n", p); } VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } void VCP_Panic(struct vsb *vsb, struct conn_pool *cp) { if (PAN_dump_struct(vsb, cp, CONN_POOL_MAGIC, "conn_pool")) return; VSB_cat(vsb, "ident = "); VSB_quote(vsb, cp->ident, VSHA256_DIGEST_LENGTH, VSB_QUOTE_HEX); VSB_cat(vsb, ",\n"); vcp_panic_endpoint(vsb, cp->endpoint); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /*--------------------------------------------------------------------*/ void VCP_Init(void) { Lck_New(&conn_pools_mtx, lck_conn_pool); } /**********************************************************************/ static inline int tmo2msec(vtim_dur tmo) { return ((int)floor(tmo * 1000.0)); } static int v_matchproto_(cp_open_f) vtp_open(const struct conn_pool *cp, vtim_dur tmo, VCL_IP *ap) { int s; int msec; CHECK_OBJ_NOTNULL(cp, CONN_POOL_MAGIC); msec = tmo2msec(tmo); if (cache_param->prefer_ipv6) { *ap = cp->endpoint->ipv6; s = VTCP_connect(*ap, msec); if (s >= 0) return (s); } *ap = cp->endpoint->ipv4; s = VTCP_connect(*ap, msec); if (s >= 0) return (s); if (!cache_param->prefer_ipv6) { *ap = cp->endpoint->ipv6; s = VTCP_connect(*ap, msec); } return (s); } /*--------------------------------------------------------------------*/ static void v_matchproto_(cp_close_f) vtp_close(struct pfd *pfd) { CHECK_OBJ_NOTNULL(pfd, PFD_MAGIC); VTCP_close(&pfd->fd); } static void v_matchproto_(cp_name_f) vtp_local_name(const struct pfd *pfd, char *addr, unsigned alen, char *pbuf, unsigned plen) { CHECK_OBJ_NOTNULL(pfd, PFD_MAGIC); VTCP_myname(pfd->fd, addr, alen, pbuf, plen); } static void v_matchproto_(cp_name_f) vtp_remote_name(const struct pfd *pfd, char *addr, unsigned alen, char *pbuf, unsigned plen) { CHECK_OBJ_NOTNULL(pfd, PFD_MAGIC); VTCP_hisname(pfd->fd, addr, alen, pbuf, plen); } static const struct cp_methods vtp_methods = { .open = vtp_open, .close = vtp_close, .local_name = vtp_local_name, .remote_name = vtp_remote_name, }; /*-------------------------------------------------------------------- */ static int 
v_matchproto_(cp_open_f) vus_open(const struct conn_pool *cp, vtim_dur tmo, VCL_IP *ap) { int s; int msec; CHECK_OBJ_NOTNULL(cp, CONN_POOL_MAGIC); AN(cp->endpoint->uds_path); msec = tmo2msec(tmo); *ap = bogo_ip; s = VUS_connect(cp->endpoint->uds_path, msec); return (s); } static void v_matchproto_(cp_name_f) vus_name(const struct pfd *pfd, char *addr, unsigned alen, char *pbuf, unsigned plen) { (void) pfd; assert(alen > strlen("0.0.0.0")); assert(plen > 1); strcpy(addr, "0.0.0.0"); strcpy(pbuf, "0"); } static const struct cp_methods vus_methods = { .open = vus_open, .close = vtp_close, .local_name = vus_name, .remote_name = vus_name, }; /*-------------------------------------------------------------------- * Reference a TCP pool given by {ip4, ip6} pair or a UDS. Create if * it doesn't exist already. */ struct conn_pool * VCP_Ref(const struct vrt_endpoint *vep, const char *ident) { struct conn_pool *cp, *cp2; struct VSHA256Context cx[1]; unsigned char digest[VSHA256_DIGEST_LENGTH]; CHECK_OBJ_NOTNULL(vep, VRT_ENDPOINT_MAGIC); AN(ident); VSHA256_Init(cx); VSHA256_Update(cx, ident, strlen(ident) + 1); // include \0 if (vep->uds_path != NULL) { AZ(vep->ipv4); AZ(vep->ipv6); VSHA256_Update(cx, "UDS", 4); // include \0 VSHA256_Update(cx, vep->uds_path, strlen(vep->uds_path)); } else { assert(vep->ipv4 != NULL || vep->ipv6 != NULL); if (vep->ipv4 != NULL) { assert(VSA_Sane(vep->ipv4)); VSHA256_Update(cx, "IP4", 4); // include \0 VSHA256_Update(cx, vep->ipv4, vsa_suckaddr_len); } if (vep->ipv6 != NULL) { assert(VSA_Sane(vep->ipv6)); VSHA256_Update(cx, "IP6", 4); // include \0 VSHA256_Update(cx, vep->ipv6, vsa_suckaddr_len); } } if (vep->preamble != NULL && vep->preamble->len > 0) { VSHA256_Update(cx, "PRE", 4); // include \0 VSHA256_Update(cx, vep->preamble->blob, vep->preamble->len); } VSHA256_Final(digest, cx); /* * In heavy use of dynamic backends, traversing this list * can become expensive. In order to not do so twice we * pessimistically create the necessary pool, and discard * it on a hit. (XXX: Consider hash or tree ?) */ ALLOC_OBJ(cp, CONN_POOL_MAGIC); AN(cp); cp->refcnt = 1; cp->holddown = 0; cp->endpoint = VRT_Endpoint_Clone(vep); CHECK_OBJ_NOTNULL(cp->endpoint, VRT_ENDPOINT_MAGIC); memcpy(cp->ident, digest, sizeof cp->ident); if (vep->uds_path != NULL) cp->methods = &vus_methods; else cp->methods = &vtp_methods; Lck_New(&cp->mtx, lck_conn_pool); VTAILQ_INIT(&cp->connlist); CHECK_OBJ_NOTNULL(cp, CONN_POOL_MAGIC); Lck_Lock(&conn_pools_mtx); VTAILQ_FOREACH(cp2, &conn_pools, list) { CHECK_OBJ_NOTNULL(cp2, CONN_POOL_MAGIC); assert(cp2->refcnt > 0); if (!memcmp(digest, cp2->ident, sizeof cp2->ident)) break; } if (cp2 == NULL) VTAILQ_INSERT_HEAD(&conn_pools, cp, list); else cp2->refcnt++; Lck_Unlock(&conn_pools_mtx); if (cp2 == NULL) { CHECK_OBJ_NOTNULL(cp, CONN_POOL_MAGIC); return (cp); } Lck_Delete(&cp->mtx); AZ(cp->n_conn); AZ(cp->n_kill); FREE_OBJ(cp->endpoint); FREE_OBJ(cp); CHECK_OBJ_NOTNULL(cp2, CONN_POOL_MAGIC); return (cp2); } varnish-7.5.0/bin/varnishd/cache/cache_conn_pool.h000066400000000000000000000066501457605730600221340ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Outgoing TCP|UDS connection pools * */ struct conn_pool; struct pfd; #define PFD_STATE_AVAIL (1<<0) #define PFD_STATE_USED (1<<1) #define PFD_STATE_STOLEN (1<<2) #define PFD_STATE_CLEANUP (1<<3) /*--------------------------------------------------------------------- */ unsigned PFD_State(const struct pfd *); int *PFD_Fd(struct pfd *); void PFD_LocalName(const struct pfd *, char *, unsigned, char *, unsigned); void PFD_RemoteName(const struct pfd *, char *, unsigned, char *, unsigned); /*--------------------------------------------------------------------- * Prototypes */ struct conn_pool *VCP_Ref(const struct vrt_endpoint *, const char *ident); /* * Get a reference to a connection pool. Either one or both of ipv4 or * ipv6 arg must be non-NULL, or uds must be non-NULL. If recycling * is to be used, the ident pointer distinguishes the pool from * other pools with same {ipv4, ipv6, uds}. */ void VCP_AddRef(struct conn_pool *); /* * Get another reference to an already referenced connection pool. */ void VCP_Rel(struct conn_pool **); /* * Release reference to a connection pool. When last reference * is released the pool is destroyed and all cached connections * closed. */ int VCP_Open(struct conn_pool *, vtim_dur tmo, VCL_IP *, int*); /* * Open a new connection and return the address used. * errno will be returned in the last argument. */ void VCP_Close(struct pfd **); /* * Close a connection. */ void VCP_Recycle(const struct worker *, struct pfd **); /* * Recycle an open connection. */ struct pfd *VCP_Get(struct conn_pool *, vtim_dur tmo, struct worker *, unsigned force_fresh, int *err); /* * Get a (possibly) recycled connection. * errno will be stored in err */ int VCP_Wait(struct worker *, struct pfd *, vtim_real tmo); /* * If the connection was recycled (state != VCP_STATE_USED) call this * function before attempting to receive on the connection. */ VCL_IP VCP_GetIp(struct pfd *); varnish-7.5.0/bin/varnishd/cache/cache_deliver_proc.c000066400000000000000000000160271457605730600226150ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include "cache_varnishd.h" #include "cache_filter.h" #include "cache_objhead.h" void VDP_Panic(struct vsb *vsb, const struct vdp_ctx *vdc) { struct vdp_entry *vde; if (PAN_dump_struct(vsb, vdc, VDP_CTX_MAGIC, "vdc")) return; VSB_printf(vsb, "nxt = %p,\n", vdc->nxt); VSB_printf(vsb, "retval = %d,\n", vdc->retval); if (!VTAILQ_EMPTY(&vdc->vdp)) { VSB_cat(vsb, "filters = {\n"); VSB_indent(vsb, 2); VTAILQ_FOREACH(vde, &vdc->vdp, list) VSB_printf(vsb, "%s = %p { priv = %p }\n", vde->vdp->name, vde, vde->priv); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /* * Ensure that transports have called VDP_Close() * to avoid leaks in VDPs */ void VDP_Fini(const struct vdp_ctx *vdc) { assert(VTAILQ_EMPTY(&vdc->vdp)); } void VDP_Init(struct vdp_ctx *vdc, struct worker *wrk, struct vsl_log *vsl, struct req *req) { AN(vdc); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AN(vsl); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); INIT_OBJ(vdc, VDP_CTX_MAGIC); VTAILQ_INIT(&vdc->vdp); vdc->wrk = wrk; vdc->vsl = vsl; vdc->req = req; } /* VDP_bytes * * Pushes len bytes at ptr down the delivery processor list. * * This function picks and calls the next delivery processor from the * list. The return value is the return value of the delivery * processor. Upon seeing a non-zero return value, that lowest value * observed is latched in ->retval and all subsequent calls to * VDP_bytes will return that value directly without calling the next * processor. * * VDP_END marks the end of successful processing, it is issued by * VDP_DeliverObj() and may also be sent downstream by processors ending the * stream (for return value != 0) * * VDP_END must at most be received once per processor, so any VDP sending it * downstream must itself not forward it a second time. 
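 *
 * Minimal pass-through shape of a VDP bytes callback (sketch; the
 * vdp_bytes_f typedef and the xyz_ name are illustrative assumptions,
 * the signature matches the ->bytes() call below):
 *
 *	static int v_matchproto_(vdp_bytes_f)
 *	xyz_bytes(struct vdp_ctx *vdc, enum vdp_action act, void **priv,
 *	    const void *ptr, ssize_t len)
 *	{
 *		(void)priv;
 *		return (VDP_bytes(vdc, act, ptr, len));
 *	}
 *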
* * Valid return values (of VDP_bytes and any VDP function): * r < 0: Error, breaks out early on an error condition * r == 0: Continue * r > 0: Stop, breaks out early without error condition */ int VDP_bytes(struct vdp_ctx *vdc, enum vdp_action act, const void *ptr, ssize_t len) { int retval; struct vdp_entry *vdpe; CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); if (vdc->retval) return (vdc->retval); vdpe = vdc->nxt; CHECK_OBJ_NOTNULL(vdpe, VDP_ENTRY_MAGIC); /* at most one VDP_END call */ assert(vdpe->end == VDP_NULL); if (act == VDP_NULL) assert(len > 0); else if (act == VDP_END) vdpe->end = VDP_END; else assert(act == VDP_FLUSH); /* Call the present layer, while pointing to the next layer down */ vdc->nxt = VTAILQ_NEXT(vdpe, list); vdpe->calls++; vdc->bytes_done = len; retval = vdpe->vdp->bytes(vdc, act, &vdpe->priv, ptr, len); vdpe->bytes_in += vdc->bytes_done; if (retval && (vdc->retval == 0 || retval < vdc->retval)) vdc->retval = retval; /* Latch error value */ vdc->nxt = vdpe; return (vdc->retval); } int VDP_Push(VRT_CTX, struct vdp_ctx *vdc, struct ws *ws, const struct vdp *vdp, void *priv) { struct vdp_entry *vdpe; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); AN(ws); AN(vdp); AN(vdp->name); AN(vdp->bytes); if (vdc->retval) return (vdc->retval); if (DO_DEBUG(DBG_PROCESSORS)) VSLb(vdc->vsl, SLT_Debug, "VDP_push(%s)", vdp->name); vdpe = WS_Alloc(ws, sizeof *vdpe); if (vdpe == NULL) { AZ(vdc->retval); vdc->retval = -1; return (vdc->retval); } INIT_OBJ(vdpe, VDP_ENTRY_MAGIC); vdpe->vdp = vdp; vdpe->priv = priv; VTAILQ_INSERT_TAIL(&vdc->vdp, vdpe, list); vdc->nxt = VTAILQ_FIRST(&vdc->vdp); AZ(vdc->retval); if (vdpe->vdp->init == NULL) return (vdc->retval); vdc->retval = vdpe->vdp->init(ctx, vdc, &vdpe->priv, vdpe == vdc->nxt ? vdc->req->objcore : NULL); if (vdc->retval > 0) { VTAILQ_REMOVE(&vdc->vdp, vdpe, list); vdc->nxt = VTAILQ_FIRST(&vdc->vdp); vdc->retval = 0; } return (vdc->retval); } uint64_t VDP_Close(struct vdp_ctx *vdc, struct objcore *oc, struct boc *boc) { struct vdp_entry *vdpe; uint64_t rv = 0; CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vdc->wrk, WORKER_MAGIC); CHECK_OBJ_ORNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_ORNULL(boc, BOC_MAGIC); while (!VTAILQ_EMPTY(&vdc->vdp)) { vdpe = VTAILQ_FIRST(&vdc->vdp); rv = vdpe->bytes_in; VSLb(vdc->vsl, SLT_VdpAcct, "%s %ju %ju", vdpe->vdp->name, (uintmax_t)vdpe->calls, (uintmax_t)rv); if (vdc->retval >= 0) AN(vdpe); if (vdpe != NULL) { CHECK_OBJ(vdpe, VDP_ENTRY_MAGIC); if (vdpe->vdp->fini != NULL) AZ(vdpe->vdp->fini(vdc, &vdpe->priv)); AZ(vdpe->priv); VTAILQ_REMOVE(&vdc->vdp, vdpe, list); } vdc->nxt = VTAILQ_FIRST(&vdc->vdp); #ifdef VDP_PEDANTIC_ARMED // enable when we are confident to get VDP_END right if (vdc->nxt == NULL && vdc->retval >= 0) assert(vdpe->end == VDP_END); #endif } if (oc != NULL) HSH_Cancel(vdc->wrk, oc, boc); return (rv); } /*--------------------------------------------------------------------*/ static int v_matchproto_(objiterate_f) vdp_objiterate(void *priv, unsigned flush, const void *ptr, ssize_t len) { enum vdp_action act; if (flush == 0) act = VDP_NULL; else if ((flush & OBJ_ITER_END) != 0) act = VDP_END; else act = VDP_FLUSH; return (VDP_bytes(priv, act, ptr, len)); } int VDP_DeliverObj(struct vdp_ctx *vdc, struct objcore *oc) { int r, final; CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(vdc->wrk, WORKER_MAGIC); AN(vdc->vsl); vdc->req = NULL; final = oc->flags & (OC_F_PRIVATE | OC_F_HFM | OC_F_HFP) ? 
1 : 0; r = ObjIterate(vdc->wrk, oc, vdc, vdp_objiterate, final); if (r < 0) return (r); return (0); } varnish-7.5.0/bin/varnishd/cache/cache_director.c000066400000000000000000000320211457605730600217430ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Abstract director API * * The abstract director API does not know how we talk to the backend or * if there even is one in the usual meaning of the word. 
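 *
 * Concrete directors plug in through d->vdir->methods.  Layered directors
 * (for example a fallback in front of a round-robin from vmod_directors)
 * provide a resolve method, and VRT_DirectorResolve() below walks that
 * chain until it reaches a director without one, which is then used as the
 * actual backend:
 *
 *	fallback.resolve() -> round_robin.resolve() -> backend (no resolve)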
* */ #include "config.h" #include "cache_varnishd.h" #include "cache_director.h" #include "vcli_serve.h" #include "vte.h" #include "vtim.h" /* -------------------------------------------------------------------*/ struct vdi_ahealth { const char *name; int health; }; #define VBE_AHEALTH(l,u,h) \ static const struct vdi_ahealth vdi_ah_##l[1] = {{#l,h}}; \ const struct vdi_ahealth * const VDI_AH_##u = vdi_ah_##l; VBE_AHEALTH_LIST #undef VBE_AHEALTH static const struct vdi_ahealth * vdi_str2ahealth(const char *t) { #define VBE_AHEALTH(l,u,h) if (!strcasecmp(t, #l)) return (VDI_AH_##u); VBE_AHEALTH_LIST #undef VBE_AHEALTH return (NULL); } static const char * VDI_Ahealth(const struct director *d) { CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); AN(d->vdir->admin_health); if (d->vdir->admin_health != VDI_AH_AUTO) return (d->vdir->admin_health->name); if (d->vdir->methods->healthy != NULL) return ("probe"); return ("healthy"); } /* Resolve director --------------------------------------------------*/ VCL_BACKEND VRT_DirectorResolve(VRT_CTX, VCL_BACKEND d) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); while (d != NULL) { CHECK_OBJ(d, DIRECTOR_MAGIC); AN(d->vdir); if (d->vdir->methods->resolve == NULL) break; d = d->vdir->methods->resolve(ctx, d); } if (d == NULL) return (NULL); CHECK_OBJ(d, DIRECTOR_MAGIC); AN(d->vdir); return (d); } static VCL_BACKEND VDI_Resolve(VRT_CTX) { VCL_BACKEND d; struct busyobj *bo; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); bo = ctx->bo; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); if (bo->director_req == NULL) { VSLb(bo->vsl, SLT_FetchError, "No backend"); return (NULL); } CHECK_OBJ(bo->director_req, DIRECTOR_MAGIC); d = VRT_DirectorResolve(ctx, bo->director_req); if (d != NULL) return (d); VSLb(bo->vsl, SLT_FetchError, "Director %s returned no backend", bo->director_req->vcl_name); return (NULL); } /* Get a set of response headers -------------------------------------*/ int VDI_GetHdr(struct busyobj *bo) { const struct director *d; struct vrt_ctx ctx[1]; int i = -1; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); INIT_OBJ(ctx, VRT_CTX_MAGIC); VCL_Bo2Ctx(ctx, bo); d = VDI_Resolve(ctx); if (d != NULL) { VRT_Assign_Backend(&bo->director_resp, d); AN(d->vdir->methods->gethdrs); bo->director_state = DIR_S_HDRS; i = d->vdir->methods->gethdrs(ctx, d); } if (i) bo->director_state = DIR_S_NULL; return (i); } /* Get IP number (if any ) -------------------------------------------*/ VCL_IP VDI_GetIP(struct busyobj *bo) { const struct director *d; struct vrt_ctx ctx[1]; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); INIT_OBJ(ctx, VRT_CTX_MAGIC); VCL_Bo2Ctx(ctx, bo); d = bo->director_resp; CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); assert(bo->director_state == DIR_S_HDRS || bo->director_state == DIR_S_BODY); AZ(d->vdir->methods->resolve); if (d->vdir->methods->getip == NULL) return (NULL); return (d->vdir->methods->getip(ctx, d)); } /* Finish fetch ------------------------------------------------------*/ void VDI_Finish(struct busyobj *bo) { const struct director *d; struct vrt_ctx ctx[1]; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); INIT_OBJ(ctx, VRT_CTX_MAGIC); VCL_Bo2Ctx(ctx, bo); d = bo->director_resp; CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); AZ(d->vdir->methods->resolve); AN(d->vdir->methods->finish); assert(bo->director_state != DIR_S_NULL); d->vdir->methods->finish(ctx, d); bo->director_state = DIR_S_NULL; } /* Get a connection --------------------------------------------------*/ stream_close_t VDI_Http1Pipe(struct req *req, struct busyobj *bo) { const struct director *d; struct vrt_ctx ctx[1]; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); 
CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); INIT_OBJ(ctx, VRT_CTX_MAGIC); VCL_Bo2Ctx(ctx, bo); VCL_Req2Ctx(ctx, req); d = VDI_Resolve(ctx); if (d == NULL || d->vdir->methods->http1pipe == NULL) { VSLb(bo->vsl, SLT_VCL_Error, "Backend does not support pipe"); return (SC_TX_ERROR); } VRT_Assign_Backend(&bo->director_resp, d); return (d->vdir->methods->http1pipe(ctx, d)); } /* Check health -------------------------------------------------------- * * If director has no healthy method, we just assume it is healthy. */ /*-------------------------------------------------------------------- * Test if backend is healthy and report when that last changed */ VCL_BOOL VRT_Healthy(VRT_CTX, VCL_BACKEND d, VCL_TIME *changed) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (d == NULL) return (0); CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); if (d->vdir->admin_health->health >= 0) { if (changed != NULL) *changed = d->vdir->health_changed; return (d->vdir->admin_health->health); } if (d->vdir->methods->healthy == NULL) { if (changed != NULL) *changed = d->vdir->health_changed; return (1); } return (d->vdir->methods->healthy(ctx, d, changed)); } /*-------------------------------------------------------------------- * Update health_changed. */ VCL_VOID VRT_SetChanged(VCL_BACKEND d, VCL_TIME changed) { CHECK_OBJ_ORNULL(d, DIRECTOR_MAGIC); if (d != NULL && changed > d->vdir->health_changed) d->vdir->health_changed = changed; } /* Send Event ---------------------------------------------------------- */ void VDI_Event(const struct director *d, enum vcl_event_e ev) { CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); if (d->vdir->methods->event != NULL) d->vdir->methods->event(d, ev); } /* Dump panic info ----------------------------------------------------- */ void VDI_Panic(const struct director *d, struct vsb *vsb, const char *nm) { if (d == NULL) return; VSB_printf(vsb, "%s = %p {\n", nm, d); VSB_indent(vsb, 2); VSB_printf(vsb, "cli_name = %s,\n", d->vdir->cli_name); VSB_printf(vsb, "admin_health = %s, changed = %f,\n", VDI_Ahealth(d), d->vdir->health_changed); VSB_printf(vsb, "type = %s {\n", d->vdir->methods->type); VSB_indent(vsb, 2); if (d->vdir->methods->panic != NULL) d->vdir->methods->panic(d, vsb); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /*---------------------------------------------------------------------*/ struct list_args { unsigned magic; #define LIST_ARGS_MAGIC 0x7e7cefeb int p; int j; const char *jsep; struct vsb *vsb; struct vte *vte; }; static const char * cli_health(VRT_CTX, const struct director *d) { VCL_BOOL healthy = VRT_Healthy(ctx, d, NULL); return (healthy ? 
"healthy" : "sick"); } static int v_matchproto_(vcl_be_func) do_list(struct cli *cli, struct director *d, void *priv) { char time_str[VTIM_FORMAT_SIZE]; struct list_args *la; struct vrt_ctx *ctx; AN(cli); CAST_OBJ_NOTNULL(la, priv, LIST_ARGS_MAGIC); AN(la->vsb); AN(la->vte); CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); if (d->vdir->admin_health == VDI_AH_DELETED) return (0); ctx = VCL_Get_CliCtx(0); VTE_printf(la->vte, "%s\t%s\t", d->vdir->cli_name, VDI_Ahealth(d)); if (d->vdir->methods->list != NULL) { VSB_clear(la->vsb); d->vdir->methods->list(ctx, d, la->vsb, 0, 0); AZ(VSB_finish(la->vsb)); VTE_cat(la->vte, VSB_data(la->vsb)); } else if (d->vdir->methods->healthy != NULL) VTE_printf(la->vte, "0/0\t%s", cli_health(ctx, d)); else VTE_cat(la->vte, "0/0\thealthy"); VTIM_format(d->vdir->health_changed, time_str); VTE_printf(la->vte, "\t%s", time_str); if (la->p && d->vdir->methods->list != NULL) { VSB_clear(la->vsb); d->vdir->methods->list(ctx, d, la->vsb, la->p, 0); AZ(VSB_finish(la->vsb)); VTE_cat(la->vte, VSB_data(la->vsb)); } VTE_cat(la->vte, "\n"); AZ(VCL_Rel_CliCtx(&ctx)); AZ(ctx); return (0); } static int v_matchproto_(vcl_be_func) do_list_json(struct cli *cli, struct director *d, void *priv) { struct list_args *la; struct vrt_ctx *ctx; CAST_OBJ_NOTNULL(la, priv, LIST_ARGS_MAGIC); CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); if (d->vdir->admin_health == VDI_AH_DELETED) return (0); ctx = VCL_Get_CliCtx(0); VCLI_Out(cli, "%s", la->jsep); la->jsep = ",\n"; VCLI_JSON_str(cli, d->vdir->cli_name); VCLI_Out(cli, ": {\n"); VSB_indent(cli->sb, 2); VCLI_Out(cli, "\"type\": \"%s\",\n", d->vdir->methods->type); VCLI_Out(cli, "\"admin_health\": \"%s\",\n", VDI_Ahealth(d)); VCLI_Out(cli, "\"probe_message\": "); if (d->vdir->methods->list != NULL) d->vdir->methods->list(ctx, d, cli->sb, 0, 1); else if (d->vdir->methods->healthy != NULL) VCLI_Out(cli, "[0, 0, \"%s\"]", cli_health(ctx, d)); else VCLI_Out(cli, "[0, 0, \"healthy\"]"); VCLI_Out(cli, ",\n"); if (la->p && d->vdir->methods->list != NULL) { VCLI_Out(cli, "\"probe_details\": "); d->vdir->methods->list(ctx, d, cli->sb, la->p, 1); } VCLI_Out(cli, "\"last_change\": %.3f\n", d->vdir->health_changed); VSB_indent(cli->sb, -2); VCLI_Out(cli, "}"); AZ(VCL_Rel_CliCtx(&ctx)); AZ(ctx); return (0); } static void v_matchproto_(cli_func_t) cli_backend_list(struct cli *cli, const char * const *av, void *priv) { const char *p; struct list_args la[1]; int i; (void)priv; ASSERT_CLI(); INIT_OBJ(la, LIST_ARGS_MAGIC); la->jsep = ""; for (i = 2; av[i] != NULL && av[i][0] == '-'; i++) { for(p = av[i] + 1; *p; p++) { switch(*p) { case 'j': la->j = 1; break; case 'p': la->p = !la->p; break; default: VCLI_Out(cli, "Invalid flag %c", *p); VCLI_SetResult(cli, CLIS_PARAM); return; } } } if (av[i] != NULL && av[i+1] != NULL) { VCLI_Out(cli, "Too many arguments"); VCLI_SetResult(cli, CLIS_PARAM); return; } if (la->j) { VCLI_JSON_begin(cli, 3, av); VCLI_Out(cli, ",\n"); VCLI_Out(cli, "{\n"); VSB_indent(cli->sb, 2); (void)VCL_IterDirector(cli, av[i], do_list_json, la); VSB_indent(cli->sb, -2); VCLI_Out(cli, "\n"); VCLI_Out(cli, "}"); VCLI_JSON_end(cli); } else { la->vsb = VSB_new_auto(); AN(la->vsb); la->vte = VTE_new(5, 80); AN(la->vte); VTE_printf(la->vte, "%s\t%s\t%s\t%s\t%s\n", "Backend name", "Admin", "Probe", "Health", "Last change"); (void)VCL_IterDirector(cli, av[i], do_list, la); AZ(VTE_finish(la->vte)); AZ(VTE_format(la->vte, VCLI_VTE_format, cli)); VTE_destroy(&la->vte); VSB_destroy(&la->vsb); } } /*---------------------------------------------------------------------*/ struct 
set_health { unsigned magic; #define SET_HEALTH_MAGIC 0x0c46b9fb const struct vdi_ahealth *ah; }; static int v_matchproto_(vcl_be_func) do_set_health(struct cli *cli, struct director *d, void *priv) { struct set_health *sh; (void)cli; CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); CAST_OBJ_NOTNULL(sh, priv, SET_HEALTH_MAGIC); if (d->vdir->admin_health == VDI_AH_DELETED) return (0); if (d->vdir->admin_health != sh->ah) { d->vdir->health_changed = VTIM_real(); d->vdir->admin_health = sh->ah; } return (0); } static void v_matchproto_() cli_backend_set_health(struct cli *cli, const char * const *av, void *priv) { int n; struct set_health sh[1]; (void)av; (void)priv; ASSERT_CLI(); AN(av[2]); AN(av[3]); INIT_OBJ(sh, SET_HEALTH_MAGIC); sh->ah = vdi_str2ahealth(av[3]); if (sh->ah == NULL || sh->ah == VDI_AH_DELETED) { VCLI_Out(cli, "Invalid state %s", av[3]); VCLI_SetResult(cli, CLIS_PARAM); return; } n = VCL_IterDirector(cli, av[2], do_set_health, sh); if (n == 0) { VCLI_Out(cli, "No Backends matches"); VCLI_SetResult(cli, CLIS_PARAM); } } /*---------------------------------------------------------------------*/ static struct cli_proto backend_cmds[] = { { CLICMD_BACKEND_LIST, "", cli_backend_list, cli_backend_list }, { CLICMD_BACKEND_SET_HEALTH, "", cli_backend_set_health }, { NULL } }; /*---------------------------------------------------------------------*/ void VDI_Init(void) { CLI_AddFuncs(backend_cmds); } varnish-7.5.0/bin/varnishd/cache/cache_director.h000066400000000000000000000043041457605730600217530ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2018 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * This is the private implementation of directors. * You are not supposed to need anything here. 
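 *
 * The VBE_AHEALTH_LIST X-macro below enumerates the admin-health states;
 * each user of the list supplies its own VBE_AHEALTH(l,u,h) definition.
 * As an illustration, the "healthy" entry expands in cache_director.c to
 * roughly:
 *
 *	static const struct vdi_ahealth vdi_ah_healthy[1] = {{"healthy", 1}};
 *	const struct vdi_ahealth * const VDI_AH_HEALTHY = vdi_ah_healthy;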
* */ struct vcldir { unsigned magic; #define VCLDIR_MAGIC 0xbf726c7d int refcnt; unsigned flags; #define VDIR_FLG_NOREFCNT 1 struct lock dlck; struct director *dir; struct vcl *vcl; const struct vdi_methods *methods; VTAILQ_ENTRY(vcldir) list; const struct vdi_ahealth *admin_health; vtim_real health_changed; char *cli_name; }; #define VBE_AHEALTH_LIST \ VBE_AHEALTH(healthy, HEALTHY, 1) \ VBE_AHEALTH(sick, SICK, 0) \ VBE_AHEALTH(auto, AUTO, -1) \ VBE_AHEALTH(deleted, DELETED, 0) #define VBE_AHEALTH(l,u,h) extern const struct vdi_ahealth * const VDI_AH_##u; VBE_AHEALTH_LIST #undef VBE_AHEALTH varnish-7.5.0/bin/varnishd/cache/cache_esi.h000066400000000000000000000040401457605730600207150ustar00rootroot00000000000000/*- * Copyright (c) 2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #define VEC_GZ (0x21) #define VEC_V1 (0x40 + 1) #define VEC_V2 (0x40 + 2) #define VEC_V8 (0x40 + 8) #define VEC_C1 (0x50 + 1) #define VEC_C2 (0x50 + 2) #define VEC_C8 (0x50 + 8) #define VEC_S1 (0x60 + 1) #define VEC_S2 (0x60 + 2) #define VEC_S8 (0x60 + 8) #define VEC_IA (0x40 + 9) #define VEC_IC (0x60 + 9) typedef ssize_t vep_callback_t(struct vfp_ctx *, void *priv, ssize_t l, enum vgz_flag flg); struct vep_state *VEP_Init(struct vfp_ctx *vc, const struct http *req, vep_callback_t *cb, void *cb_priv); void VEP_Parse(struct vep_state *, const char *p, size_t l); struct vsb *VEP_Finish(struct vep_state *); varnish-7.5.0/bin/varnishd/cache/cache_esi_deliver.c000066400000000000000000000536651457605730600224430ustar00rootroot00000000000000/*- * Copyright (c) 2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * VED - Varnish Esi Delivery */ #include "config.h" #include "cache_varnishd.h" #include #include "cache_transport.h" #include "cache_filter.h" #include "cache_vgz.h" #include "vct.h" #include "vtim.h" #include "cache_esi.h" #include "vend.h" #include "vgz.h" static vtr_deliver_f ved_deliver; static vtr_reembark_f ved_reembark; static const uint8_t gzip_hdr[] = { 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x03 }; struct ecx { unsigned magic; #define ECX_MAGIC 0x0b0f9163 const uint8_t *p; const uint8_t *e; int state; ssize_t l; int isgzip; int woken; int abrt; struct req *preq; struct ecx *pecx; ssize_t l_crc; uint32_t crc; }; static int v_matchproto_(vtr_minimal_response_f) ved_minimal_response(struct req *req, uint16_t status) { (void)req; (void)status; WRONG("esi:includes should not try minimal responses"); } static const struct transport VED_transport = { .magic = TRANSPORT_MAGIC, .name = "ESI_INCLUDE", .deliver = ved_deliver, .reembark = ved_reembark, .minimal_response = ved_minimal_response, }; /*--------------------------------------------------------------------*/ static void v_matchproto_(vtr_reembark_f) ved_reembark(struct worker *wrk, struct req *req) { struct ecx *ecx; (void)wrk; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CAST_OBJ_NOTNULL(ecx, req->transport_priv, ECX_MAGIC); Lck_Lock(&req->sp->mtx); ecx->woken = 1; PTOK(pthread_cond_signal(&ecx->preq->wrk->cond)); Lck_Unlock(&req->sp->mtx); } /*--------------------------------------------------------------------*/ static void ved_include(struct req *preq, const char *src, const char *host, struct ecx *ecx) { struct worker *wrk; struct sess *sp; struct req *req; enum req_fsm_nxt s; CHECK_OBJ_NOTNULL(preq, REQ_MAGIC); CHECK_OBJ_NOTNULL(preq->top, REQTOP_MAGIC); sp = preq->sp; CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); CHECK_OBJ_NOTNULL(ecx, ECX_MAGIC); wrk = preq->wrk; if (preq->esi_level >= cache_param->max_esi_depth) { VSLb(preq->vsl, SLT_VCL_Error, "ESI depth limit reached (param max_esi_depth = %u)", cache_param->max_esi_depth); if (ecx->abrt) preq->top->topreq->vdc->retval = -1; return; } req = Req_New(sp); AN(req); THR_SetRequest(req); assert(IS_NO_VXID(req->vsl->wid)); req->vsl->wid = VXID_Get(wrk, VSL_CLIENTMARKER); wrk->stats->esi_req++; req->esi_level = preq->esi_level + 1; VSLb(req->vsl, SLT_Begin, "req %ju esi %u", (uintmax_t)VXID(preq->vsl->wid), req->esi_level); VSLb(preq->vsl, SLT_Link, "req %ju esi %u", (uintmax_t)VXID(req->vsl->wid), req->esi_level); VSLb_ts_req(req, "Start", W_TIM_real(wrk)); memset(req->top, 0, sizeof *req->top); req->top = preq->top; HTTP_Setup(req->http, req->ws, req->vsl, SLT_ReqMethod); HTTP_Dup(req->http, preq->http0); http_SetH(req->http, HTTP_HDR_URL, src); if (host != NULL && *host != '\0') { http_Unset(req->http, H_Host); 
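		/*
		 * The VEC include record stores the replacement as a complete
		 * "Host: ..." header line (see vep_do_include() in
		 * cache_esi_parse.c), so it can be set verbatim here.
		 */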
http_SetHeader(req->http, host); } http_ForceField(req->http, HTTP_HDR_METHOD, "GET"); http_ForceField(req->http, HTTP_HDR_PROTO, "HTTP/1.1"); /* Don't allow conditionals, we can't use a 304 */ http_Unset(req->http, H_If_Modified_Since); http_Unset(req->http, H_If_None_Match); /* Don't allow Range */ http_Unset(req->http, H_Range); /* Set Accept-Encoding according to what we want */ if (ecx->isgzip) http_ForceHeader(req->http, H_Accept_Encoding, "gzip"); else http_Unset(req->http, H_Accept_Encoding); /* Client content already taken care of */ http_Unset(req->http, H_Content_Length); http_Unset(req->http, H_Transfer_Encoding); req->req_body_status = BS_NONE; AZ(req->vcl); AN(req->top); if (req->top->vcl0) req->vcl = req->top->vcl0; else req->vcl = preq->vcl; VCL_Ref(req->vcl); assert(req->req_step == R_STP_TRANSPORT); req->t_req = preq->t_req; req->transport = &VED_transport; req->transport_priv = ecx; VCL_TaskEnter(req->privs); while (1) { CNT_Embark(wrk, req); ecx->woken = 0; s = CNT_Request(req); if (s == REQ_FSM_DONE) break; DSL(DBG_WAITINGLIST, req->vsl->wid, "waiting for ESI (%d)", (int)s); assert(s == REQ_FSM_DISEMBARK); Lck_Lock(&sp->mtx); if (!ecx->woken) (void)Lck_CondWait(&ecx->preq->wrk->cond, &sp->mtx); Lck_Unlock(&sp->mtx); AZ(req->wrk); } VCL_Rel(&req->vcl); req->wrk = NULL; THR_SetRequest(preq); Req_Cleanup(sp, wrk, req); Req_Release(req); } /*--------------------------------------------------------------------*/ //#define Debug(fmt, ...) printf(fmt, __VA_ARGS__) #define Debug(fmt, ...) /**/ static ssize_t ved_decode_len(struct vsl_log *vsl, const uint8_t **pp) { const uint8_t *p; ssize_t l; p = *pp; switch (*p & 15) { case 1: l = p[1]; p += 2; break; case 2: l = vbe16dec(p + 1); p += 3; break; case 8: l = vbe64dec(p + 1); p += 9; break; default: VSLb(vsl, SLT_Error, "ESI-corruption: Illegal Length %d %d\n", *p, (*p & 15)); WRONG("ESI-codes: illegal length"); } *pp = p; assert(l > 0); return (l); } /*--------------------------------------------------------------------- */ static int v_matchproto_(vdp_init_f) ved_vdp_esi_init(VRT_CTX, struct vdp_ctx *vdc, void **priv, struct objcore *oc) { struct ecx *ecx; struct req *req; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); CHECK_OBJ_ORNULL(oc, OBJCORE_MAGIC); if (oc == NULL || !ObjHasAttr(vdc->wrk, oc, OA_ESIDATA)) return (1); req = vdc->req; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); AN(priv); AZ(*priv); ALLOC_OBJ(ecx, ECX_MAGIC); AN(ecx); assert(sizeof gzip_hdr == 10); ecx->preq = req; *priv = ecx; RFC2616_Weaken_Etag(req->resp); req->res_mode |= RES_ESI; if (req->resp_len != 0) req->resp_len = -1; if (req->esi_level > 0) { assert(req->transport == &VED_transport); CAST_OBJ_NOTNULL(ecx->pecx, req->transport_priv, ECX_MAGIC); if (!ecx->pecx->isgzip) ecx->pecx = NULL; } return (0); } static int v_matchproto_(vdp_fini_f) ved_vdp_esi_fini(struct vdp_ctx *vdc, void **priv) { struct ecx *ecx; (void)vdc; TAKE_OBJ_NOTNULL(ecx, priv, ECX_MAGIC); FREE_OBJ(ecx); return (0); } static int v_matchproto_(vdp_bytes_f) ved_vdp_esi_bytes(struct vdp_ctx *vdc, enum vdp_action act, void **priv, const void *ptr, ssize_t len) { const uint8_t *q, *r; ssize_t l = 0; uint32_t icrc = 0; uint8_t tailbuf[8 + 5]; const uint8_t *pp; struct ecx *ecx; int retval = 0; if (act == VDP_END) act = VDP_FLUSH; AN(priv); CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); CAST_OBJ_NOTNULL(ecx, *priv, ECX_MAGIC); pp = ptr; while (1) { switch (ecx->state) { case 0: ecx->p = ObjGetAttr(vdc->wrk, ecx->preq->objcore, OA_ESIDATA, &l); AN(ecx->p); assert(l > 0); 
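		/*
		 * ecx->p now points at the VEC byte-code the ESI parser stored
		 * in OA_ESIDATA (codes in cache_esi.h): an optional leading
		 * VEC_GZ, then a sequence of records.  VEC_V1/V2/V8 give the
		 * length of a verbatim run and VEC_S1/S2/S8 the length of a run
		 * to skip, both counted in the stored body; for gzip'ed objects
		 * each verbatim record is followed by a VEC_C* length and a
		 * CRC32.  VEC_IA/VEC_IC records (onerror abort/continue) carry
		 * the include's host header and URL inline as two NUL-terminated
		 * strings.
		 */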
ecx->e = ecx->p + l; if (*ecx->p == VEC_GZ) { if (ecx->pecx == NULL) retval = VDP_bytes(vdc, VDP_NULL, gzip_hdr, 10); ecx->l_crc = 0; ecx->crc = crc32(0L, Z_NULL, 0); ecx->isgzip = 1; ecx->p++; } ecx->state = 1; break; case 1: if (ecx->p >= ecx->e) { ecx->state = 2; break; } switch (*ecx->p) { case VEC_V1: case VEC_V2: case VEC_V8: ecx->l = ved_decode_len(vdc->vsl, &ecx->p); if (ecx->l < 0) return (-1); if (ecx->isgzip) { assert(*ecx->p == VEC_C1 || *ecx->p == VEC_C2 || *ecx->p == VEC_C8); l = ved_decode_len(vdc->vsl, &ecx->p); if (l < 0) return (-1); icrc = vbe32dec(ecx->p); ecx->p += 4; ecx->crc = crc32_combine( ecx->crc, icrc, l); ecx->l_crc += l; } ecx->state = 3; break; case VEC_S1: case VEC_S2: case VEC_S8: ecx->l = ved_decode_len(vdc->vsl, &ecx->p); if (ecx->l < 0) return (-1); Debug("SKIP1(%d)\n", (int)ecx->l); ecx->state = 4; break; case VEC_IA: ecx->abrt = FEATURE(FEATURE_ESI_INCLUDE_ONERROR); /* FALLTHROUGH */ case VEC_IC: ecx->p++; q = (void*)strchr((const char*)ecx->p, '\0'); AN(q); q++; r = (void*)strchr((const char*)q, '\0'); AN(r); if (VDP_bytes(vdc, VDP_FLUSH, NULL, 0)) { ecx->p = ecx->e; break; } Debug("INCL [%s][%s] BEGIN\n", q, ecx->p); ved_include(ecx->preq, (const char*)q, (const char*)ecx->p, ecx); Debug("INCL [%s][%s] END\n", q, ecx->p); ecx->p = r + 1; break; default: VSLb(vdc->vsl, SLT_Error, "ESI corruption line %d 0x%02x [%s]\n", __LINE__, *ecx->p, ecx->p); WRONG("ESI-codes: Illegal code"); } break; case 2: ptr = NULL; len = 0; if (ecx->isgzip && ecx->pecx == NULL) { /* * We are bytealigned here, so simply emit * a gzip literal block with finish bit set. */ tailbuf[0] = 0x01; tailbuf[1] = 0x00; tailbuf[2] = 0x00; tailbuf[3] = 0xff; tailbuf[4] = 0xff; /* Emit CRC32 */ vle32enc(tailbuf + 5, ecx->crc); /* MOD(2^32) length */ vle32enc(tailbuf + 9, ecx->l_crc); ptr = tailbuf; len = 13; } else if (ecx->pecx != NULL) { ecx->pecx->crc = crc32_combine(ecx->pecx->crc, ecx->crc, ecx->l_crc); ecx->pecx->l_crc += ecx->l_crc; } retval = VDP_bytes(vdc, VDP_END, ptr, len); ecx->state = 99; return (retval); case 3: case 4: /* * There is no guarantee that the 'l' bytes are all * in the same storage segment, so loop over storage * until we have processed them all. */ if (ecx->l <= len) { if (ecx->state == 3) retval = VDP_bytes(vdc, act, pp, ecx->l); len -= ecx->l; pp += ecx->l; ecx->state = 1; break; } if (ecx->state == 3 && len > 0) retval = VDP_bytes(vdc, act, pp, len); ecx->l -= len; return (retval); case 99: /* * VEP does not account for the PAD+CRC+LEN * so we can see up to approx 15 bytes here. */ return (retval); default: WRONG("FOO"); break; } if (retval) return (retval); } } const struct vdp VDP_esi = { .name = "esi", .init = ved_vdp_esi_init, .bytes = ved_vdp_esi_bytes, .fini = ved_vdp_esi_fini, }; /* * Account body bytes on req * Push bytes to preq */ static inline int ved_bytes(struct ecx *ecx, enum vdp_action act, const void *ptr, ssize_t len) { if (act == VDP_END) act = VDP_FLUSH; return (VDP_bytes(ecx->preq->vdc, act, ptr, len)); } /*--------------------------------------------------------------------- * If a gzip'ed ESI object includes a ungzip'ed object, we need to make * it looked like a gzip'ed data stream. The official way to do so would * be to fire up libvgz and gzip it, but we don't, we fake it. * * First, we cannot know if it is ungzip'ed on purpose, the admin may * know something we don't. * * What do you mean "BS ?" * * All right then... 
* * The matter of the fact is that we simply will not fire up a gzip in * the output path because it costs too much memory and CPU, so we simply * wrap the data in very convenient "gzip copy-blocks" and send it down * the stream with a bit more overhead. */ static int v_matchproto_(vdp_fini_f) ved_pretend_gzip_fini(struct vdp_ctx *vdc, void **priv) { (void)vdc; *priv = NULL; return (0); } static int v_matchproto_(vdp_bytes_f) ved_pretend_gzip_bytes(struct vdp_ctx *vdc, enum vdp_action act, void **priv, const void *pv, ssize_t l) { uint8_t buf1[5], buf2[5]; const uint8_t *p; uint16_t lx; struct ecx *ecx; CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); CAST_OBJ_NOTNULL(ecx, *priv, ECX_MAGIC); (void)priv; if (l == 0) return (ved_bytes(ecx, act, pv, l)); p = pv; AN (ecx->isgzip); ecx->crc = crc32(ecx->crc, p, l); ecx->l_crc += l; /* * buf1 can safely be emitted multiple times for objects longer * than 64K-1 bytes. */ lx = 65535; buf1[0] = 0; vle16enc(buf1 + 1, lx); vle16enc(buf1 + 3, ~lx); while (l > 0) { if (l >= 65535) { lx = 65535; if (ved_bytes(ecx, VDP_NULL, buf1, sizeof buf1)) return (-1); } else { lx = (uint16_t)l; buf2[0] = 0; vle16enc(buf2 + 1, lx); vle16enc(buf2 + 3, ~lx); if (ved_bytes(ecx, VDP_NULL, buf2, sizeof buf2)) return (-1); } if (ved_bytes(ecx, VDP_NULL, p, lx)) return (-1); l -= lx; p += lx; } /* buf1 & buf2 are local, so we have to flush */ return (ved_bytes(ecx, VDP_FLUSH, NULL, 0)); } static const struct vdp ved_pretend_gz = { .name = "PGZ", .bytes = ved_pretend_gzip_bytes, .fini = ved_pretend_gzip_fini, }; /*--------------------------------------------------------------------- * Include a gzip'ed object in a gzip'ed ESI object delivery * * This is the interesting case: Deliver all the deflate blocks, stripping * the "LAST" bit of the last one and padding it, as necessary, to a byte * boundary. * */ struct ved_foo { unsigned magic; #define VED_FOO_MAGIC 0x6a5a262d struct ecx *ecx; struct objcore *objcore; uint64_t start, last, stop, lpad; ssize_t ll; uint64_t olen; uint8_t dbits[8]; uint8_t tailbuf[8]; }; static int v_matchproto_(vdp_init_f) ved_gzgz_init(VRT_CTX, struct vdp_ctx *vdc, void **priv, struct objcore *oc) { ssize_t l; const char *p; struct ved_foo *foo; struct req *req; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); (void)oc; req = vdc->req; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CAST_OBJ_NOTNULL(foo, *priv, VED_FOO_MAGIC); CHECK_OBJ_NOTNULL(foo->objcore, OBJCORE_MAGIC); memset(foo->tailbuf, 0xdd, sizeof foo->tailbuf); AN(ObjCheckFlag(vdc->wrk, foo->objcore, OF_GZIPED)); p = ObjGetAttr(vdc->wrk, foo->objcore, OA_GZIPBITS, &l); AN(p); assert(l == 32); foo->start = vbe64dec(p); foo->last = vbe64dec(p + 8); foo->stop = vbe64dec(p + 16); foo->olen = ObjGetLen(vdc->wrk, foo->objcore); assert(foo->start > 0 && foo->start < foo->olen * 8); assert(foo->last > 0 && foo->last < foo->olen * 8); assert(foo->stop > 0 && foo->stop < foo->olen * 8); assert(foo->last >= foo->start); assert(foo->last < foo->stop); /* The start bit must be byte aligned. */ AZ(foo->start & 7); return (0); } /* * XXX: for act == VDP_END || act == VDP_FLUSH, we send a flush more often than * we need. The VDP_END case would trip our "at most one VDP_END call" assertion * in VDP_bytes(), but ved_bytes() covers it. * * To avoid unnecessary chunks downstream, it would be nice to re-structure the * code to identify the last block, send VDP_END/VDP_FLUSH for that one and * VDP_NULL for anything before it. 
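 *
 * A sketch of that restructuring (not implemented here, all names are
 * made up) would be to hold back one segment of look-ahead and only
 * release it when the next call arrives, so the truly last segment can be
 * sent with the final action:
 *
 *	if (have_pending)
 *		retval = ved_bytes(foo->ecx, VDP_NULL, pend_ptr, pend_len);
 *	pend_ptr = ptr; pend_len = len;	// emit with act on the final call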
*/ static int v_matchproto_(vdp_bytes_f) ved_gzgz_bytes(struct vdp_ctx *vdc, enum vdp_action act, void **priv, const void *ptr, ssize_t len) { struct ved_foo *foo; const uint8_t *pp; ssize_t dl; ssize_t l; (void)vdc; CAST_OBJ_NOTNULL(foo, *priv, VED_FOO_MAGIC); pp = ptr; if (len > 0) { /* Skip over the GZIP header */ dl = foo->start / 8 - foo->ll; if (dl > 0) { /* Before foo.start, skip */ if (dl > len) dl = len; foo->ll += dl; len -= dl; pp += dl; } } if (len > 0) { /* The main body of the object */ dl = foo->last / 8 - foo->ll; if (dl > 0) { dl = vmin(dl, len); if (ved_bytes(foo->ecx, act, pp, dl)) return (-1); foo->ll += dl; len -= dl; pp += dl; } } if (len > 0 && foo->ll == foo->last / 8) { /* Remove the "LAST" bit */ foo->dbits[0] = *pp; foo->dbits[0] &= ~(1U << (foo->last & 7)); if (ved_bytes(foo->ecx, act, foo->dbits, 1)) return (-1); foo->ll++; len--; pp++; } if (len > 0) { /* Last block */ dl = foo->stop / 8 - foo->ll; if (dl > 0) { dl = vmin(dl, len); if (ved_bytes(foo->ecx, act, pp, dl)) return (-1); foo->ll += dl; len -= dl; pp += dl; } } if (len > 0 && (foo->stop & 7) && foo->ll == foo->stop / 8) { /* Add alignment to byte boundary */ foo->dbits[1] = *pp; foo->ll++; len--; pp++; switch ((int)(foo->stop & 7)) { case 1: /* * x000.... * 00000000 00000000 11111111 11111111 */ case 3: /* * xxx000.. * 00000000 00000000 11111111 11111111 */ case 5: /* * xxxxx000 * 00000000 00000000 11111111 11111111 */ foo->dbits[2] = 0x00; foo->dbits[3] = 0x00; foo->dbits[4] = 0xff; foo->dbits[5] = 0xff; foo->lpad = 5; break; case 2: /* xx010000 00000100 00000001 00000000 */ foo->dbits[1] |= 0x08; foo->dbits[2] = 0x20; foo->dbits[3] = 0x80; foo->dbits[4] = 0x00; foo->lpad = 4; break; case 4: /* xxxx0100 00000001 00000000 */ foo->dbits[1] |= 0x20; foo->dbits[2] = 0x80; foo->dbits[3] = 0x00; foo->lpad = 3; break; case 6: /* xxxxxx01 00000000 */ foo->dbits[1] |= 0x80; foo->dbits[2] = 0x00; foo->lpad = 2; break; case 7: /* * xxxxxxx0 * 00...... * 00000000 00000000 11111111 11111111 */ foo->dbits[2] = 0x00; foo->dbits[3] = 0x00; foo->dbits[4] = 0x00; foo->dbits[5] = 0xff; foo->dbits[6] = 0xff; foo->lpad = 6; break; case 0: /* xxxxxxxx */ default: WRONG("compiler must be broken"); } if (ved_bytes(foo->ecx, act, foo->dbits + 1, foo->lpad)) return (-1); } if (len > 0) { /* Recover GZIP tail */ dl = foo->olen - foo->ll; assert(dl >= 0); if (dl > len) dl = len; if (dl > 0) { assert(dl <= 8); l = foo->ll - (foo->olen - 8); assert(l >= 0); assert(l <= 8); assert(l + dl <= 8); memcpy(foo->tailbuf + l, pp, dl); foo->ll += dl; len -= dl; } } assert(len == 0); return (0); } static int v_matchproto_(vdp_fini_f) ved_gzgz_fini(struct vdp_ctx *vdc, void **priv) { uint32_t icrc; uint32_t ilen; struct ved_foo *foo; (void)vdc; TAKE_OBJ_NOTNULL(foo, priv, VED_FOO_MAGIC); /* XXX * this works due to the esi layering, a VDP pushing bytes from _fini * will otherwise have it's own _bytes method called. * * Could rewrite use VDP_END */ (void)ved_bytes(foo->ecx, VDP_FLUSH, NULL, 0); icrc = vle32dec(foo->tailbuf); ilen = vle32dec(foo->tailbuf + 4); foo->ecx->crc = crc32_combine(foo->ecx->crc, icrc, ilen); foo->ecx->l_crc += ilen; return (0); } static const struct vdp ved_gzgz = { .name = "VZZ", .init = ved_gzgz_init, .bytes = ved_gzgz_bytes, .fini = ved_gzgz_fini, }; /*-------------------------------------------------------------------- * Straight through without processing. 
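 *
 * This trivial VDP is pushed on an ESI sub-request so that whatever the
 * sub-request delivers ends up in ved_bytes(), i.e. is funneled into the
 * parent request's VDP chain (ecx->preq->vdc) rather than onto a client
 * connection of its own.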
*/ static int v_matchproto_(vdp_fini_f) ved_vdp_fini(struct vdp_ctx *vdc, void **priv) { (void)vdc; *priv = NULL; return (0); } static int v_matchproto_(vdp_bytes_f) ved_vdp_bytes(struct vdp_ctx *vdc, enum vdp_action act, void **priv, const void *ptr, ssize_t len) { struct ecx *ecx; (void)vdc; CAST_OBJ_NOTNULL(ecx, *priv, ECX_MAGIC); return (ved_bytes(ecx, act, ptr, len)); } static const struct vdp ved_ved = { .name = "VED", .bytes = ved_vdp_bytes, .fini = ved_vdp_fini, }; static void ved_close(struct req *req, struct boc *boc, int error) { req->acct.resp_bodybytes += VDP_Close(req->vdc, req->objcore, boc); if (! error) return; req->top->topreq->vdc->retval = -1; req->top->topreq->doclose = req->doclose; } /*--------------------------------------------------------------------*/ static void v_matchproto_(vtr_deliver_f) ved_deliver(struct req *req, struct boc *boc, int wantbody) { int i = 0; const char *p; uint16_t status; struct ecx *ecx; struct ved_foo foo[1]; struct vrt_ctx ctx[1]; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_ORNULL(boc, BOC_MAGIC); CHECK_OBJ_NOTNULL(req->objcore, OBJCORE_MAGIC); CAST_OBJ_NOTNULL(ecx, req->transport_priv, ECX_MAGIC); status = req->resp->status % 1000; if (FEATURE(FEATURE_ESI_INCLUDE_ONERROR) && status != 200 && status != 204) { ved_close(req, boc, ecx->abrt); return; } if (wantbody == 0) { ved_close(req, boc, 0); return; } if (boc == NULL && ObjGetLen(req->wrk, req->objcore) == 0) { ved_close(req, boc, 0); return; } if (http_GetHdr(req->resp, H_Content_Encoding, &p)) i = http_coding_eq(p, gzip); if (i) i = ObjCheckFlag(req->wrk, req->objcore, OF_GZIPED); INIT_OBJ(ctx, VRT_CTX_MAGIC); VCL_Req2Ctx(ctx, req); if (ecx->isgzip && i && !(req->res_mode & RES_ESI)) { /* A gzip'ed include which is not ESI processed */ /* OA_GZIPBITS are not valid until BOS_FINISHED */ if (boc != NULL) ObjWaitState(req->objcore, BOS_FINISHED); if (req->objcore->flags & OC_F_FAILED) { /* No way of signalling errors in the middle of * the ESI body. Omit this ESI fragment. * XXX change error argument to 1 */ ved_close(req, boc, 0); return; } INIT_OBJ(foo, VED_FOO_MAGIC); foo->ecx = ecx; foo->objcore = req->objcore; i = VDP_Push(ctx, req->vdc, req->ws, &ved_gzgz, foo); } else if (ecx->isgzip && !i) { /* Non-Gzip'ed include in gzip'ed parent */ i = VDP_Push(ctx, req->vdc, req->ws, &ved_pretend_gz, ecx); } else { /* Anything else goes straight through */ i = VDP_Push(ctx, req->vdc, req->ws, &ved_ved, ecx); } if (i == 0) { i = VDP_DeliverObj(req->vdc, req->objcore); } else { VSLb(req->vsl, SLT_Error, "Failure to push ESI processors"); req->doclose = SC_OVERLOAD; } if (i && req->doclose == SC_NULL) req->doclose = SC_REM_CLOSE; ved_close(req, boc, i && ecx->abrt ? 1 : 0); } varnish-7.5.0/bin/varnishd/cache/cache_esi_fetch.c000066400000000000000000000175721457605730600220770ustar00rootroot00000000000000/*- * Copyright (c) 2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * VEF Varnish Esi Fetching */ #include "config.h" #include #include "cache_varnishd.h" #include "cache_filter.h" #include "cache_vgz.h" #include "cache_esi.h" #include "vrnd.h" /*--------------------------------------------------------------------- */ struct vef_priv { unsigned magic; #define VEF_MAGIC 0xf104b51f int error; ssize_t tot; struct vgz *vgz; struct vep_state *vep; char *ibuf; char *ibuf_i; char *ibuf_o; ssize_t ibuf_sz; }; static ssize_t vfp_vep_callback(struct vfp_ctx *vc, void *priv, ssize_t l, enum vgz_flag flg) { struct vef_priv *vef; ssize_t dl; const void *dp; uint8_t *ptr; int i; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CAST_OBJ_NOTNULL(vef, priv, VEF_MAGIC); assert(l >= 0); if (vef->error) { vef->tot += l; return (vef->tot); } /* * l == 0 is valid when 'flg' calls for action, but in the * normal case we can just ignore a l==0 request. * (It would cause Z_BUF_ERROR anyway) */ if (l == 0 && flg == VGZ_NORMAL) return (vef->tot); VGZ_Ibuf(vef->vgz, vef->ibuf_o, l); do { dl = 0; if (VFP_GetStorage(vc, &dl, &ptr) != VFP_OK) { vef->error = ENOMEM; vef->tot += l; return (vef->tot); } VGZ_Obuf(vef->vgz, ptr, dl); i = VGZ_Gzip(vef->vgz, &dp, &dl, flg); VGZ_UpdateObj(vc, vef->vgz, VUA_UPDATE); if (dl > 0) { vef->tot += dl; VFP_Extend(vc, dl, VFP_OK); } } while (i != VGZ_ERROR && (!VGZ_IbufEmpty(vef->vgz) || VGZ_ObufFull(vef->vgz))); assert(i == VGZ_ERROR || VGZ_IbufEmpty(vef->vgz)); vef->ibuf_o += l; return (vef->tot); } static enum vfp_status vfp_esi_end(struct vfp_ctx *vc, struct vef_priv *vef, enum vfp_status retval) { struct vsb *vsb; ssize_t l; void *p; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vef, VEF_MAGIC); if (retval == VFP_ERROR) { if (vef->error == 0) vef->error = errno ? 
errno : EINVAL; } else { assert(retval == VFP_END); } vsb = VEP_Finish(vef->vep); if (vsb != NULL) { if (retval == VFP_END) { l = VSB_len(vsb); assert(l > 0); p = ObjSetAttr(vc->wrk, vc->oc, OA_ESIDATA, l, VSB_data(vsb)); if (p == NULL) { retval = VFP_Error(vc, "Could not allocate storage for esidata"); } } VSB_destroy(&vsb); } if (vef->vgz != NULL) { if (retval == VFP_END) VGZ_UpdateObj(vc, vef->vgz, VUA_END_GZIP); if (VGZ_Destroy(&vef->vgz) != VGZ_END) retval = VFP_Error(vc, "ESI+Gzip Failed at the very end"); } if (vef->ibuf != NULL) free(vef->ibuf); FREE_OBJ(vef); return (retval); } static enum vfp_status v_matchproto_(vfp_init_f) vfp_esi_gzip_init(VRT_CTX, struct vfp_ctx *vc, struct vfp_entry *vfe) { struct vef_priv *vef; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vc->req, HTTP_MAGIC); CHECK_OBJ_NOTNULL(vfe, VFP_ENTRY_MAGIC); if (http_GetStatus(vc->resp) == 206) { VSLb(vc->wrk->vsl, SLT_VCL_Error, "Attempted ESI on partial (206) response"); return (VFP_ERROR); } ALLOC_OBJ(vef, VEF_MAGIC); if (vef == NULL) return (VFP_ERROR); vc->obj_flags |= OF_GZIPED | OF_CHGCE | OF_ESIPROC; vef->vep = VEP_Init(vc, vc->req, vfp_vep_callback, vef); if (vef->vep == NULL) { FREE_OBJ(vef); return (VFP_ERROR); } vef->vgz = VGZ_NewGzip(vc->wrk->vsl, "G F E"); AN(vef->vgz); vef->ibuf_sz = cache_param->gzip_buffer; vef->ibuf = calloc(1L, vef->ibuf_sz); if (vef->ibuf == NULL) return (vfp_esi_end(vc, vef, VFP_ERROR)); vef->ibuf_i = vef->ibuf; vef->ibuf_o = vef->ibuf; vfe->priv1 = vef; RFC2616_Weaken_Etag(vc->resp); http_Unset(vc->resp, H_Content_Length); http_Unset(vc->resp, H_Content_Encoding); http_SetHeader(vc->resp, "Content-Encoding: gzip"); RFC2616_Vary_AE(vc->resp); return (VFP_OK); } static enum vfp_status v_matchproto_(vfp_pull_f) vfp_esi_gzip_pull(struct vfp_ctx *vc, struct vfp_entry *vfe, void *p, ssize_t *lp) { enum vfp_status vp; ssize_t d, l; struct vef_priv *vef; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vfe, VFP_ENTRY_MAGIC); CAST_OBJ_NOTNULL(vef, vfe->priv1, VEF_MAGIC); AN(p); AN(lp); *lp = 0; l = vef->ibuf_sz - (vef->ibuf_i - vef->ibuf); if (DO_DEBUG(DBG_ESI_CHOP)) { d = (VRND_RandomTestable() & 3) + 1; if (d < l) l = d; } vp = VFP_Suck(vc, vef->ibuf_i, &l); if (l > 0) { VEP_Parse(vef->vep, vef->ibuf_i, l); vef->ibuf_i += l; assert(vef->ibuf_o >= vef->ibuf && vef->ibuf_o <= vef->ibuf_i); if (vef->error) { errno = vef->error; return (VFP_ERROR); } l = vef->ibuf_i - vef->ibuf_o; if (l > 0) memmove(vef->ibuf, vef->ibuf_o, l); vef->ibuf_o = vef->ibuf; vef->ibuf_i = vef->ibuf + l; } if (vp == VFP_END) { vp = vfp_esi_end(vc, vef, vp); vfe->priv1 = NULL; } return (vp); } static enum vfp_status v_matchproto_(vfp_init_f) vfp_esi_init(VRT_CTX, struct vfp_ctx *vc, struct vfp_entry *vfe) { struct vef_priv *vef; struct vep_state *vep; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vc->req, HTTP_MAGIC); if (http_GetStatus(vc->resp) == 206) { VSLb(vc->wrk->vsl, SLT_VCL_Error, "Attempted ESI on partial (206) response"); return (VFP_ERROR); } vep = VEP_Init(vc, vc->req, NULL, NULL); if (vep == NULL) return (VFP_ERROR); ALLOC_OBJ(vef, VEF_MAGIC); if (vef == NULL) return (VFP_ERROR); vc->obj_flags |= OF_ESIPROC; vef->vep = vep; vfe->priv1 = vef; return (VFP_OK); } static enum vfp_status v_matchproto_(vfp_pull_f) vfp_esi_pull(struct vfp_ctx *vc, struct vfp_entry *vfe, void *p, ssize_t *lp) { enum vfp_status vp; struct vef_priv *vef; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vfe, 
VFP_ENTRY_MAGIC); CAST_OBJ_NOTNULL(vef, vfe->priv1, VEF_MAGIC); AN(p); AN(lp); if (DO_DEBUG(DBG_ESI_CHOP)) { *lp = vmin_t(size_t, *lp, (VRND_RandomTestable() & 3) + 1); } vp = VFP_Suck(vc, p, lp); if (vp != VFP_ERROR && *lp > 0) VEP_Parse(vef->vep, p, *lp); if (vp == VFP_END) { vp = vfp_esi_end(vc, vef, vp); vfe->priv1 = NULL; } return (vp); } static void v_matchproto_(vfp_fini_f) vfp_esi_fini(struct vfp_ctx *vc, struct vfp_entry *vfe) { CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vfe, VFP_ENTRY_MAGIC); if (vfe->priv1 == NULL) return; if (vc->oc->stobj->stevedore == NULL) errno = ENOMEM; (void)vfp_esi_end(vc, vfe->priv1, VFP_ERROR); vfe->priv1 = NULL; } const struct vfp VFP_esi = { .name = "esi", .init = vfp_esi_init, .pull = vfp_esi_pull, .fini = vfp_esi_fini, }; const struct vfp VFP_esi_gzip = { .name = "esi_gzip", .init = vfp_esi_gzip_init, .pull = vfp_esi_gzip_pull, .fini = vfp_esi_fini, }; varnish-7.5.0/bin/varnishd/cache/cache_esi_parse.c000066400000000000000000000770601457605730600221160ustar00rootroot00000000000000/*- * Copyright (c) 2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * VEP Varnish Esi Parsing */ #include "config.h" #include "cache_varnishd.h" #include "cache_filter.h" #include "cache_vgz.h" #include "cache_esi.h" #include "vct.h" #include "vend.h" #include "vgz.h" //#define Debug(fmt, ...) printf(fmt, __VA_ARGS__) #define Debug(fmt, ...) 
/**/ struct vep_state; enum dowhat {DO_ATTR, DO_TAG}; typedef void dostuff_f(struct vep_state *, enum dowhat); struct vep_match { const char *match; const char * const *state; }; enum vep_mark { VERBATIM = 0, SKIP }; struct vep_state { unsigned magic; #define VEP_MAGIC 0x55cb9b82 struct vsb *vsb; const char *url; struct vfp_ctx *vc; int dogzip; vep_callback_t *cb; void *cb_priv; /* Internal Counter for default call-back function */ ssize_t cb_x; /* parser state */ const char *state; unsigned startup; unsigned esi_found; unsigned endtag; unsigned emptytag; unsigned canattr; unsigned remove; ssize_t o_wait; ssize_t o_pending; ssize_t o_total; uint32_t crc; ssize_t o_crc; uint32_t crcp; ssize_t o_last; const char *hack_p; const char *ver_p; const char *until; const char *until_p; const char *until_s; int in_esi_tag; const char *esicmt; const char *esicmt_p; struct vep_match *attr; struct vsb *attr_vsb; int attr_delim; struct vep_match *match; struct vep_match *match_hit; char tag[8]; int tag_i; dostuff_f *dostuff; struct vsb *include_src; unsigned include_continue; unsigned nm_skip; unsigned nm_verbatim; unsigned nm_pending; enum vep_mark last_mark; }; /*---------------------------------------------------------------------*/ static const char * const VEP_START = "[Start]"; static const char * const VEP_BOM = "[BOM]"; static const char * const VEP_TESTXML = "[TestXml]"; static const char * const VEP_NOTXML = "[NotXml]"; static const char * const VEP_NEXTTAG = "[NxtTag]"; static const char * const VEP_NOTMYTAG = "[NotMyTag]"; static const char * const VEP_STARTTAG = "[StartTag]"; static const char * const VEP_COMMENTESI = "[CommentESI]"; static const char * const VEP_COMMENT = "[Comment]"; static const char * const VEP_CDATA = "[CDATA]"; static const char * const VEP_ESITAG = "[ESITag]"; static const char * const VEP_ESIENDTAG = "[/ESITag]"; static const char * const VEP_ESIREMOVE = "[ESI:Remove]"; static const char * const VEP_ESIINCLUDE = "[ESI:Include]"; static const char * const VEP_ESICOMMENT = "[ESI:Comment]"; static const char * const VEP_ESIBOGON = "[ESI:Bogon]"; static const char * const VEP_INTAG = "[InTag]"; static const char * const VEP_TAGERROR = "[TagError]"; static const char * const VEP_ATTR = "[Attribute]"; static const char * const VEP_SKIPATTR = "[SkipAttribute]"; static const char * const VEP_ATTRDELIM = "[AttrDelim]"; static const char * const VEP_ATTRGETVAL = "[AttrGetValue]"; static const char * const VEP_ATTRVAL = "[AttrValue]"; static const char * const VEP_UNTIL = "[Until]"; static const char * const VEP_MATCHBUF = "[MatchBuf]"; static const char * const VEP_MATCH = "[Match]"; /*---------------------------------------------------------------------*/ static struct vep_match vep_match_starttag[] = { { "!--esi", &VEP_COMMENTESI }, { "!---->", &VEP_NEXTTAG }, { "!--", &VEP_COMMENT }, { "/esi:", &VEP_ESIENDTAG }, { "esi:", &VEP_ESITAG }, { "![CDATA[", &VEP_CDATA }, { NULL, &VEP_NOTMYTAG } }; /*---------------------------------------------------------------------*/ static struct vep_match vep_match_esi[] = { { "include", &VEP_ESIINCLUDE }, { "remove", &VEP_ESIREMOVE }, { "comment", &VEP_ESICOMMENT }, { NULL, &VEP_ESIBOGON } }; /*---------------------------------------------------------------------*/ static struct vep_match vep_match_attr_include[] = { { "src=", &VEP_ATTRGETVAL }, { "onerror=", &VEP_ATTRGETVAL }, { NULL, &VEP_SKIPATTR } }; /*---------------------------------------------------------------------*/ static struct vep_match vep_match_bom[] = { { "\xeb\xbb\xbf", 
&VEP_START }, { NULL, &VEP_BOM } }; /*-------------------------------------------------------------------- * Report a parsing error */ static void vep_error(const struct vep_state *vep, const char *p) { VSC_C_main->esi_errors++; VSLb(vep->vc->wrk->vsl, SLT_ESI_xmlerror, "ERR after %zd %s", vep->o_last, p); } /*-------------------------------------------------------------------- * Report a parsing warning */ static void vep_warn(const struct vep_state *vep, const char *p) { VSC_C_main->esi_warnings++; VSLb(vep->vc->wrk->vsl, SLT_ESI_xmlerror, "WARN after %zd %s", vep->o_last, p); } /*--------------------------------------------------------------------- * return match or NULL if more input needed. */ static struct vep_match * vep_match(const struct vep_state *vep, const char *b, const char *e) { struct vep_match *vm; const char *q, *r; AN(vep->match); for (vm = vep->match; vm->match != NULL; vm++) { assert(strlen(vm->match) <= sizeof (vep->tag)); r = b; for (q = vm->match; *q != '\0' && r < e; q++, r++) if (*q != *r) break; if (*q == '\0') break; if (r == e) return (NULL); } return (vm); } /*--------------------------------------------------------------------- * */ static void vep_emit_len(const struct vep_state *vep, ssize_t l, int m8, int m16, int m64) { uint8_t buf[9]; assert(l > 0); if (l < 256) { buf[0] = (uint8_t)m8; buf[1] = (uint8_t)l; assert((ssize_t)buf[1] == l); VSB_bcat(vep->vsb, buf, 2); } else if (l < 65536) { buf[0] = (uint8_t)m16; vbe16enc(buf + 1, (uint16_t)l); assert((ssize_t)vbe16dec(buf + 1) == l); VSB_bcat(vep->vsb, buf, 3); } else { buf[0] = (uint8_t)m64; vbe64enc(buf + 1, l); assert((ssize_t)vbe64dec(buf + 1) == l); VSB_bcat(vep->vsb, buf, 9); } } static void vep_emit_skip(const struct vep_state *vep, ssize_t l) { vep_emit_len(vep, l, VEC_S1, VEC_S2, VEC_S8); } static void vep_emit_verbatim(const struct vep_state *vep, ssize_t l, ssize_t l_crc) { uint8_t buf[4]; vep_emit_len(vep, l, VEC_V1, VEC_V2, VEC_V8); if (vep->dogzip) { vep_emit_len(vep, l_crc, VEC_C1, VEC_C2, VEC_C8); vbe32enc(buf, vep->crc); VSB_bcat(vep->vsb, buf, sizeof buf); } } static void vep_emit_common(struct vep_state *vep, ssize_t l, enum vep_mark mark) { assert(l >= 0); if (l == 0) return; assert(mark == SKIP || mark == VERBATIM); if (mark == SKIP) vep_emit_skip(vep, l); else vep_emit_verbatim(vep, l, vep->o_crc); vep->crc = crc32(0L, Z_NULL, 0); vep->o_crc = 0; vep->o_total += l; } /*--------------------------------------------------------------------- * */ static void vep_mark_common(struct vep_state *vep, const char *p, enum vep_mark mark) { ssize_t l, lcb; assert(mark == SKIP || mark == VERBATIM); /* The NO-OP case, no data, no pending data & no change of mode */ if (vep->last_mark == mark && p == vep->ver_p && vep->o_pending == 0) return; /* * If we changed mode, emit whatever the opposite mode * assembled before the pending bytes. */ if (vep->last_mark != mark && (vep->o_wait > 0 || vep->startup)) { lcb = vep->cb(vep->vc, vep->cb_priv, 0, mark == VERBATIM ? 
VGZ_RESET : VGZ_ALIGN); vep_emit_common(vep, lcb - vep->o_last, vep->last_mark); vep->o_last = lcb; vep->o_wait = 0; } /* Transfer pending bytes CRC into active mode CRC */ if (vep->o_pending) { (void)vep->cb(vep->vc, vep->cb_priv, vep->o_pending, VGZ_NORMAL); if (vep->o_crc == 0) { vep->crc = vep->crcp; vep->o_crc = vep->o_pending; } else { vep->crc = crc32_combine(vep->crc, vep->crcp, vep->o_pending); vep->o_crc += vep->o_pending; } vep->crcp = crc32(0L, Z_NULL, 0); vep->o_wait += vep->o_pending; vep->o_pending = 0; } /* * Process this bit of input */ AN(vep->ver_p); l = p - vep->ver_p; assert(l >= 0); vep->crc = crc32(vep->crc, (const void*)vep->ver_p, l); vep->o_crc += l; vep->ver_p = p; vep->o_wait += l; vep->last_mark = mark; (void)vep->cb(vep->vc, vep->cb_priv, l, VGZ_NORMAL); } static void vep_mark_verbatim(struct vep_state *vep, const char *p) { vep_mark_common(vep, p, VERBATIM); vep->nm_verbatim++; } static void vep_mark_skip(struct vep_state *vep, const char *p) { vep_mark_common(vep, p, SKIP); vep->nm_skip++; } static void vep_mark_pending(struct vep_state *vep, const char *p) { ssize_t l; AN(vep->ver_p); l = p - vep->ver_p; assert(l > 0); vep->crcp = crc32(vep->crcp, (const void *)vep->ver_p, l); vep->ver_p = p; vep->o_pending += l; vep->nm_pending++; } /*--------------------------------------------------------------------- */ static void v_matchproto_() vep_do_comment(struct vep_state *vep, enum dowhat what) { Debug("DO_COMMENT(%d)\n", what); assert(what == DO_TAG); if (!vep->emptytag) vep_error(vep, "ESI 1.0 needs final '/'"); } /*--------------------------------------------------------------------- */ static void v_matchproto_() vep_do_remove(struct vep_state *vep, enum dowhat what) { Debug("DO_REMOVE(%d, end %d empty %d remove %d)\n", what, vep->endtag, vep->emptytag, vep->remove); assert(what == DO_TAG); if (vep->emptytag) vep_error(vep, "ESI 1.0 not legal"); else if (vep->remove && !vep->endtag) vep_error(vep, "ESI 1.0 already open"); else if (!vep->remove && vep->endtag) vep_error(vep, "ESI 1.0 not open"); else vep->remove = !vep->endtag; } /*--------------------------------------------------------------------- */ static void include_attr_src(struct vep_state *vep) { const char *p; if (vep->include_src != NULL) { vep_error(vep, "ESI 1.0 " "has multiple src= attributes"); vep->state = VEP_TAGERROR; VSB_destroy(&vep->attr_vsb); VSB_destroy(&vep->include_src); return; } for (p = VSB_data(vep->attr_vsb); *p != '\0'; p++) if (vct_islws(*p)) break; if (*p != '\0') { vep_error(vep, "ESI 1.0 " "has whitespace in src= attribute"); vep->state = VEP_TAGERROR; VSB_destroy(&vep->attr_vsb); if (vep->include_src != NULL) VSB_destroy(&vep->include_src); return; } vep->include_src = vep->attr_vsb; vep->attr_vsb = NULL; } static void include_attr_onerror(struct vep_state *vep) { vep->include_continue = !strcmp("continue", VSB_data(vep->attr_vsb)); VSB_destroy(&vep->attr_vsb); } static void v_matchproto_() vep_do_include(struct vep_state *vep, enum dowhat what) { const char *p, *q, *h; ssize_t l; char incl; Debug("DO_INCLUDE(%d)\n", what); if (what == DO_ATTR) { Debug("ATTR (%s) (%s)\n", vep->match_hit->match, VSB_data(vep->attr_vsb)); if (!strcmp("src=", vep->match_hit->match)) { include_attr_src(vep); return; } if (!strcmp("onerror=", vep->match_hit->match)) { include_attr_onerror(vep); return; } WRONG("Unhandled attribute"); } assert(what == DO_TAG); if (!vep->emptytag) vep_warn(vep, "ESI 1.0 lacks final '/'"); if (vep->include_src == NULL) { vep_error(vep, "ESI 1.0 lacks src attr"); 
return; } /* * Strictly speaking, we ought to spit out any piled up skip before * emitting the VEC for the include, but objectively that makes no * difference and robs us of a chance to collapse another skip into * this on so we don't do that. * However, we cannot tolerate any verbatim stuff piling up. * The mark_skip() before calling dostuff should have taken * care of that. Make sure. */ assert(vep->o_wait == 0 || vep->last_mark == SKIP); /* XXX: what if it contains NUL bytes ?? */ p = VSB_data(vep->include_src); l = VSB_len(vep->include_src); h = 0; incl = vep->include_continue ? VEC_IC : VEC_IA; if (l > 7 && !memcmp(p, "http://", 7)) { h = p + 7; p = strchr(h, '/'); if (p == NULL) { vep_error(vep, "ESI 1.0 invalid src= URL"); vep->state = VEP_TAGERROR; AZ(vep->attr_vsb); VSB_destroy(&vep->include_src); return; } Debug("HOST <%.*s> PATH <%s>\n", (int)(p-h),h, p); VSB_printf(vep->vsb, "%c", incl); VSB_printf(vep->vsb, "Host: %.*s%c", (int)(p-h), h, 0); } else if (l > 8 && !memcmp(p, "https://", 8)) { if (!FEATURE(FEATURE_ESI_IGNORE_HTTPS)) { vep_warn(vep, "ESI 1.0 with https:// ignored"); vep->state = VEP_TAGERROR; AZ(vep->attr_vsb); VSB_destroy(&vep->include_src); return; } vep_warn(vep, "ESI 1.0 https:// treated as http://"); h = p + 8; p = strchr(h, '/'); if (p == NULL) { vep_error(vep, "ESI 1.0 invalid src= URL"); vep->state = VEP_TAGERROR; AZ(vep->attr_vsb); VSB_destroy(&vep->include_src); return; } VSB_printf(vep->vsb, "%c", incl); VSB_printf(vep->vsb, "Host: %.*s%c", (int)(p-h), h, 0); } else if (*p == '/') { VSB_printf(vep->vsb, "%c", incl); VSB_printf(vep->vsb, "%c", 0); } else { VSB_printf(vep->vsb, "%c", incl); VSB_printf(vep->vsb, "%c", 0); /* Look for the last / before a '?' */ h = NULL; for (q = vep->url; *q && *q != '?'; q++) if (*q == '/') h = q; if (h == NULL) h = q + 1; Debug("INCL:: [%.*s]/[%s]\n", (int)(h - vep->url), vep->url, p); VSB_printf(vep->vsb, "%.*s/", (int)(h - vep->url), vep->url); } l -= (p - VSB_data(vep->include_src)); for (q = p; *q != '\0'; ) { if (*q == '&') { #define R(w,f,r) \ if (q + w <= p + l && !memcmp(q, f, w)) { \ VSB_printf(vep->vsb, "%c", r); \ q += w; \ continue; \ } R(6, "'", '\''); R(6, """, '"'); R(4, "<", '<'); R(4, ">", '>'); R(5, "&", '&'); } VSB_printf(vep->vsb, "%c", *q++); } #undef R VSB_printf(vep->vsb, "%c", 0); VSB_destroy(&vep->include_src); vep->include_continue = 0; } /*--------------------------------------------------------------------- * Lex/Parse object for ESI instructions * * This function is called with the input object piecemal so do not * assume that we have more than one char available at at time, but * optimize for getting huge chunks. * * NB: At the bottom of this source-file, there is a dot-diagram matching * NB: the state-machine. Please maintain it along with the code. */ void VEP_Parse(struct vep_state *vep, const char *p, size_t l) { const char *e; struct vep_match *vm; int i; CHECK_OBJ_NOTNULL(vep, VEP_MAGIC); assert(l > 0); if (vep->startup) { /* * We must force the GZIP header out as a SKIP string, * otherwise an object starting with ver_p = ""; vep->last_mark = SKIP; vep_mark_common(vep, vep->ver_p, VERBATIM); vep->startup = 0; AZ(vep->hack_p); vep->hack_p = p; } vep->ver_p = p; e = p + l; while (p < e) { AN(vep->state); Debug("EP %s %d (%.*s) [%.*s]\n", vep->state, vep->remove, vep->tag_i, vep->tag, (e - p) > 10 ? 
10 : (int)(e-p), p); assert(p >= vep->ver_p); /****************************************************** * SECTION A */ if (vep->state == VEP_START) { if (FEATURE(FEATURE_ESI_REMOVE_BOM) && *p == (char)0xeb) { vep->match = vep_match_bom; vep->state = VEP_MATCH; } else vep->state = VEP_BOM; } else if (vep->state == VEP_BOM) { vep_mark_skip(vep, p); if (FEATURE(FEATURE_ESI_DISABLE_XML_CHECK)) vep->state = VEP_NEXTTAG; else vep->state = VEP_TESTXML; } else if (vep->state == VEP_TESTXML) { /* * If the first non-whitespace char is different * from '<' we assume this is not XML. */ while (p < e && vct_islws(*p)) p++; vep_mark_verbatim(vep, p); if (p < e && *p == '<') { p++; vep->state = VEP_STARTTAG; } else if (p < e && *p == (char)0xeb) { VSLb(vep->vc->wrk->vsl, SLT_ESI_xmlerror, "No ESI processing, " "first char not '<' but BOM." " (See feature esi_remove_bom)" ); vep->state = VEP_NOTXML; } else if (p < e) { VSLb(vep->vc->wrk->vsl, SLT_ESI_xmlerror, "No ESI processing, " "first char not '<'." " (See feature esi_disable_xml_check)" ); vep->state = VEP_NOTXML; } } else if (vep->state == VEP_NOTXML) { /* * This is not recognized as XML, just skip thru * vfp_esi_end() will handle the rest */ p = e; vep_mark_verbatim(vep, p); /****************************************************** * SECTION B */ } else if (vep->state == VEP_NOTMYTAG) { if (FEATURE(FEATURE_ESI_IGNORE_OTHER_ELEMENTS)) { p++; vep->state = VEP_NEXTTAG; } else { vep->tag_i = 0; while (p < e) { if (*p++ == '>') { vep->state = VEP_NEXTTAG; break; } } } if (p == e && !vep->remove) vep_mark_verbatim(vep, p); } else if (vep->state == VEP_NEXTTAG) { /* * Hunt for start of next tag and keep an eye * out for end of EsiCmt if armed. */ vep->emptytag = 0; vep->attr = NULL; vep->dostuff = NULL; while (p < e && *p != '<') { if (vep->esicmt_p == NULL) { p++; continue; } if (*p != *vep->esicmt_p) { p++; vep->esicmt_p = vep->esicmt; continue; } if (!vep->remove && vep->esicmt_p == vep->esicmt) vep_mark_verbatim(vep, p); p++; if (*++vep->esicmt_p == '\0') { vep->esi_found = 1; vep->esicmt = NULL; vep->esicmt_p = NULL; /* * The end of the esicmt * should not be emitted. 
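 * (The esicmt is the "-->" that closes a "<!--esi ...-->" block:
 * both markers are dropped from the output, while the text between
 * them is kept and scanned for further ESI markup.)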
* But the stuff before should */ vep_mark_skip(vep, p); } } if (p < e) { if (!vep->remove) vep_mark_verbatim(vep, p); assert(*p == '<'); p++; vep->state = VEP_STARTTAG; } else if (vep->esicmt_p == vep->esicmt && !vep->remove) vep_mark_verbatim(vep, p); /****************************************************** * SECTION C */ } else if (vep->state == VEP_STARTTAG) { /* Start of tag, set up match table */ vep->endtag = 0; vep->match = vep_match_starttag; vep->state = VEP_MATCH; } else if (vep->state == VEP_COMMENT) { vep->esicmt_p = vep->esicmt = NULL; vep->until_p = vep->until = "-->"; vep->until_s = VEP_NEXTTAG; vep->state = VEP_UNTIL; } else if (vep->state == VEP_COMMENTESI) { if (vep->remove) vep_error(vep, "ESI 1.0 Nested "; vep->state = VEP_NEXTTAG; vep_mark_skip(vep, p); } else if (vep->state == VEP_CDATA) { /* * Easy: just look for the end of CDATA */ vep->until_p = vep->until = "]]>"; vep->until_s = VEP_NEXTTAG; vep->state = VEP_UNTIL; } else if (vep->state == VEP_ESIENDTAG) { vep->endtag = 1; vep->state = VEP_ESITAG; } else if (vep->state == VEP_ESITAG) { vep->in_esi_tag = 1; vep->esi_found = 1; vep_mark_skip(vep, p); vep->match = vep_match_esi; vep->state = VEP_MATCH; } else if (vep->state == VEP_ESIINCLUDE) { if (vep->remove) { vep_error(vep, "ESI 1.0 element" " nested in "); vep->state = VEP_TAGERROR; } else if (vep->endtag) { vep_error(vep, "ESI 1.0 illegal end-tag"); vep->state = VEP_TAGERROR; } else { vep->dostuff = vep_do_include; vep->state = VEP_INTAG; vep->attr = vep_match_attr_include; } } else if (vep->state == VEP_ESIREMOVE) { vep->dostuff = vep_do_remove; vep->state = VEP_INTAG; } else if (vep->state == VEP_ESICOMMENT) { if (vep->remove) { vep_error(vep, "ESI 1.0 element" " nested in "); vep->state = VEP_TAGERROR; } else if (vep->endtag) { vep_error(vep, "ESI 1.0 illegal end-tag"); vep->state = VEP_TAGERROR; } else { vep->dostuff = vep_do_comment; vep->state = VEP_INTAG; } } else if (vep->state == VEP_ESIBOGON) { vep_error(vep, "ESI 1.0 element"); vep->state = VEP_TAGERROR; /****************************************************** * SECTION D */ } else if (vep->state == VEP_INTAG) { vep->tag_i = 0; while (p < e && vct_islws(*p) && !vep->emptytag) { p++; vep->canattr = 1; } if (p < e && *p == '/' && !vep->emptytag) { p++; vep->emptytag = 1; vep->canattr = 0; } if (p < e && *p == '>') { p++; AN(vep->dostuff); vep_mark_skip(vep, p); vep->dostuff(vep, DO_TAG); vep->in_esi_tag = 0; vep->state = VEP_NEXTTAG; } else if (p < e && vep->emptytag) { vep_error(vep, "XML 1.0 '>' does not follow '/' in tag"); vep->state = VEP_TAGERROR; } else if (p < e && vep->canattr && vct_isxmlnamestart(*p)) { vep->state = VEP_ATTR; } else if (p < e) { vep_error(vep, "XML 1.0 Illegal attribute start char"); vep->state = VEP_TAGERROR; } } else if (vep->state == VEP_TAGERROR) { while (p < e && *p != '>') p++; if (p < e) { p++; vep_mark_skip(vep, p); vep->in_esi_tag = 0; vep->state = VEP_NEXTTAG; if (vep->attr_vsb) VSB_destroy(&vep->attr_vsb); } /****************************************************** * SECTION E */ } else if (vep->state == VEP_ATTR) { AZ(vep->attr_delim); if (vep->attr == NULL) { p++; AZ(vep->attr_vsb); vep->state = VEP_SKIPATTR; } else { vep->match = vep->attr; vep->state = VEP_MATCH; } } else if (vep->state == VEP_SKIPATTR) { while (p < e && vct_isxmlname(*p)) p++; if (p < e && *p == '=') { p++; vep->state = VEP_ATTRDELIM; } else if (p < e && *p == '>') { vep->state = VEP_INTAG; } else if (p < e && *p == '/') { vep->state = VEP_INTAG; } else if (p < e && vct_issp(*p)) { vep->state = 
VEP_INTAG; } else if (p < e) { vep_error(vep, "XML 1.0 Illegal attr char"); vep->state = VEP_TAGERROR; } } else if (vep->state == VEP_ATTRGETVAL) { AZ(vep->attr_vsb); vep->attr_vsb = VSB_new_auto(); vep->state = VEP_ATTRDELIM; } else if (vep->state == VEP_ATTRDELIM) { AZ(vep->attr_delim); if (*p == '"' || *p == '\'') { vep->attr_delim = *p++; vep->state = VEP_ATTRVAL; } else if (!vct_issp(*p)) { vep->attr_delim = ' '; vep->state = VEP_ATTRVAL; } else { vep_error(vep, "XML 1.0 Illegal attribute delimiter"); vep->state = VEP_TAGERROR; } } else if (vep->state == VEP_ATTRVAL) { while (p < e && *p != '>' && *p != vep->attr_delim && (vep->attr_delim != ' ' || !vct_issp(*p))) { if (vep->attr_vsb != NULL) VSB_putc(vep->attr_vsb, *p); p++; } if (p < e && *p == '>') { vep_error(vep, "XML 1.0 Missing end attribute delimiter"); vep->state = VEP_TAGERROR; vep->attr_delim = 0; if (vep->attr_vsb != NULL) { AZ(VSB_finish(vep->attr_vsb)); VSB_destroy(&vep->attr_vsb); } } else if (p < e) { vep->attr_delim = 0; p++; vep->state = VEP_INTAG; if (vep->attr_vsb != NULL) { AZ(VSB_finish(vep->attr_vsb)); AN(vep->dostuff); vep->dostuff(vep, DO_ATTR); vep->attr_vsb = NULL; } } /****************************************************** * Utility Section */ } else if (vep->state == VEP_MATCH) { /* * Match against a table */ vm = vep_match(vep, p, e); vep->match_hit = vm; if (vm != NULL) { if (vm->match != NULL) p += strlen(vm->match); vep->state = *vm->state; vep->match = NULL; vep->tag_i = 0; } else { assert(p + sizeof(vep->tag) >= e); memcpy(vep->tag, p, e - p); vep->tag_i = e - p; vep->state = VEP_MATCHBUF; p = e; } } else if (vep->state == VEP_MATCHBUF) { /* * Match against a table while split over input * sections. */ AN(vep->match); i = sizeof(vep->tag) - vep->tag_i; if (i > e - p) i = e - p; memcpy(vep->tag + vep->tag_i, p, i); vm = vep_match(vep, vep->tag, vep->tag + vep->tag_i + i); Debug("MB (%.*s) tag_i %d i %d = vm %p match %s\n", vep->tag_i + i, vep->tag, vep->tag_i, i, vm, vm ? vm->match : "(nil)"); if (vm == NULL) { vep->tag_i += i; p += i; assert(p == e); } else { vep->match_hit = vm; vep->state = *vm->state; if (vm->match != NULL) { i = strlen(vm->match); if (i > vep->tag_i) p += i - vep->tag_i; } vep->match = NULL; vep->tag_i = 0; } } else if (vep->state == VEP_UNTIL) { /* * Skip until we see magic string */ while (p < e) { if (*p++ != *vep->until_p++) { vep->until_p = vep->until; } else if (*vep->until_p == '\0') { vep->state = vep->until_s; break; } } if (p == e && !vep->remove) vep_mark_verbatim(vep, p); } else { Debug("*** Unknown state %s\n", vep->state); WRONG("WRONG ESI PARSER STATE"); } } /* * We must always mark up the storage we got, try to do so * in the most efficient way, in particular with respect to * minimizing and limiting use of pending. 
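 *
 * For illustration: if the input arrives split as "foo<e" + "si:remove>",
 * the first call marks "foo" verbatim but cannot yet classify "<e", so
 * those bytes are left pending; when the next call completes the tag
 * match, the pending bytes are folded into the skip covering the element.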
*/ if (p == vep->ver_p) ; else if (vep->in_esi_tag) vep_mark_skip(vep, p); else if (vep->remove) vep_mark_skip(vep, p); else vep_mark_pending(vep, p); } /*--------------------------------------------------------------------- */ static ssize_t v_matchproto_(vep_callback_t) vep_default_cb(struct vfp_ctx *vc, void *priv, ssize_t l, enum vgz_flag flg) { ssize_t *s; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); AN(priv); s = priv; *s += l; (void)flg; return (*s); } /*--------------------------------------------------------------------- */ struct vep_state * VEP_Init(struct vfp_ctx *vc, const struct http *req, vep_callback_t *cb, void *cb_priv) { struct vep_state *vep; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(req, HTTP_MAGIC); vep = WS_Alloc(vc->resp->ws, sizeof *vep); if (vep == NULL) { VSLb(vc->wrk->vsl, SLT_VCL_Error, "VEP_Init() workspace overflow"); return (NULL); } INIT_OBJ(vep, VEP_MAGIC); vep->url = req->hd[HTTP_HDR_URL].b; vep->vc = vc; vep->vsb = VSB_new_auto(); AN(vep->vsb); if (cb != NULL) { vep->dogzip = 1; /* XXX */ VSB_printf(vep->vsb, "%c", VEC_GZ); vep->cb = cb; vep->cb_priv = cb_priv; } else { vep->cb = vep_default_cb; vep->cb_priv = &vep->cb_x; } vep->state = VEP_START; vep->crc = crc32(0L, Z_NULL, 0); vep->crcp = crc32(0L, Z_NULL, 0); vep->startup = 1; return (vep); } /*--------------------------------------------------------------------- */ struct vsb * VEP_Finish(struct vep_state *vep) { ssize_t l, lcb; CHECK_OBJ_NOTNULL(vep, VEP_MAGIC); if (vep->include_src) VSB_destroy(&vep->include_src); if (vep->attr_vsb) VSB_destroy(&vep->attr_vsb); if (vep->state != VEP_START && vep->state != VEP_BOM && vep->state != VEP_TESTXML && vep->state != VEP_NOTXML && vep->state != VEP_NEXTTAG) { vep_error(vep, "VEP ended inside a tag"); } if (vep->o_pending) vep_mark_common(vep, vep->ver_p, vep->last_mark); if (vep->o_wait > 0) { lcb = vep->cb(vep->vc, vep->cb_priv, 0, VGZ_ALIGN); vep_emit_common(vep, lcb - vep->o_last, vep->last_mark); } // NB: We don't account for PAD+SUM+LEN in gzip'ed objects (void)vep->cb(vep->vc, vep->cb_priv, 0, VGZ_FINISH); AZ(VSB_finish(vep->vsb)); l = VSB_len(vep->vsb); if (vep->esi_found && l > 0) return (vep->vsb); VSB_destroy(&vep->vsb); return (NULL); } #if 0 digraph xml { rankdir="LR" size="7,10" ################################################################# # SECTION A # START [shape=ellipse] TESTXML [shape=ellipse] NOTXML [shape=ellipse] NEXTTAGa [shape=hexagon, label="NEXTTAG"] STARTTAGa [shape=hexagon, label="STARTTAG"] START -> TESTXML START -> NEXTTAGa [style=dotted, label="syntax:1"] TESTXML -> TESTXML [label="lws"] TESTXML -> NOTXML TESTXML -> STARTTAGa [label="'<'"] ################################################################# # SECTION B NOTMYTAG [shape=ellipse] NEXTTAG [shape=ellipse] NOTMYTAG -> NEXTTAG [style=dotted, label="syntax:2"] STARTTAGb [shape=hexagon, label="STARTTAG"] NOTMYTAG -> NEXTTAG [label="'>'"] NOTMYTAG -> NOTMYTAG [label="*"] NEXTTAG -> NEXTTAG [label="'-->'"] NEXTTAG -> NEXTTAG [label="*"] NEXTTAG -> STARTTAGb [label="'<'"] ################################################################# # SECTION C STARTTAG [shape=ellipse] COMMENT [shape=ellipse] CDATA [shape=ellipse] ESITAG [shape=ellipse] ESIETAG [shape=ellipse] ESIINCLUDE [shape=ellipse] ESIREMOVE [shape=ellipse] ESICOMMENT [shape=ellipse] ESIBOGON [shape=ellipse] INTAGc [shape=hexagon, label="INTAG"] NOTMYTAGc [shape=hexagon, label="NOTMYTAG"] NEXTTAGc [shape=hexagon, label="NEXTTAG"] TAGERRORc [shape=hexagon, label="TAGERROR"] C1 [shape=circle,label=""] 
STARTTAG -> COMMENT [label="'"] CDATA -> CDATA [label="*"] CDATA -> NEXTTAGc [label="]]>"] ESITAG -> ESIINCLUDE [label="'include'"] ESITAG -> ESIREMOVE [label="'remove'"] ESITAG -> ESICOMMENT [label="'comment'"] ESITAG -> ESIBOGON [label="*"] ESICOMMENT -> INTAGc ESICOMMENT -> TAGERRORc ESICOMMENT -> TAGERRORc [style=dotted, label="nested\nin\nremove"] ESIREMOVE -> INTAGc ESIREMOVE -> TAGERRORc ESIINCLUDE -> INTAGc ESIINCLUDE -> TAGERRORc ESIINCLUDE -> TAGERRORc [style=dotted, label="nested\nin\nremove"] ESIBOGON -> TAGERRORc ################################################################# # SECTION D INTAG [shape=ellipse] TAGERROR [shape=ellipse] NEXTTAGd [shape=hexagon, label="NEXTTAG"] ATTRd [shape=hexagon, label="ATTR"] D1 [shape=circle, label=""] D2 [shape=circle, label=""] INTAG -> D1 [label="lws"] D1 -> D2 [label="/"] INTAG -> D2 [label="/"] INTAG -> NEXTTAGd [label=">"] D1 -> NEXTTAGd [label=">"] D2 -> NEXTTAGd [label=">"] D1 -> ATTRd [label="XMLstartchar"] D1 -> TAGERROR [label="*"] D2 -> TAGERROR [label="*"] TAGERROR -> TAGERROR [label="*"] TAGERROR -> NEXTTAGd [label="'>'"] ################################################################# # SECTION E ATTR [shape=ellipse] SKIPATTR [shape=ellipse] ATTRGETVAL [shape=ellipse] ATTRDELIM [shape=ellipse] ATTRVAL [shape=ellipse] TAGERRORe [shape=hexagon, label="TAGERROR"] INTAGe [shape=hexagon, label="INTAG"] ATTR -> SKIPATTR [label="*"] ATTR -> ATTRGETVAL [label="wanted attr"] SKIPATTR -> SKIPATTR [label="XMLname"] SKIPATTR -> ATTRDELIM [label="'='"] SKIPATTR -> TAGERRORe [label="*"] ATTRGETVAL -> ATTRDELIM ATTRDELIM -> ATTRVAL [label="\""] ATTRDELIM -> ATTRVAL [label="\'"] ATTRDELIM -> ATTRVAL [label="*"] ATTRDELIM -> TAGERRORe [label="lws"] ATTRVAL -> TAGERRORe [label="'>'"] ATTRVAL -> INTAGe [label="delim"] ATTRVAL -> ATTRVAL [label="*"] } #endif varnish-7.5.0/bin/varnishd/cache/cache_expire.c000066400000000000000000000274571457605730600214450ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * LRU and object timer handling. 
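 *
 * Rough sketch of the mechanics (illustrative, not normative): every
 * objcore handed to the expiry thread carries t_origin, ttl, grace and
 * keep, and becomes due for removal around
 *
 *	t_origin + ttl + grace + keep
 *
 * All such objcores sit in a binary heap ordered by that deadline, so
 * the expiry thread only ever needs to look at the heap root.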
* */ #include "config.h" #include #include "cache_varnishd.h" #include "cache_objhead.h" #include "vbh.h" #include "vtim.h" struct exp_priv { unsigned magic; #define EXP_PRIV_MAGIC 0x9db22482 /* shared */ struct lock mtx; VSTAILQ_HEAD(,objcore) inbox; pthread_cond_t condvar; /* owned by exp thread */ struct worker *wrk; struct vsl_log vsl; struct vbh *heap; pthread_t thread; }; static struct exp_priv *exphdl; static int exp_shutdown = 0; /*-------------------------------------------------------------------- * Calculate an object's effective ttl time, taking req.ttl into account * if it is available. */ vtim_real EXP_Ttl(const struct req *req, const struct objcore *oc) { vtim_dur r; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); r = oc->ttl; if (req != NULL && req->d_ttl > 0. && req->d_ttl < r) r = req->d_ttl; return (oc->t_origin + r); } /*-------------------------------------------------------------------- * Calculate an object's effective ttl+grace time, taking req.grace into * account if it is available. */ vtim_real EXP_Ttl_grace(const struct req *req, const struct objcore *oc) { vtim_dur g; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); g = oc->grace; if (req != NULL && req->d_grace >= 0. && req->d_grace < g) g = req->d_grace; return (EXP_Ttl(req, oc) + g); } /*-------------------------------------------------------------------- * Post an objcore to the exp_thread's inbox. */ static void exp_mail_it(struct objcore *oc, uint8_t cmds) { CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); assert(oc->refcnt > 0); AZ(cmds & OC_EF_REFD); Lck_AssertHeld(&exphdl->mtx); if (oc->exp_flags & OC_EF_REFD) { if (!(oc->exp_flags & OC_EF_POSTED)) { if (cmds & OC_EF_REMOVE) VSTAILQ_INSERT_HEAD(&exphdl->inbox, oc, exp_list); else VSTAILQ_INSERT_TAIL(&exphdl->inbox, oc, exp_list); VSC_C_main->exp_mailed++; } oc->exp_flags |= cmds | OC_EF_POSTED; PTOK(pthread_cond_signal(&exphdl->condvar)); } } /*-------------------------------------------------------------------- * Setup a new ObjCore for control by expire. Should be called with the * ObjHead locked by HSH_Unbusy(/HSH_Insert) (in private access). */ void EXP_RefNewObjcore(struct objcore *oc) { CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); Lck_AssertHeld(&oc->objhead->mtx); AZ(oc->exp_flags); assert(oc->refcnt >= 1); oc->refcnt++; oc->exp_flags |= OC_EF_REFD | OC_EF_NEW; } /*-------------------------------------------------------------------- * Call EXP's attention to a an oc */ void EXP_Remove(struct objcore *oc, const struct objcore *new_oc) { CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_ORNULL(new_oc, OBJCORE_MAGIC); if (oc->exp_flags & OC_EF_REFD) { Lck_Lock(&exphdl->mtx); if (new_oc != NULL) VSC_C_main->n_superseded++; if (oc->exp_flags & OC_EF_NEW) { /* EXP_Insert has not been called for this object * yet. Mark it for removal, and EXP_Insert will * clean up once it is called. */ AZ(oc->exp_flags & OC_EF_POSTED); oc->exp_flags |= OC_EF_REMOVE; } else exp_mail_it(oc, OC_EF_REMOVE); Lck_Unlock(&exphdl->mtx); } } /*-------------------------------------------------------------------- * Insert new object. * * Caller got a oc->refcnt for us. */ void EXP_Insert(struct worker *wrk, struct objcore *oc) { unsigned remove_race = 0; struct objcore *tmpoc; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AZ(oc->flags & OC_F_BUSY); if (!(oc->exp_flags & OC_EF_REFD)) return; /* One ref held by the caller, and one that wil be owned by * expiry. 
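 * (If EXP_Remove() already ran for this objcore, the OC_EF_REMOVE
 * branch below skips the insert entirely and drops the expiry
 * reference again right here.)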
*/ assert(oc->refcnt >= 2); ObjSendEvent(wrk, oc, OEV_INSERT); Lck_Lock(&exphdl->mtx); AN(oc->exp_flags & OC_EF_NEW); oc->exp_flags &= ~OC_EF_NEW; AZ(oc->exp_flags & (OC_EF_INSERT | OC_EF_MOVE | OC_EF_POSTED)); if (oc->exp_flags & OC_EF_REMOVE) { /* We raced some other thread executing EXP_Remove */ remove_race = 1; oc->exp_flags &= ~(OC_EF_REFD | OC_EF_REMOVE); } else exp_mail_it(oc, OC_EF_INSERT | OC_EF_MOVE); Lck_Unlock(&exphdl->mtx); if (remove_race) { ObjSendEvent(wrk, oc, OEV_EXPIRE); tmpoc = oc; assert(oc->refcnt >= 2); /* Silence coverity */ (void)HSH_DerefObjCore(wrk, &oc, 0); AZ(oc); assert(tmpoc->refcnt >= 1); /* Silence coverity */ } } /*-------------------------------------------------------------------- * We have changed one or more of the object timers, tell the exp_thread * */ void EXP_Rearm(struct objcore *oc, vtim_real now, vtim_dur ttl, vtim_dur grace, vtim_dur keep) { vtim_real when; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); assert(oc->refcnt > 0); if (!(oc->exp_flags & OC_EF_REFD)) return; if (!isnan(ttl)) oc->ttl = now + ttl - oc->t_origin; if (!isnan(grace)) oc->grace = grace; if (!isnan(keep)) oc->keep = keep; when = EXP_WHEN(oc); VSL(SLT_ExpKill, NO_VXID, "EXP_Rearm p=%p E=%.6f e=%.6f f=0x%x", oc, oc->timer_when, when, oc->flags); if (when < oc->t_origin || when < oc->timer_when) { Lck_Lock(&exphdl->mtx); if (oc->exp_flags & OC_EF_NEW) { /* EXP_Insert has not been called yet, do nothing * as the initial insert will execute the move * operation. */ } else exp_mail_it(oc, OC_EF_MOVE); Lck_Unlock(&exphdl->mtx); } } /*-------------------------------------------------------------------- * Handle stuff in the inbox */ static void exp_inbox(struct exp_priv *ep, struct objcore *oc, unsigned flags, double now) { CHECK_OBJ_NOTNULL(ep, EXP_PRIV_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); assert(oc->refcnt > 0); VSLb(&ep->vsl, SLT_ExpKill, "EXP_Inbox flg=%x p=%p e=%.6f f=0x%x", flags, oc, oc->timer_when, oc->flags); if (flags & OC_EF_REMOVE) { if (!(flags & OC_EF_INSERT)) { assert(oc->timer_idx != VBH_NOIDX); VBH_delete(ep->heap, oc->timer_idx); } assert(oc->timer_idx == VBH_NOIDX); assert(oc->refcnt > 0); AZ(oc->exp_flags); VSLb(&ep->vsl, SLT_ExpKill, "EXP_Removed x=%ju t=%.0f h=%jd", VXID(ObjGetXID(ep->wrk, oc)), EXP_Ttl(NULL, oc) - now, (intmax_t)oc->hits); ObjSendEvent(ep->wrk, oc, OEV_EXPIRE); (void)HSH_DerefObjCore(ep->wrk, &oc, 0); return; } if (flags & OC_EF_MOVE) { oc->timer_when = EXP_WHEN(oc); ObjSendEvent(ep->wrk, oc, OEV_TTLCHG); } VSLb(&ep->vsl, SLT_ExpKill, "EXP_When p=%p e=%.6f f=0x%x", oc, oc->timer_when, flags); /* * XXX: There are some pathological cases here, were we * XXX: insert or move an expired object, only to find out * XXX: the next moment and rip them out again. */ if (flags & OC_EF_INSERT) { assert(oc->timer_idx == VBH_NOIDX); VBH_insert(exphdl->heap, oc); assert(oc->timer_idx != VBH_NOIDX); } else if (flags & OC_EF_MOVE) { assert(oc->timer_idx != VBH_NOIDX); VBH_reorder(exphdl->heap, oc->timer_idx); assert(oc->timer_idx != VBH_NOIDX); } else { WRONG("Objcore state wrong in inbox"); } } /*-------------------------------------------------------------------- * Expire stuff from the binheap */ static vtim_real exp_expire(struct exp_priv *ep, vtim_real now) { struct objcore *oc; CHECK_OBJ_NOTNULL(ep, EXP_PRIV_MAGIC); oc = VBH_root(ep->heap); if (oc == NULL) return (now + 355. / 113.); VSLb(&ep->vsl, SLT_ExpKill, "EXP_Inspect p=%p e=%.6f f=0x%x", oc, oc->timer_when - now, oc->flags); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); /* Ready ? 
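 * The heap root carries the earliest deadline; if even that lies in
 * the future nothing can expire yet, and we return it as the next
 * wake-up time.  (The 355./113. above is just an approximation of pi,
 * i.e. an arbitrary ~3.14 second nap while the heap is empty.)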
*/ if (oc->timer_when > now) return (oc->timer_when); VSC_C_main->n_expired++; Lck_Lock(&ep->mtx); if (oc->exp_flags & OC_EF_POSTED) { oc->exp_flags |= OC_EF_REMOVE; oc = NULL; } else { oc->exp_flags &= ~OC_EF_REFD; } Lck_Unlock(&ep->mtx); if (oc != NULL) { if (!(oc->flags & OC_F_DYING)) HSH_Kill(oc); /* Remove from binheap */ assert(oc->timer_idx != VBH_NOIDX); VBH_delete(ep->heap, oc->timer_idx); assert(oc->timer_idx == VBH_NOIDX); CHECK_OBJ_NOTNULL(oc->objhead, OBJHEAD_MAGIC); VSLb(&ep->vsl, SLT_ExpKill, "EXP_Expired x=%ju t=%.0f h=%jd", VXID(ObjGetXID(ep->wrk, oc)), EXP_Ttl(NULL, oc) - now, (intmax_t)oc->hits); ObjSendEvent(ep->wrk, oc, OEV_EXPIRE); (void)HSH_DerefObjCore(ep->wrk, &oc, 0); } return (0); } /*-------------------------------------------------------------------- * This thread monitors the root of the binary heap and whenever an * object expires, accounting also for graceability, it is killed. */ static int v_matchproto_(vbh_cmp_t) object_cmp(void *priv, const void *a, const void *b) { const struct objcore *aa, *bb; (void)priv; CAST_OBJ_NOTNULL(aa, a, OBJCORE_MAGIC); CAST_OBJ_NOTNULL(bb, b, OBJCORE_MAGIC); return (aa->timer_when < bb->timer_when); } static void v_matchproto_(vbh_update_t) object_update(void *priv, void *p, unsigned u) { struct objcore *oc; (void)priv; CAST_OBJ_NOTNULL(oc, p, OBJCORE_MAGIC); oc->timer_idx = u; } static void * v_matchproto_(bgthread_t) exp_thread(struct worker *wrk, void *priv) { struct objcore *oc; vtim_real t = 0, tnext = 0; struct exp_priv *ep; unsigned flags = 0; CAST_OBJ_NOTNULL(ep, priv, EXP_PRIV_MAGIC); ep->wrk = wrk; VSL_Setup(&ep->vsl, NULL, 0); ep->heap = VBH_new(NULL, object_cmp, object_update); AN(ep->heap); while (exp_shutdown == 0) { Lck_Lock(&ep->mtx); oc = VSTAILQ_FIRST(&ep->inbox); CHECK_OBJ_ORNULL(oc, OBJCORE_MAGIC); if (oc != NULL) { assert(oc->refcnt >= 1); VSTAILQ_REMOVE(&ep->inbox, oc, objcore, exp_list); VSC_C_main->exp_received++; tnext = 0; flags = oc->exp_flags; if (flags & OC_EF_REMOVE) oc->exp_flags = 0; else oc->exp_flags &= OC_EF_REFD; } else if (tnext > t) { VSL_Flush(&ep->vsl, 0); Pool_Sumstat(wrk); (void)Lck_CondWaitUntil(&ep->condvar, &ep->mtx, tnext); } Lck_Unlock(&ep->mtx); t = VTIM_real(); if (oc != NULL) exp_inbox(ep, oc, flags, t); else tnext = exp_expire(ep, t); } return (NULL); } /*--------------------------------------------------------------------*/ void EXP_Init(void) { struct exp_priv *ep; pthread_t pt; ALLOC_OBJ(ep, EXP_PRIV_MAGIC); AN(ep); Lck_New(&ep->mtx, lck_exp); PTOK(pthread_cond_init(&ep->condvar, NULL)); VSTAILQ_INIT(&ep->inbox); WRK_BgThread(&pt, "cache-exp", exp_thread, ep); ep->thread = pt; exphdl = ep; } void EXP_Shutdown(void) { struct exp_priv *ep = exphdl; void *status; Lck_Lock(&ep->mtx); exp_shutdown = 1; PTOK(pthread_cond_signal(&ep->condvar)); Lck_Unlock(&ep->mtx); AN(ep->thread); PTOK(pthread_join(ep->thread, &status)); AZ(status); memset(&ep->thread, 0, sizeof ep->thread); /* XXX could cleanup more - not worth it for now */ } varnish-7.5.0/bin/varnishd/cache/cache_fetch.c000066400000000000000000000771201457605730600212320ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include "cache_varnishd.h" #include "cache_filter.h" #include "cache_objhead.h" #include "storage/storage.h" #include "vcl.h" #include "vtim.h" #include "vcc_interface.h" #define FETCH_STEPS \ FETCH_STEP(mkbereq, MKBEREQ) \ FETCH_STEP(retry, RETRY) \ FETCH_STEP(startfetch, STARTFETCH) \ FETCH_STEP(condfetch, CONDFETCH) \ FETCH_STEP(fetch, FETCH) \ FETCH_STEP(fetchbody, FETCHBODY) \ FETCH_STEP(fetchend, FETCHEND) \ FETCH_STEP(error, ERROR) \ FETCH_STEP(fail, FAIL) \ FETCH_STEP(done, DONE) typedef const struct fetch_step *vbf_state_f(struct worker *, struct busyobj *); struct fetch_step { const char *name; vbf_state_f *func; }; #define FETCH_STEP(l, U) \ static vbf_state_f vbf_stp_##l; \ static const struct fetch_step F_STP_##U[1] = {{ .name = "Fetch Step " #l, .func = vbf_stp_##l, }}; FETCH_STEPS #undef FETCH_STEP /*-------------------------------------------------------------------- * Allocate an object, with fall-back to Transient. * XXX: This somewhat overlaps the stuff in stevedore.c * XXX: Should this be merged over there ? */ static int vbf_allocobj(struct busyobj *bo, unsigned l) { struct objcore *oc; const struct stevedore *stv; vtim_dur lifetime; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); oc = bo->fetch_objcore; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); lifetime = oc->ttl + oc->grace + oc->keep; if (bo->uncacheable) { stv = stv_transient; bo->wrk->stats->beresp_uncacheable++; } else if (lifetime < cache_param->shortlived) { stv = stv_transient; bo->wrk->stats->beresp_shortlived++; } else stv = bo->storage; bo->storage = NULL; if (stv == NULL) return (0); if (STV_NewObject(bo->wrk, oc, stv, l)) return (1); if (stv == stv_transient) return (0); /* * Try to salvage the transaction by allocating a shortlived object * on Transient storage. 
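 * For example, with the shortlived parameter at, say, 10 seconds: an
 * object that wanted ttl=3600s but could not get space in its chosen
 * stevedore is retried on Transient with ttl clamped to 10s and
 * grace/keep zeroed, as done just below.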
*/ oc->ttl = vmin_t(float, oc->ttl, cache_param->shortlived); oc->grace = 0.0; oc->keep = 0.0; return (STV_NewObject(bo->wrk, oc, stv_transient, l)); } static void vbf_cleanup(struct busyobj *bo) { struct vfp_ctx *vfc; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); vfc = bo->vfc; CHECK_OBJ_NOTNULL(vfc, VFP_CTX_MAGIC); bo->acct.beresp_bodybytes += VFP_Close(vfc); bo->vfp_filter_list = NULL; if (bo->director_state != DIR_S_NULL) VDI_Finish(bo); } void Bereq_Rollback(VRT_CTX) { struct busyobj *bo; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); bo = ctx->bo; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); if (bo->htc != NULL && bo->htc->body_status != BS_NONE && bo->htc->body_status != BS_TAKEN) bo->htc->doclose = SC_RESP_CLOSE; vbf_cleanup(bo); VCL_TaskLeave(ctx, bo->privs); VCL_TaskEnter(bo->privs); HTTP_Clone(bo->bereq, bo->bereq0); bo->vfp_filter_list = NULL; bo->err_reason = NULL; AN(bo->ws_bo); WS_Rollback(bo->ws, bo->ws_bo); } /*-------------------------------------------------------------------- * Turn the beresp into a obj */ static int vbf_beresp2obj(struct busyobj *bo) { unsigned l, l2; const char *b; uint8_t *bp; struct vsb *vary = NULL; int varyl = 0; struct objcore *oc; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); oc = bo->fetch_objcore; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); l = 0; /* Create Vary instructions */ if (!(oc->flags & OC_F_PRIVATE)) { varyl = VRY_Create(bo, &vary); if (varyl > 0) { AN(vary); assert(varyl == VSB_len(vary)); l += PRNDUP((intptr_t)varyl); } else if (varyl < 0) { /* * Vary parse error * Complain about it, and make this a pass. */ VSLb(bo->vsl, SLT_Error, "Illegal 'Vary' header from backend, " "making this a pass."); bo->uncacheable = 1; AZ(vary); } else /* No vary */ AZ(vary); } l2 = http_EstimateWS(bo->beresp, bo->uncacheable ? HTTPH_A_PASS : HTTPH_A_INS); l += l2; if (bo->uncacheable) oc->flags |= OC_F_HFM; if (!vbf_allocobj(bo, l)) { if (vary != NULL) VSB_destroy(&vary); AZ(vary); return (VFP_Error(bo->vfc, "Could not get storage")); } if (vary != NULL) { AN(ObjSetAttr(bo->wrk, oc, OA_VARY, varyl, VSB_data(vary))); VSB_destroy(&vary); } AZ(ObjSetXID(bo->wrk, oc, bo->vsl->wid)); /* for HTTP_Encode() VSLH call */ bo->beresp->logtag = SLT_ObjMethod; /* Filter into object */ bp = ObjSetAttr(bo->wrk, oc, OA_HEADERS, l2, NULL); AN(bp); HTTP_Encode(bo->beresp, bp, l2, bo->uncacheable ? HTTPH_A_PASS : HTTPH_A_INS); if (http_GetHdr(bo->beresp, H_Last_Modified, &b)) AZ(ObjSetDouble(bo->wrk, oc, OA_LASTMODIFIED, VTIM_parse(b))); else AZ(ObjSetDouble(bo->wrk, oc, OA_LASTMODIFIED, floor(oc->t_origin))); return (0); } /*-------------------------------------------------------------------- * Copy req->bereq and release req if no body */ static const struct fetch_step * v_matchproto_(vbf_state_f) vbf_stp_mkbereq(struct worker *wrk, struct busyobj *bo) { const char *q; struct objcore *oc; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); CHECK_OBJ_NOTNULL(bo->req, REQ_MAGIC); oc = bo->fetch_objcore; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); assert(oc->boc->state == BOS_INVALID); AZ(bo->storage); HTTP_Setup(bo->bereq0, bo->ws, bo->vsl, SLT_BereqMethod); http_FilterReq(bo->bereq0, bo->req->http, bo->uncacheable ? 
HTTPH_R_PASS : HTTPH_R_FETCH); if (bo->uncacheable) AZ(bo->stale_oc); else { http_ForceField(bo->bereq0, HTTP_HDR_METHOD, "GET"); if (cache_param->http_gzip_support) http_ForceHeader(bo->bereq0, H_Accept_Encoding, "gzip"); } http_ForceField(bo->bereq0, HTTP_HDR_PROTO, "HTTP/1.1"); if (bo->stale_oc != NULL && ObjCheckFlag(bo->wrk, bo->stale_oc, OF_IMSCAND) && (bo->stale_oc->boc != NULL || ObjGetLen(wrk, bo->stale_oc) != 0)) { AZ(bo->stale_oc->flags & (OC_F_HFM|OC_F_PRIVATE)); q = RFC2616_Strong_LM(NULL, wrk, bo->stale_oc); if (q != NULL) http_PrintfHeader(bo->bereq0, "If-Modified-Since: %s", q); q = HTTP_GetHdrPack(bo->wrk, bo->stale_oc, H_ETag); if (q != NULL) http_PrintfHeader(bo->bereq0, "If-None-Match: %s", q); } http_CopyHome(bo->bereq0); HTTP_Setup(bo->bereq, bo->ws, bo->vsl, SLT_BereqMethod); bo->ws_bo = WS_Snapshot(bo->ws); HTTP_Clone(bo->bereq, bo->bereq0); if (bo->req->req_body_status->avail == 0) { bo->req = NULL; ObjSetState(bo->wrk, oc, BOS_REQ_DONE); } else if (bo->req->req_body_status == BS_CACHED) { AN(bo->req->body_oc); bo->bereq_body = bo->req->body_oc; HSH_Ref(bo->bereq_body); bo->req = NULL; ObjSetState(bo->wrk, oc, BOS_REQ_DONE); } return (F_STP_STARTFETCH); } /*-------------------------------------------------------------------- * Start a new VSL transaction and try again * Prepare the busyobj and fetch processors */ static const struct fetch_step * v_matchproto_(vbf_state_f) vbf_stp_retry(struct worker *wrk, struct busyobj *bo) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); assert(bo->fetch_objcore->boc->state <= BOS_REQ_DONE); if (bo->no_retry != NULL) { VSLb(bo->vsl, SLT_Error, "Retry not possible, %s", bo->no_retry); return (F_STP_FAIL); } VSLb_ts_busyobj(bo, "Retry", W_TIM_real(wrk)); /* VDI_Finish (via vbf_cleanup) must have been called before */ assert(bo->director_state == DIR_S_NULL); /* reset other bo attributes - See VBO_GetBusyObj */ bo->storage = NULL; bo->do_esi = 0; bo->do_stream = 1; bo->was_304 = 0; bo->err_code = 0; bo->err_reason = NULL; bo->connect_timeout = NAN; bo->first_byte_timeout = NAN; bo->between_bytes_timeout = NAN; if (bo->htc != NULL) bo->htc->doclose = SC_NULL; // XXX: BereqEnd + BereqAcct ? VSL_ChgId(bo->vsl, "bereq", "retry", VXID_Get(wrk, VSL_BACKENDMARKER)); VSLb_ts_busyobj(bo, "Start", bo->t_prev); http_VSL_log(bo->bereq); return (F_STP_STARTFETCH); } /*-------------------------------------------------------------------- * 304 setup logic */ static int vbf_304_logic(struct busyobj *bo) { if (bo->stale_oc != NULL && ObjCheckFlag(bo->wrk, bo->stale_oc, OF_IMSCAND)) { AZ(bo->stale_oc->flags & (OC_F_HFM|OC_F_PRIVATE)); if (ObjCheckFlag(bo->wrk, bo->stale_oc, OF_CHGCE)) { /* * If a VFP changed C-E in the stored * object, then don't overwrite C-E from * the IMS fetch, and we must weaken any * new ETag we get. 
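 * (Weakening means prefixing the validator, e.g. turning
 *  ETag: "abc123"  into  ETag: W/"abc123", because the stored body
 * is no longer byte-for-byte what the backend validated.)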
*/ RFC2616_Weaken_Etag(bo->beresp); } http_Unset(bo->beresp, H_Content_Encoding); http_Unset(bo->beresp, H_Content_Length); HTTP_Merge(bo->wrk, bo->stale_oc, bo->beresp); assert(http_IsStatus(bo->beresp, 200)); bo->was_304 = 1; } else if (!bo->uncacheable) { /* * Backend sent unallowed 304 */ VSLb(bo->vsl, SLT_Error, "304 response but not conditional fetch"); bo->htc->doclose = SC_RX_BAD; vbf_cleanup(bo); return (-1); } return (1); } /*-------------------------------------------------------------------- * Setup bereq from bereq0, run vcl_backend_fetch */ static const struct fetch_step * v_matchproto_(vbf_state_f) vbf_stp_startfetch(struct worker *wrk, struct busyobj *bo) { int i; vtim_real now; unsigned handling; struct objcore *oc; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); oc = bo->fetch_objcore; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AZ(bo->storage); bo->storage = bo->uncacheable ? stv_transient : STV_next(); if (bo->retries > 0) http_Unset(bo->bereq, "\012X-Varnish:"); http_PrintfHeader(bo->bereq, "X-Varnish: %ju", VXID(bo->vsl->wid)); VCL_backend_fetch_method(bo->vcl, wrk, NULL, bo, NULL); if (wrk->vpi->handling == VCL_RET_ABANDON || wrk->vpi->handling == VCL_RET_FAIL) return (F_STP_FAIL); assert (wrk->vpi->handling == VCL_RET_FETCH || wrk->vpi->handling == VCL_RET_ERROR); HTTP_Setup(bo->beresp, bo->ws, bo->vsl, SLT_BerespMethod); assert(oc->boc->state <= BOS_REQ_DONE); AZ(bo->htc); VFP_Setup(bo->vfc, wrk); bo->vfc->oc = oc; bo->vfc->resp = bo->beresp; bo->vfc->req = bo->bereq; if (wrk->vpi->handling == VCL_RET_ERROR) return (F_STP_ERROR); VSLb_ts_busyobj(bo, "Fetch", W_TIM_real(wrk)); i = VDI_GetHdr(bo); if (bo->htc != NULL) CHECK_OBJ_NOTNULL(bo->htc->doclose, STREAM_CLOSE_MAGIC); bo->t_resp = now = W_TIM_real(wrk); VSLb_ts_busyobj(bo, "Beresp", now); if (i) { assert(bo->director_state == DIR_S_NULL); return (F_STP_ERROR); } if (bo->htc != NULL && bo->htc->body_status == BS_ERROR) { bo->htc->doclose = SC_RX_BODY; vbf_cleanup(bo); VSLb(bo->vsl, SLT_Error, "Body cannot be fetched"); assert(bo->director_state == DIR_S_NULL); return (F_STP_ERROR); } if (!http_GetHdr(bo->beresp, H_Date, NULL)) { /* * RFC 2616 14.18 Date: The Date general-header field * represents the date and time at which the message was * originated, having the same semantics as orig-date in * RFC 822. ... A received message that does not have a * Date header field MUST be assigned one by the recipient * if the message will be cached by that recipient or * gatewayed via a protocol which requires a Date. * * If we didn't get a Date header, we assign one here. */ http_TimeHeader(bo->beresp, "Date: ", now); } /* * These two headers can be spread over multiple actual headers * and we rely on their content outside of VCL, so collect them * into one line here. */ http_CollectHdr(bo->beresp, H_Cache_Control); http_CollectHdr(bo->beresp, H_Vary); /* What does RFC2616 think about TTL ? 
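 * Roughly: t_origin, ttl, grace and keep are derived from the
 * response's Date, Age, Expires and Cache-Control (s-maxage/max-age)
 * headers, falling back to the configured defaults.  E.g.
 * "Cache-Control: max-age=300" combined with "Age: 60" leaves about
 * 240 seconds of freshness.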
*/ RFC2616_Ttl(bo, now, &oc->t_origin, &oc->ttl, &oc->grace, &oc->keep); AZ(bo->do_esi); AZ(bo->was_304); if (http_IsStatus(bo->beresp, 304) && vbf_304_logic(bo) < 0) return (F_STP_ERROR); if (bo->htc != NULL && bo->htc->doclose == SC_NULL && http_GetHdrField(bo->bereq, H_Connection, "close", NULL)) bo->htc->doclose = SC_REQ_CLOSE; VCL_backend_response_method(bo->vcl, wrk, NULL, bo, NULL); if (bo->htc != NULL && bo->htc->doclose == SC_NULL && http_GetHdrField(bo->beresp, H_Connection, "close", NULL)) bo->htc->doclose = SC_RESP_CLOSE; if (VRG_CheckBo(bo) < 0) { VDI_Finish(bo); return (F_STP_ERROR); } if (wrk->vpi->handling == VCL_RET_ABANDON || wrk->vpi->handling == VCL_RET_FAIL || wrk->vpi->handling == VCL_RET_ERROR) { /* do not count deliberately ending the backend connection as * fetch failure */ handling = wrk->vpi->handling; if (bo->htc) bo->htc->doclose = SC_RESP_CLOSE; vbf_cleanup(bo); wrk->vpi->handling = handling; if (wrk->vpi->handling == VCL_RET_ERROR) return (F_STP_ERROR); else return (F_STP_FAIL); } if (wrk->vpi->handling == VCL_RET_RETRY) { if (bo->htc && bo->htc->body_status != BS_NONE) bo->htc->doclose = SC_RESP_CLOSE; vbf_cleanup(bo); if (bo->retries++ < cache_param->max_retries) return (F_STP_RETRY); VSLb(bo->vsl, SLT_VCL_Error, "Too many retries, delivering 503"); assert(bo->director_state == DIR_S_NULL); return (F_STP_ERROR); } VSLb_ts_busyobj(bo, "Process", W_TIM_real(wrk)); assert(oc->boc->state <= BOS_REQ_DONE); if (oc->boc->state != BOS_REQ_DONE) { bo->req = NULL; ObjSetState(wrk, oc, BOS_REQ_DONE); } if (bo->do_esi) bo->do_stream = 0; if (wrk->vpi->handling == VCL_RET_PASS) { oc->flags |= OC_F_HFP; bo->uncacheable = 1; wrk->vpi->handling = VCL_RET_DELIVER; } if (!bo->uncacheable || !bo->do_stream) oc->boc->transit_buffer = 0; if (bo->uncacheable) oc->flags |= OC_F_HFM; assert(wrk->vpi->handling == VCL_RET_DELIVER); return (bo->was_304 ? F_STP_CONDFETCH : F_STP_FETCH); } /*-------------------------------------------------------------------- */ static const struct fetch_step * v_matchproto_(vbf_state_f) vbf_stp_fetchbody(struct worker *wrk, struct busyobj *bo) { ssize_t l; uint8_t *ptr; enum vfp_status vfps = VFP_ERROR; ssize_t est; struct vfp_ctx *vfc; struct objcore *oc; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); vfc = bo->vfc; CHECK_OBJ_NOTNULL(vfc, VFP_CTX_MAGIC); oc = bo->fetch_objcore; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AN(vfc->vfp_nxt); est = bo->htc->content_length; if (est < 0) est = 0; do { if (oc->flags & OC_F_CANCEL) { /* * A pass object and delivery was terminated * We don't fail the fetch, in order for HitMiss * objects to be created. */ AN(oc->flags & OC_F_HFM); VSLb(wrk->vsl, SLT_Debug, "Fetch: Pass delivery abandoned"); bo->htc->doclose = SC_RX_BODY; break; } AZ(vfc->failed); l = est; oc = bo->fetch_objcore; if (oc->boc->transit_buffer > 0) l = vmin_t(ssize_t, l, oc->boc->transit_buffer); assert(l >= 0); if (VFP_GetStorage(vfc, &l, &ptr) != VFP_OK) { bo->htc->doclose = SC_RX_BODY; break; } AZ(vfc->failed); vfps = VFP_Suck(vfc, ptr, &l); if (l >= 0 && vfps != VFP_ERROR) { VFP_Extend(vfc, l, vfps); if (est >= l) est -= l; else est = 0; } } while (vfps == VFP_OK); if (vfc->failed) { (void)VFP_Error(vfc, "Fetch pipeline failed to process"); bo->htc->doclose = SC_RX_BODY; vbf_cleanup(bo); if (!bo->do_stream) { assert(oc->boc->state < BOS_STREAM); // XXX: doclose = ? 
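/*
 * Not streamed yet: no client has seen this object, so we can
 * still synthesize an error object via vbf_stp_error(); once
 * streaming has begun the only option is to fail the fetch.
 */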
return (F_STP_ERROR); } else { wrk->stats->fetch_failed++; return (F_STP_FAIL); } } return (F_STP_FETCHEND); } static const struct fetch_step * v_matchproto_(vbf_state_f) vbf_stp_fetch(struct worker *wrk, struct busyobj *bo) { struct vrt_ctx ctx[1]; struct objcore *oc; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); oc = bo->fetch_objcore; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); assert(wrk->vpi->handling == VCL_RET_DELIVER); if (bo->htc == NULL) { (void)VFP_Error(bo->vfc, "No backend connection (rollback?)"); vbf_cleanup(bo); return (F_STP_ERROR); } /* No body -> done */ if (bo->htc->body_status == BS_NONE || bo->htc->content_length == 0) { http_Unset(bo->beresp, H_Content_Encoding); bo->do_gzip = bo->do_gunzip = 0; bo->do_stream = 0; bo->vfp_filter_list = ""; } else if (bo->vfp_filter_list == NULL) { bo->vfp_filter_list = VBF_Get_Filter_List(bo); } if (bo->vfp_filter_list == NULL || VCL_StackVFP(bo->vfc, bo->vcl, bo->vfp_filter_list)) { (bo)->htc->doclose = SC_OVERLOAD; vbf_cleanup(bo); return (F_STP_ERROR); } if (oc->flags & OC_F_PRIVATE) AN(bo->uncacheable); oc->boc->fetched_so_far = 0; INIT_OBJ(ctx, VRT_CTX_MAGIC); VCL_Bo2Ctx(ctx, bo); if (VFP_Open(ctx, bo->vfc)) { (void)VFP_Error(bo->vfc, "Fetch pipeline failed to open"); bo->htc->doclose = SC_RX_BODY; vbf_cleanup(bo); return (F_STP_ERROR); } if (vbf_beresp2obj(bo)) { bo->htc->doclose = SC_RX_BODY; vbf_cleanup(bo); return (F_STP_ERROR); } #define OBJ_FLAG(U, l, v) \ if (bo->vfc->obj_flags & OF_##U) \ ObjSetFlag(bo->wrk, oc, OF_##U, 1); #include "tbl/obj_attr.h" if (!(oc->flags & OC_F_HFM) && http_IsStatus(bo->beresp, 200) && ( RFC2616_Strong_LM(bo->beresp, NULL, NULL) != NULL || http_GetHdr(bo->beresp, H_ETag, NULL))) ObjSetFlag(bo->wrk, oc, OF_IMSCAND, 1); assert(oc->boc->refcount >= 1); assert(oc->boc->state == BOS_REQ_DONE); if (bo->do_stream) { ObjSetState(wrk, oc, BOS_PREP_STREAM); HSH_Unbusy(wrk, oc); ObjSetState(wrk, oc, BOS_STREAM); } VSLb(bo->vsl, SLT_Fetch_Body, "%u %s %s", bo->htc->body_status->nbr, bo->htc->body_status->name, bo->do_stream ? 
"stream" : "-"); if (bo->htc->body_status != BS_NONE) { assert(bo->htc->body_status != BS_ERROR); return (F_STP_FETCHBODY); } AZ(bo->vfc->failed); return (F_STP_FETCHEND); } static const struct fetch_step * v_matchproto_(vbf_state_f) vbf_stp_fetchend(struct worker *wrk, struct busyobj *bo) { struct objcore *oc; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); oc = bo->fetch_objcore; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AZ(bo->vfc->failed); /* Recycle the backend connection before setting BOS_FINISHED to give predictable backend reuse behavior for varnishtest */ vbf_cleanup(bo); AZ(ObjSetU64(wrk, oc, OA_LEN, oc->boc->fetched_so_far)); if (bo->do_stream) assert(oc->boc->state == BOS_STREAM); else { assert(oc->boc->state == BOS_REQ_DONE); ObjSetState(wrk, oc, BOS_PREP_STREAM); HSH_Unbusy(wrk, oc); } ObjSetState(wrk, oc, BOS_FINISHED); VSLb_ts_busyobj(bo, "BerespBody", W_TIM_real(wrk)); if (bo->stale_oc != NULL) { VSL(SLT_ExpKill, NO_VXID, "VBF_Superseded x=%ju n=%ju", VXID(ObjGetXID(wrk, bo->stale_oc)), VXID(ObjGetXID(wrk, bo->fetch_objcore))); HSH_Replace(bo->stale_oc, bo->fetch_objcore); } return (F_STP_DONE); } /*-------------------------------------------------------------------- */ struct vbf_objiter_priv { unsigned magic; #define VBF_OBITER_PRIV_MAGIC 0x3c272a17 struct busyobj *bo; // not yet allocated ssize_t l; // current allocation uint8_t *p; ssize_t pl; }; static int v_matchproto_(objiterate_f) vbf_objiterate(void *priv, unsigned flush, const void *ptr, ssize_t len) { struct vbf_objiter_priv *vop; ssize_t l; const uint8_t *ps = ptr; CAST_OBJ_NOTNULL(vop, priv, VBF_OBITER_PRIV_MAGIC); CHECK_OBJ_NOTNULL(vop->bo, BUSYOBJ_MAGIC); flush &= OBJ_ITER_END; while (len > 0) { if (vop->pl == 0) { vop->p = NULL; AN(vop->l); vop->pl = vop->l; if (VFP_GetStorage(vop->bo->vfc, &vop->pl, &vop->p) != VFP_OK) return (1); if (vop->pl < vop->l) vop->l -= vop->pl; else vop->l = 0; } AN(vop->pl); AN(vop->p); l = vmin(vop->pl, len); memcpy(vop->p, ps, l); VFP_Extend(vop->bo->vfc, l, flush && l == len ? VFP_END : VFP_OK); ps += l; vop->p += l; len -= l; vop->pl -= l; } if (flush) AZ(vop->l); return (0); } static const struct fetch_step * v_matchproto_(vbf_state_f) vbf_stp_condfetch(struct worker *wrk, struct busyobj *bo) { struct boc *stale_boc; enum boc_state_e stale_state; struct objcore *oc, *stale_oc; struct vbf_objiter_priv vop[1]; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); oc = bo->fetch_objcore; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); stale_oc = bo->stale_oc; CHECK_OBJ_NOTNULL(stale_oc, OBJCORE_MAGIC); stale_boc = HSH_RefBoc(stale_oc); CHECK_OBJ_ORNULL(stale_boc, BOC_MAGIC); if (stale_boc) { /* Wait for the stale object to become fully fetched, so * that we can catch fetch errors, before we unbusy the * new object. This serves two purposes. First it helps * with request coalesching, and stops long chains of * IMS-updated short-TTL objects all streaming from a * single slow body fetch. Second it makes sure that all * the object attributes are complete when we copy them * (this would be an issue for ie OA_GZIPBITS). 
*/ VSLb(bo->vsl, SLT_Notice, "vsl: Conditional fetch wait for streaming object"); ObjWaitState(stale_oc, BOS_FINISHED); stale_state = stale_boc->state; HSH_DerefBoc(bo->wrk, stale_oc); stale_boc = NULL; if (stale_state != BOS_FINISHED) { assert(stale_state == BOS_FAILED); AN(stale_oc->flags & OC_F_FAILED); } } AZ(stale_boc); if (stale_oc->flags & OC_F_FAILED) { (void)VFP_Error(bo->vfc, "Template object failed"); vbf_cleanup(bo); wrk->stats->fetch_failed++; return (F_STP_FAIL); } if (vbf_beresp2obj(bo)) { vbf_cleanup(bo); wrk->stats->fetch_failed++; return (F_STP_FAIL); } if (ObjHasAttr(bo->wrk, stale_oc, OA_ESIDATA)) AZ(ObjCopyAttr(bo->wrk, oc, stale_oc, OA_ESIDATA)); AZ(ObjCopyAttr(bo->wrk, oc, stale_oc, OA_FLAGS)); if (oc->flags & OC_F_HFM) ObjSetFlag(bo->wrk, oc, OF_IMSCAND, 0); AZ(ObjCopyAttr(bo->wrk, oc, stale_oc, OA_GZIPBITS)); if (bo->do_stream) { ObjSetState(wrk, oc, BOS_PREP_STREAM); HSH_Unbusy(wrk, oc); ObjSetState(wrk, oc, BOS_STREAM); } INIT_OBJ(vop, VBF_OBITER_PRIV_MAGIC); vop->bo = bo; vop->l = ObjGetLen(bo->wrk, stale_oc); if (ObjIterate(wrk, stale_oc, vop, vbf_objiterate, 0)) (void)VFP_Error(bo->vfc, "Template object failed"); if (bo->vfc->failed) { vbf_cleanup(bo); wrk->stats->fetch_failed++; return (F_STP_FAIL); } return (F_STP_FETCHEND); } /*-------------------------------------------------------------------- * Create synth object * * replaces a stale object unless * - abandoning the bereq or * - leaving vcl_backend_error with return (deliver) and beresp.ttl == 0s or * - there is a waitinglist on this object because in this case the default ttl * would be 1s, so we might be looking at the same case as the previous * * We do want the stale replacement to avoid an object pileup with short ttl and * long grace/keep, yet there could exist cases where a cache object is * deliberately created to momentarily override a stale object. * * If this case exists, we should add a vcl veto (e.g. beresp.replace_stale with * default true) */ static const struct fetch_step * v_matchproto_(vbf_state_f) vbf_stp_error(struct worker *wrk, struct busyobj *bo) { ssize_t l, ll, o; vtim_real now; uint8_t *ptr; struct vsb *synth_body; struct objcore *stale, *oc; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); oc = bo->fetch_objcore; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AN(oc->flags & OC_F_BUSY); assert(bo->director_state == DIR_S_NULL); if (wrk->vpi->handling != VCL_RET_ERROR) wrk->stats->fetch_failed++; now = W_TIM_real(wrk); VSLb_ts_busyobj(bo, "Error", now); if (oc->stobj->stevedore != NULL) { oc->boc->fetched_so_far = 0; ObjFreeObj(bo->wrk, oc); } if (bo->storage == NULL) bo->storage = STV_next(); // XXX: reset all beresp flags ? HTTP_Setup(bo->beresp, bo->ws, bo->vsl, SLT_BerespMethod); if (bo->err_code > 0) http_PutResponse(bo->beresp, "HTTP/1.1", bo->err_code, bo->err_reason); else http_PutResponse(bo->beresp, "HTTP/1.1", 503, "Backend fetch failed"); http_TimeHeader(bo->beresp, "Date: ", now); http_SetHeader(bo->beresp, "Server: Varnish"); stale = bo->stale_oc; oc->t_origin = now; if (!VTAILQ_EMPTY(&oc->objhead->waitinglist)) { /* * If there is a waitinglist, it means that there is no * grace-able object, so cache the error return for a * short time, so the waiting list can drain, rather than * each objcore on the waiting list sequentially attempt * to fetch from the backend. 
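 *
 * Hence the short lifetime set just below (ttl 1s, grace 5s, keep 5s),
 * and the stale object is deliberately not replaced in this case.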
*/ oc->ttl = 1; oc->grace = 5; oc->keep = 5; stale = NULL; } else { oc->ttl = 0; oc->grace = 0; oc->keep = 0; } synth_body = VSB_new_auto(); AN(synth_body); VCL_backend_error_method(bo->vcl, wrk, NULL, bo, synth_body); AZ(VSB_finish(synth_body)); if (wrk->vpi->handling == VCL_RET_ABANDON || wrk->vpi->handling == VCL_RET_FAIL) { VSB_destroy(&synth_body); return (F_STP_FAIL); } if (wrk->vpi->handling == VCL_RET_RETRY) { VSB_destroy(&synth_body); if (bo->retries++ < cache_param->max_retries) return (F_STP_RETRY); VSLb(bo->vsl, SLT_VCL_Error, "Too many retries, failing"); return (F_STP_FAIL); } assert(wrk->vpi->handling == VCL_RET_DELIVER); assert(bo->vfc->wrk == bo->wrk); assert(bo->vfc->oc == oc); assert(bo->vfc->resp == bo->beresp); assert(bo->vfc->req == bo->bereq); if (vbf_beresp2obj(bo)) { VSB_destroy(&synth_body); return (F_STP_FAIL); } oc->boc->transit_buffer = 0; ll = VSB_len(synth_body); o = 0; while (ll > 0) { l = ll; if (VFP_GetStorage(bo->vfc, &l, &ptr) != VFP_OK) break; l = vmin(l, ll); memcpy(ptr, VSB_data(synth_body) + o, l); VFP_Extend(bo->vfc, l, l == ll ? VFP_END : VFP_OK); ll -= l; o += l; } AZ(ObjSetU64(wrk, oc, OA_LEN, o)); VSB_destroy(&synth_body); ObjSetState(wrk, oc, BOS_PREP_STREAM); HSH_Unbusy(wrk, oc); if (stale != NULL && oc->ttl > 0) HSH_Kill(stale); ObjSetState(wrk, oc, BOS_FINISHED); return (F_STP_DONE); } /*-------------------------------------------------------------------- */ static const struct fetch_step * v_matchproto_(vbf_state_f) vbf_stp_fail(struct worker *wrk, struct busyobj *bo) { struct objcore *oc; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); oc = bo->fetch_objcore; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); assert(oc->boc->state < BOS_FINISHED); HSH_Fail(oc); if (!(oc->flags & OC_F_BUSY)) HSH_Kill(oc); ObjSetState(wrk, oc, BOS_FAILED); return (F_STP_DONE); } /*-------------------------------------------------------------------- */ static const struct fetch_step * v_matchproto_(vbf_state_f) vbf_stp_done(struct worker *wrk, struct busyobj *bo) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); WRONG("Just plain wrong"); NEEDLESS(return (F_STP_DONE)); } static void v_matchproto_(task_func_t) vbf_fetch_thread(struct worker *wrk, void *priv) { struct vrt_ctx ctx[1]; struct busyobj *bo; struct objcore *oc; const struct fetch_step *stp; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CAST_OBJ_NOTNULL(bo, priv, BUSYOBJ_MAGIC); CHECK_OBJ_NOTNULL(bo->req, REQ_MAGIC); oc = bo->fetch_objcore; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); THR_SetBusyobj(bo); stp = F_STP_MKBEREQ; assert(isnan(bo->t_first)); assert(isnan(bo->t_prev)); VSLb_ts_busyobj(bo, "Start", W_TIM_real(wrk)); bo->wrk = wrk; wrk->vsl = bo->vsl; #if 0 if (bo->stale_oc != NULL) { CHECK_OBJ_NOTNULL(bo->stale_oc, OBJCORE_MAGIC); /* We don't want the oc/stevedore ops in fetching thread */ if (!ObjCheckFlag(wrk, bo->stale_oc, OF_IMSCAND)) (void)HSH_DerefObjCore(wrk, &bo->stale_oc, 0); } #endif VCL_TaskEnter(bo->privs); while (stp != F_STP_DONE) { CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); assert(oc->boc->refcount >= 1); if (oc->boc->state < BOS_REQ_DONE) AN(bo->req); else AZ(bo->req); AN(stp); AN(stp->name); AN(stp->func); stp = stp->func(wrk, bo); } assert(bo->director_state == DIR_S_NULL); INIT_OBJ(ctx, VRT_CTX_MAGIC); VCL_Bo2Ctx(ctx, bo); VCL_TaskLeave(ctx, bo->privs); http_Teardown(bo->bereq); http_Teardown(bo->beresp); // can not make assumptions about the number of references here #3434 if (bo->bereq_body != NULL) (void) HSH_DerefObjCore(bo->wrk, &bo->bereq_body, 
0); if (oc->boc->state == BOS_FINISHED) { AZ(oc->flags & OC_F_FAILED); VSLb(bo->vsl, SLT_Length, "%ju", (uintmax_t)ObjGetLen(bo->wrk, oc)); } // AZ(oc->boc); // XXX if (bo->stale_oc != NULL) (void)HSH_DerefObjCore(wrk, &bo->stale_oc, 0); wrk->vsl = NULL; HSH_DerefBoc(wrk, oc); SES_Rel(bo->sp); VBO_ReleaseBusyObj(wrk, &bo); THR_SetBusyobj(NULL); } /*-------------------------------------------------------------------- */ void VBF_Fetch(struct worker *wrk, struct req *req, struct objcore *oc, struct objcore *oldoc, enum vbf_fetch_mode_e mode) { struct boc *boc; struct busyobj *bo; enum task_prio prio; const char *how; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AN(oc->flags & OC_F_BUSY); CHECK_OBJ_ORNULL(oldoc, OBJCORE_MAGIC); bo = VBO_GetBusyObj(wrk, req); CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); AN(bo->vcl); boc = HSH_RefBoc(oc); CHECK_OBJ_NOTNULL(boc, BOC_MAGIC); switch (mode) { case VBF_PASS: prio = TASK_QUEUE_BO; how = "pass"; bo->uncacheable = 1; break; case VBF_NORMAL: prio = TASK_QUEUE_BO; how = "fetch"; break; case VBF_BACKGROUND: prio = TASK_QUEUE_BG; how = "bgfetch"; bo->is_bgfetch = 1; break; default: WRONG("Wrong fetch mode"); } #define REQ_BEREQ_FLAG(l, r, w, d) bo->l = req->l; #include "tbl/req_bereq_flags.h" VSLb(bo->vsl, SLT_Begin, "bereq %ju %s", VXID(req->vsl->wid), how); VSLbs(bo->vsl, SLT_VCL_use, TOSTRAND(VCL_Name(bo->vcl))); VSLb(req->vsl, SLT_Link, "bereq %ju %s", VXID(bo->vsl->wid), how); THR_SetBusyobj(bo); bo->sp = req->sp; SES_Ref(bo->sp); oc->boc->vary = req->vary_b; req->vary_b = NULL; HSH_Ref(oc); AZ(bo->fetch_objcore); bo->fetch_objcore = oc; AZ(bo->stale_oc); if (oldoc != NULL) { assert(oldoc->refcnt > 0); HSH_Ref(oldoc); bo->stale_oc = oldoc; } AZ(bo->req); bo->req = req; bo->fetch_task->priv = bo; bo->fetch_task->func = vbf_fetch_thread; if (Pool_Task(wrk->pool, bo->fetch_task, prio)) { wrk->stats->bgfetch_no_thread++; VSLb(bo->vsl, SLT_FetchError, "No thread available for bgfetch"); (void)vbf_stp_fail(req->wrk, bo); if (bo->stale_oc != NULL) (void)HSH_DerefObjCore(wrk, &bo->stale_oc, 0); HSH_DerefBoc(wrk, oc); SES_Rel(bo->sp); THR_SetBusyobj(NULL); VBO_ReleaseBusyObj(wrk, &bo); } else { THR_SetBusyobj(NULL); bo = NULL; /* ref transferred to fetch thread */ if (mode == VBF_BACKGROUND) { ObjWaitState(oc, BOS_REQ_DONE); (void)VRB_Ignore(req); } else { ObjWaitState(oc, BOS_STREAM); if (oc->boc->state == BOS_FAILED) { AN((oc->flags & OC_F_FAILED)); } else { AZ(oc->flags & OC_F_BUSY); } } } AZ(bo); VSLb_ts_req(req, "Fetch", W_TIM_real(wrk)); assert(oc->boc == boc); HSH_DerefBoc(wrk, oc); if (mode == VBF_BACKGROUND) (void)HSH_DerefObjCore(wrk, &oc, HSH_RUSH_POLICY); } varnish-7.5.0/bin/varnishd/cache/cache_fetch_proc.c000066400000000000000000000144221457605730600222510ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

#include "config.h"

#include <stdlib.h>

#include "cache_varnishd.h"
#include "cache_filter.h"
#include "vcli_serve.h"

static unsigned fetchfrag;

/*--------------------------------------------------------------------
 * We want to issue the first error we encounter on fetching and
 * suppress the rest. This function does that.
 *
 * Other code is allowed to look at busyobj->fetch_failed to bail out
 *
 * For convenience, always return VFP_ERROR
 */

enum vfp_status
VFP_Error(struct vfp_ctx *vc, const char *fmt, ...)
{
	va_list ap;

	CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC);
	if (!vc->failed) {
		va_start(ap, fmt);
		VSLbv(vc->wrk->vsl, SLT_FetchError, fmt, ap);
		va_end(ap);
		vc->failed = 1;
	}
	return (VFP_ERROR);
}

/*--------------------------------------------------------------------
 * Fetch Storage to put object into.
 *
 */

enum vfp_status
VFP_GetStorage(struct vfp_ctx *vc, ssize_t *sz, uint8_t **ptr)
{

	CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC);
	AN(sz);
	assert(*sz >= 0);
	AN(ptr);
	if (fetchfrag > 0)
		*sz = fetchfrag;
	if (!ObjGetSpace(vc->wrk, vc->oc, sz, ptr)) {
		*sz = 0;
		*ptr = NULL;
		return (VFP_Error(vc, "Could not get storage"));
	}
	assert(*sz > 0);
	AN(*ptr);
	return (VFP_OK);
}

void
VFP_Extend(const struct vfp_ctx *vc, ssize_t sz, enum vfp_status flg)
{
	CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC);

	ObjExtend(vc->wrk, vc->oc, sz, flg == VFP_END ?
	    1 : 0);
}

/**********************************************************************
 */

void
VFP_Setup(struct vfp_ctx *vc, struct worker *wrk)
{
	INIT_OBJ(vc, VFP_CTX_MAGIC);
	VTAILQ_INIT(&vc->vfp);
	vc->wrk = wrk;
}

/**********************************************************************
 * Returns the number of bytes processed by the lowest VFP in the stack
 */

uint64_t
VFP_Close(struct vfp_ctx *vc)
{
	struct vfp_entry *vfe, *tmp;
	uint64_t rv = 0;

	VTAILQ_FOREACH_SAFE(vfe, &vc->vfp, list, tmp) {
		if (vfe->vfp->fini != NULL)
			vfe->vfp->fini(vc, vfe);
		rv = vfe->bytes_out;
		VSLb(vc->wrk->vsl, SLT_VfpAcct, "%s %ju %ju", vfe->vfp->name,
		    (uintmax_t)vfe->calls, (uintmax_t)rv);
		VTAILQ_REMOVE(&vc->vfp, vfe, list);
	}
	return (rv);
}

int
VFP_Open(VRT_CTX, struct vfp_ctx *vc)
{
	struct vfp_entry *vfe;

	CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC);
	CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC);
	CHECK_OBJ_NOTNULL(vc->resp, HTTP_MAGIC);
	CHECK_OBJ_NOTNULL(vc->wrk, WORKER_MAGIC);
	AN(vc->wrk->vsl);

	VTAILQ_FOREACH_REVERSE(vfe, &vc->vfp, vfp_entry_s, list) {
		if (vfe->vfp->init == NULL)
			continue;
		if (DO_DEBUG(DBG_PROCESSORS))
			VSLb(vc->wrk->vsl, SLT_Debug, "VFP_Open(%s)",
			    vfe->vfp->name);
		vfe->closed = vfe->vfp->init(ctx, vc, vfe);
		if (vfe->closed != VFP_OK && vfe->closed != VFP_NULL) {
			(void)VFP_Error(vc, "Fetch filter %s failed to open",
			    vfe->vfp->name);
			(void)VFP_Close(vc);
			return (-1);
		}
	}
	return (0);
}

/**********************************************************************
 * Suck data up from lower levels.
 * Once a layer returns non-VFP_OK, clean it up and produce the same
 * return value for any subsequent calls.
 */

enum vfp_status
VFP_Suck(struct vfp_ctx *vc, void *p, ssize_t *lp)
{
	enum vfp_status vp;
	struct vfp_entry *vfe;

	CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC);
	AN(p);
	AN(lp);
	vfe = vc->vfp_nxt;
	CHECK_OBJ_NOTNULL(vfe, VFP_ENTRY_MAGIC);
	vc->vfp_nxt = VTAILQ_NEXT(vfe, list);

	if (vfe->closed == VFP_NULL) {
		/* Layer asked to be bypassed when opened */
		vp = VFP_Suck(vc, p, lp);
	} else if (vfe->closed == VFP_OK) {
		vp = vfe->vfp->pull(vc, vfe, p, lp);
		if (vp != VFP_OK && vp != VFP_END && vp != VFP_ERROR)
			vp = VFP_Error(vc, "Fetch filter %s returned %d",
			    vfe->vfp->name, vp);
		else
			vfe->bytes_out += *lp;
		vfe->closed = vp;
		vfe->calls++;
	} else {
		/* Already closed filter */
		*lp = 0;
		vp = vfe->closed;
	}
	vc->vfp_nxt = vfe;
	assert(vp != VFP_NULL);
	return (vp);
}

/*--------------------------------------------------------------------
 */

struct vfp_entry *
VFP_Push(struct vfp_ctx *vc, const struct vfp *vfp)
{
	struct vfp_entry *vfe;

	CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC);
	CHECK_OBJ_NOTNULL(vc->resp, HTTP_MAGIC);

	vfe = WS_Alloc(vc->resp->ws, sizeof *vfe);
	if (vfe == NULL) {
		(void)VFP_Error(vc, "Workspace overflow");
		return (NULL);
	}

	INIT_OBJ(vfe, VFP_ENTRY_MAGIC);
	vfe->vfp = vfp;
	vfe->closed = VFP_OK;
	VTAILQ_INSERT_HEAD(&vc->vfp, vfe, list);
	vc->vfp_nxt = vfe;
	return (vfe);
}

/*--------------------------------------------------------------------
 * Debugging aids
 */

static void v_matchproto_(cli_func_t)
debug_fragfetch(struct cli *cli, const char * const *av, void *priv)
{
	(void)priv;
	(void)cli;
	fetchfrag = strtoul(av[2], NULL, 0);
}

static struct cli_proto debug_cmds[] = {
	{ CLICMD_DEBUG_FRAGFETCH,	"d", debug_fragfetch },
	{ NULL }
};

/*--------------------------------------------------------------------
 *
 */

void
VFP_Init(void)
{
	CLI_AddFuncs(debug_cmds);
}
varnish-7.5.0/bin/varnishd/cache/cache_filter.h
/*-
 * Copyright (c) 2013-2015 Varnish Software AS
 * All rights reserved.
* * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ struct req; struct vfp_entry; struct vfp_ctx; struct vdp_ctx; /* Fetch processors --------------------------------------------------*/ enum vfp_status { VFP_ERROR = -1, VFP_OK = 0, VFP_END = 1, VFP_NULL = 2, // signal bypass, never returned by VFP_Suck() }; typedef enum vfp_status vfp_init_f(VRT_CTX, struct vfp_ctx *, struct vfp_entry *); typedef enum vfp_status vfp_pull_f(struct vfp_ctx *, struct vfp_entry *, void *ptr, ssize_t *len); typedef void vfp_fini_f(struct vfp_ctx *, struct vfp_entry *); struct vfp { const char *name; vfp_init_f *init; vfp_pull_f *pull; vfp_fini_f *fini; const void *priv1; }; struct vfp_entry { unsigned magic; #define VFP_ENTRY_MAGIC 0xbe32a027 enum vfp_status closed; const struct vfp *vfp; void *priv1; // XXX ambiguous with priv1 in struct vfp ssize_t priv2; VTAILQ_ENTRY(vfp_entry) list; uint64_t calls; uint64_t bytes_out; }; /*-------------------------------------------------------------------- * VFP filter state */ VTAILQ_HEAD(vfp_entry_s, vfp_entry); struct vfp_ctx { unsigned magic; #define VFP_CTX_MAGIC 0x61d9d3e5 int failed; struct http *req; struct http *resp; struct worker *wrk; struct objcore *oc; struct vfp_entry_s vfp; struct vfp_entry *vfp_nxt; unsigned obj_flags; }; enum vfp_status VFP_Suck(struct vfp_ctx *, void *p, ssize_t *lp); enum vfp_status VFP_Error(struct vfp_ctx *, const char *fmt, ...) 
v_printflike_(2, 3); void v_deprecated_ VRT_AddVFP(VRT_CTX, const struct vfp *); void v_deprecated_ VRT_RemoveVFP(VRT_CTX, const struct vfp *); /* Deliver processors ------------------------------------------------*/ enum vdp_action { VDP_NULL, /* Input buffer valid after call */ VDP_FLUSH, /* Input buffer will be invalidated */ VDP_END, /* Last buffer or after, implies VDP_FLUSH */ }; typedef int vdp_init_f(VRT_CTX, struct vdp_ctx *, void **priv, struct objcore *); /* * Return value: * negative: Error - abandon delivery * zero: OK * positive: Don't push this VDP anyway */ typedef int vdp_fini_f(struct vdp_ctx *, void **priv); typedef int vdp_bytes_f(struct vdp_ctx *, enum vdp_action, void **priv, const void *ptr, ssize_t len); struct vdp { const char *name; vdp_init_f *init; vdp_bytes_f *bytes; vdp_fini_f *fini; const void *priv1; }; struct vdp_entry { unsigned magic; #define VDP_ENTRY_MAGIC 0x353eb781 enum vdp_action end; // VDP_NULL or VDP_END const struct vdp *vdp; void *priv; VTAILQ_ENTRY(vdp_entry) list; uint64_t calls; uint64_t bytes_in; }; VTAILQ_HEAD(vdp_entry_s, vdp_entry); struct vdp_ctx { unsigned magic; #define VDP_CTX_MAGIC 0xee501df7 int retval; uint64_t bytes_done; struct vdp_entry_s vdp; struct vdp_entry *nxt; struct worker *wrk; struct vsl_log *vsl; struct req *req; }; int VDP_bytes(struct vdp_ctx *, enum vdp_action act, const void *, ssize_t); void v_deprecated_ VRT_AddVDP(VRT_CTX, const struct vdp *); void v_deprecated_ VRT_RemoveVDP(VRT_CTX, const struct vdp *); /* Registry functions -------------------------------------------------*/ const char *VRT_AddFilter(VRT_CTX, const struct vfp *, const struct vdp *); void VRT_RemoveFilter(VRT_CTX, const struct vfp *, const struct vdp *); varnish-7.5.0/bin/varnishd/cache/cache_gzip.c000066400000000000000000000407671457605730600211210ustar00rootroot00000000000000/*- * Copyright (c) 2013-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Interaction with the linvgz (zlib) library. * * The zlib library pollutes namespace a LOT when you include the "vgz.h" * (aka (zlib.h") file so we contain the damage by vectoring all access * to libz through this source file. 
* * The API defined by this file, will also insulate the rest of the code, * should we find a better gzip library at a later date. * */ #include "config.h" #include #include "cache_varnishd.h" #include "cache_filter.h" #include "cache_objhead.h" #include "cache_vgz.h" #include "vend.h" #include "vgz.h" struct vgz { unsigned magic; #define VGZ_MAGIC 0x162df0cb enum {VGZ_GZ,VGZ_UN} dir; struct vsl_log *vsl; const char *id; int last_i; enum vgz_flag flag; char *m_buf; ssize_t m_sz; ssize_t m_len; intmax_t bits; z_stream vz; }; static const char * vgz_msg(const struct vgz *vg) { CHECK_OBJ_NOTNULL(vg, VGZ_MAGIC); return (vg->vz.msg ? vg->vz.msg : "(null)"); } /*-------------------------------------------------------------------- * Set up a gunzip instance */ static struct vgz * vgz_gunzip(struct vsl_log *vsl, const char *id) { struct vgz *vg; ALLOC_OBJ(vg, VGZ_MAGIC); AN(vg); vg->vsl = vsl; vg->id = id; vg->dir = VGZ_UN; /* * Max memory usage according to zonf.h: * mem_needed = "a few kb" + (1 << (windowBits)) * Since we don't control windowBits, we have to assume * it is 15, so 34-35KB or so. */ assert(Z_OK == inflateInit2(&vg->vz, 31)); return (vg); } static struct vgz * VGZ_NewGunzip(struct vsl_log *vsl, const char *id) { VSC_C_main->n_gunzip++; return (vgz_gunzip(vsl, id)); } static struct vgz * VGZ_NewTestGunzip(struct vsl_log *vsl, const char *id) { VSC_C_main->n_test_gunzip++; return (vgz_gunzip(vsl, id)); } struct vgz * VGZ_NewGzip(struct vsl_log *vsl, const char *id) { struct vgz *vg; int i; VSC_C_main->n_gzip++; ALLOC_OBJ(vg, VGZ_MAGIC); AN(vg); vg->vsl = vsl; vg->id = id; vg->dir = VGZ_GZ; /* * From zconf.h: * * mem_needed = "a few kb" * + (1 << (windowBits+2)) * + (1 << (memLevel+9)) * * windowBits [8..15] (-> 1K..128K) * memLevel [1..9] (-> 1K->256K) */ i = deflateInit2(&vg->vz, cache_param->gzip_level, /* Level */ Z_DEFLATED, /* Method */ 16 + 15, /* Window bits (16=gzip) */ cache_param->gzip_memlevel, /* memLevel */ Z_DEFAULT_STRATEGY); assert(Z_OK == i); return (vg); } /*-------------------------------------------------------------------- */ static int vgz_getmbuf(struct vgz *vg) { CHECK_OBJ_NOTNULL(vg, VGZ_MAGIC); AZ(vg->m_sz); AZ(vg->m_len); AZ(vg->m_buf); vg->m_sz = cache_param->gzip_buffer; vg->m_buf = malloc(vg->m_sz); if (vg->m_buf == NULL) { vg->m_sz = 0; return (-1); } return (0); } /*--------------------------------------------------------------------*/ void VGZ_Ibuf(struct vgz *vg, const void *ptr, ssize_t len) { CHECK_OBJ_NOTNULL(vg, VGZ_MAGIC); AZ(vg->vz.avail_in); vg->vz.next_in = TRUST_ME(ptr); vg->vz.avail_in = len; } int VGZ_IbufEmpty(const struct vgz *vg) { CHECK_OBJ_NOTNULL(vg, VGZ_MAGIC); return (vg->vz.avail_in == 0); } /*--------------------------------------------------------------------*/ void VGZ_Obuf(struct vgz *vg, void *ptr, ssize_t len) { CHECK_OBJ_NOTNULL(vg, VGZ_MAGIC); vg->vz.next_out = TRUST_ME(ptr); vg->vz.avail_out = len; } int VGZ_ObufFull(const struct vgz *vg) { CHECK_OBJ_NOTNULL(vg, VGZ_MAGIC); return (vg->vz.avail_out == 0); } /*--------------------------------------------------------------------*/ static enum vgzret_e VGZ_Gunzip(struct vgz *vg, const void **pptr, ssize_t *plen) { int i; ssize_t l; const uint8_t *before; CHECK_OBJ_NOTNULL(vg, VGZ_MAGIC); *pptr = NULL; *plen = 0; AN(vg->vz.next_out); AN(vg->vz.avail_out); before = vg->vz.next_out; i = inflate(&vg->vz, 0); if (i == Z_OK || i == Z_STREAM_END) { *pptr = before; l = (const uint8_t *)vg->vz.next_out - before; *plen = l; } vg->last_i = i; if (i == Z_OK) return (VGZ_OK); if (i == 
Z_STREAM_END) return (VGZ_END); if (i == Z_BUF_ERROR) return (VGZ_STUCK); VSLb(vg->vsl, SLT_Gzip, "Gunzip error: %d (%s)", i, vgz_msg(vg)); return (VGZ_ERROR); } /*--------------------------------------------------------------------*/ enum vgzret_e VGZ_Gzip(struct vgz *vg, const void **pptr, ssize_t *plen, enum vgz_flag flags) { int i; int zflg; ssize_t l; const uint8_t *before; CHECK_OBJ_NOTNULL(vg, VGZ_MAGIC); *pptr = NULL; *plen = 0; AN(vg->vz.next_out); AN(vg->vz.avail_out); before = vg->vz.next_out; switch (flags) { case VGZ_NORMAL: zflg = Z_NO_FLUSH; break; case VGZ_ALIGN: zflg = Z_SYNC_FLUSH; break; case VGZ_RESET: zflg = Z_FULL_FLUSH; break; case VGZ_FINISH: zflg = Z_FINISH; break; default: INCOMPL(); } i = deflate(&vg->vz, zflg); if (i == Z_OK || i == Z_STREAM_END) { *pptr = before; l = (const uint8_t *)vg->vz.next_out - before; *plen = l; } vg->last_i = i; if (i == Z_OK) return (VGZ_OK); if (i == Z_STREAM_END) return (VGZ_END); if (i == Z_BUF_ERROR) return (VGZ_STUCK); VSLb(vg->vsl, SLT_Gzip, "Gzip error: %d (%s)", i, vgz_msg(vg)); return (VGZ_ERROR); } /*-------------------------------------------------------------------- * VDP for gunzip'ing */ static int v_matchproto_(vdp_init_f) vdp_gunzip_init(VRT_CTX, struct vdp_ctx *vdc, void **priv, struct objcore *oc) { struct vgz *vg; struct boc *boc; struct req *req; enum boc_state_e bos; const char *p; ssize_t dl; uint64_t u; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); CHECK_OBJ_ORNULL(oc, OBJCORE_MAGIC); req = vdc->req; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); vg = VGZ_NewGunzip(vdc->vsl, "U D -"); AN(vg); if (vgz_getmbuf(vg)) { (void)VGZ_Destroy(&vg); return (-1); } VGZ_Obuf(vg, vg->m_buf, vg->m_sz); *priv = vg; http_Unset(req->resp, H_Content_Encoding); req->resp_len = -1; if (oc == NULL) return (0); boc = HSH_RefBoc(oc); if (boc != NULL) { CHECK_OBJ(boc, BOC_MAGIC); bos = boc->state; HSH_DerefBoc(vdc->wrk, oc); if (bos < BOS_FINISHED) return (0); /* OA_GZIPBITS is not stable yet */ } p = ObjGetAttr(vdc->wrk, oc, OA_GZIPBITS, &dl); if (p != NULL && dl == 32) { u = vbe64dec(p + 24); if (u != 0) req->resp_len = u; } return (0); } static int v_matchproto_(vdp_fini_f) vdp_gunzip_fini(struct vdp_ctx *vdc, void **priv) { struct vgz *vg; (void)vdc; CAST_OBJ_NOTNULL(vg, *priv, VGZ_MAGIC); AN(vg->m_buf); (void)VGZ_Destroy(&vg); *priv = NULL; return (0); } static int v_matchproto_(vdp_bytes_f) vdp_gunzip_bytes(struct vdp_ctx *vdc, enum vdp_action act, void **priv, const void *ptr, ssize_t len) { enum vgzret_e vr; ssize_t dl; const void *dp; struct worker *wrk; struct vgz *vg; CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); wrk = vdc->wrk; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); (void)act; CAST_OBJ_NOTNULL(vg, *priv, VGZ_MAGIC); AN(vg->m_buf); if (len == 0) return (0); VGZ_Ibuf(vg, ptr, len); do { vr = VGZ_Gunzip(vg, &dp, &dl); if (vr == VGZ_END && !VGZ_IbufEmpty(vg)) { VSLb(vg->vsl, SLT_Gzip, "G(un)zip error: %d (%s)", vr, "junk after VGZ_END"); return (-1); } vg->m_len += dl; if (vr < VGZ_OK) return (-1); if (vg->m_len == vg->m_sz || vr != VGZ_OK) { if (VDP_bytes(vdc, vr == VGZ_END ? 
VDP_END : VDP_FLUSH, vg->m_buf, vg->m_len)) return (vdc->retval); vg->m_len = 0; VGZ_Obuf(vg, vg->m_buf, vg->m_sz); } } while (!VGZ_IbufEmpty(vg)); assert(vr == VGZ_STUCK || vr == VGZ_OK || vr == VGZ_END); return (0); } const struct vdp VDP_gunzip = { .name = "gunzip", .init = vdp_gunzip_init, .bytes = vdp_gunzip_bytes, .fini = vdp_gunzip_fini, }; /*--------------------------------------------------------------------*/ void VGZ_UpdateObj(const struct vfp_ctx *vc, struct vgz *vg, enum vgz_ua_e e) { char *p; intmax_t ii; CHECK_OBJ_NOTNULL(vg, VGZ_MAGIC); ii = vg->vz.start_bit + vg->vz.last_bit + vg->vz.stop_bit; if (e == VUA_UPDATE && ii == vg->bits) return; vg->bits = ii; p = ObjSetAttr(vc->wrk, vc->oc, OA_GZIPBITS, 32, NULL); AN(p); vbe64enc(p, vg->vz.start_bit); vbe64enc(p + 8, vg->vz.last_bit); vbe64enc(p + 16, vg->vz.stop_bit); if (e == VUA_END_GZIP) vbe64enc(p + 24, vg->vz.total_in); if (e == VUA_END_GUNZIP) vbe64enc(p + 24, vg->vz.total_out); } /*-------------------------------------------------------------------- */ enum vgzret_e VGZ_Destroy(struct vgz **vgp) { struct vgz *vg; enum vgzret_e vr; int i; TAKE_OBJ_NOTNULL(vg, vgp, VGZ_MAGIC); AN(vg->id); VSLb(vg->vsl, SLT_Gzip, "%s %jd %jd %jd %jd %jd", vg->id, (intmax_t)vg->vz.total_in, (intmax_t)vg->vz.total_out, (intmax_t)vg->vz.start_bit, (intmax_t)vg->vz.last_bit, (intmax_t)vg->vz.stop_bit); if (vg->dir == VGZ_GZ) i = deflateEnd(&vg->vz); else i = inflateEnd(&vg->vz); if (vg->last_i == Z_STREAM_END && i == Z_OK) i = Z_STREAM_END; if (vg->m_buf) free(vg->m_buf); if (i == Z_OK) vr = VGZ_OK; else if (i == Z_STREAM_END) vr = VGZ_END; else if (i == Z_BUF_ERROR) vr = VGZ_STUCK; else { VSLb(vg->vsl, SLT_Gzip, "G(un)zip error: %d (%s)", i, vgz_msg(vg)); vr = VGZ_ERROR; } FREE_OBJ(vg); return (vr); } /*--------------------------------------------------------------------*/ static enum vfp_status v_matchproto_(vfp_init_f) vfp_gzip_init(VRT_CTX, struct vfp_ctx *vc, struct vfp_entry *vfe) { struct vgz *vg; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vfe, VFP_ENTRY_MAGIC); /* * G(un)zip makes no sence on partial responses, but since * it is an pure 1:1 transform, we can just ignore it. 
*/ if (http_GetStatus(vc->resp) == 206) return (VFP_NULL); if (vfe->vfp == &VFP_gzip) { if (http_GetHdr(vc->resp, H_Content_Encoding, NULL)) return (VFP_NULL); vg = VGZ_NewGzip(vc->wrk->vsl, vfe->vfp->priv1); vc->obj_flags |= OF_GZIPED | OF_CHGCE; } else { if (!http_HdrIs(vc->resp, H_Content_Encoding, "gzip")) return (VFP_NULL); if (vfe->vfp == &VFP_gunzip) { vg = VGZ_NewGunzip(vc->wrk->vsl, vfe->vfp->priv1); vc->obj_flags &= ~OF_GZIPED; vc->obj_flags |= OF_CHGCE; } else { vg = VGZ_NewTestGunzip(vc->wrk->vsl, vfe->vfp->priv1); vc->obj_flags |= OF_GZIPED; } } AN(vg); vfe->priv1 = vg; if (vgz_getmbuf(vg)) return (VFP_ERROR); VGZ_Ibuf(vg, vg->m_buf, 0); AZ(vg->m_len); if (vfe->vfp == &VFP_gunzip || vfe->vfp == &VFP_gzip) { http_Unset(vc->resp, H_Content_Encoding); http_Unset(vc->resp, H_Content_Length); RFC2616_Weaken_Etag(vc->resp); } if (vfe->vfp == &VFP_gzip) http_SetHeader(vc->resp, "Content-Encoding: gzip"); if (vfe->vfp == &VFP_gzip || vfe->vfp == &VFP_testgunzip) RFC2616_Vary_AE(vc->resp); return (VFP_OK); } /*-------------------------------------------------------------------- * VFP_GUNZIP * * A VFP for gunzip'ing an object as we receive it from the backend */ static enum vfp_status v_matchproto_(vfp_pull_f) vfp_gunzip_pull(struct vfp_ctx *vc, struct vfp_entry *vfe, void *p, ssize_t *lp) { ssize_t l; struct vgz *vg; enum vgzret_e vr = VGZ_ERROR; const void *dp; ssize_t dl; enum vfp_status vp = VFP_OK; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vfe, VFP_ENTRY_MAGIC); CAST_OBJ_NOTNULL(vg, vfe->priv1, VGZ_MAGIC); AN(p); AN(lp); l = *lp; *lp = 0; VGZ_Obuf(vg, p, l); do { if (VGZ_IbufEmpty(vg)) { l = vg->m_sz; vp = VFP_Suck(vc, vg->m_buf, &l); if (vp == VFP_ERROR) return (vp); VGZ_Ibuf(vg, vg->m_buf, l); } if (!VGZ_IbufEmpty(vg) || vp == VFP_END) { vr = VGZ_Gunzip(vg, &dp, &dl); if (vr == VGZ_END && !VGZ_IbufEmpty(vg)) return (VFP_Error(vc, "Junk after gzip data")); if (vr < VGZ_OK) return (VFP_Error(vc, "Invalid Gzip data: %s", vgz_msg(vg))); if (dl > 0) { *lp = dl; assert(dp == p); return (VFP_OK); } } AN(VGZ_IbufEmpty(vg)); } while (vp == VFP_OK); if (vr != VGZ_END) return (VFP_Error(vc, "Gunzip error at the very end")); return (vp); } /*-------------------------------------------------------------------- * VFP_GZIP * * A VFP for gzip'ing an object as we receive it from the backend */ static enum vfp_status v_matchproto_(vfp_pull_f) vfp_gzip_pull(struct vfp_ctx *vc, struct vfp_entry *vfe, void *p, ssize_t *lp) { ssize_t l; struct vgz *vg; enum vgzret_e vr = VGZ_ERROR; const void *dp; ssize_t dl; enum vfp_status vp = VFP_ERROR; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vfe, VFP_ENTRY_MAGIC); CAST_OBJ_NOTNULL(vg, vfe->priv1, VGZ_MAGIC); AN(p); AN(lp); l = *lp; *lp = 0; VGZ_Obuf(vg, p, l); do { if (VGZ_IbufEmpty(vg)) { l = vg->m_sz; vp = VFP_Suck(vc, vg->m_buf, &l); if (vp == VFP_ERROR) break; if (vp == VFP_END) vg->flag = VGZ_FINISH; VGZ_Ibuf(vg, vg->m_buf, l); } if (!VGZ_IbufEmpty(vg) || vg->flag == VGZ_FINISH) { vr = VGZ_Gzip(vg, &dp, &dl, vg->flag); if (vr < VGZ_OK) return (VFP_Error(vc, "Gzip failed")); if (dl > 0) { VGZ_UpdateObj(vc, vg, VUA_UPDATE); *lp = dl; assert(dp == p); if (vr != VGZ_END || !VGZ_IbufEmpty(vg)) return (VFP_OK); } } AN(VGZ_IbufEmpty(vg)); } while (vg->flag != VGZ_FINISH); if (vr != VGZ_END) return (VFP_Error(vc, "Gzip failed")); VGZ_UpdateObj(vc, vg, VUA_END_GZIP); return (VFP_END); } /*-------------------------------------------------------------------- * VFP_TESTGZIP * * A VFP for testing that received gzip data is valid, and for * 
collecting the magic bits while we're at it. */ static enum vfp_status v_matchproto_(vfp_pull_f) vfp_testgunzip_pull(struct vfp_ctx *vc, struct vfp_entry *vfe, void *p, ssize_t *lp) { struct vgz *vg; enum vgzret_e vr = VGZ_ERROR; const void *dp; ssize_t dl; enum vfp_status vp; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vfe, VFP_ENTRY_MAGIC); CAST_OBJ_NOTNULL(vg, vfe->priv1, VGZ_MAGIC); AN(p); AN(lp); vp = VFP_Suck(vc, p, lp); if (vp == VFP_ERROR) return (vp); if (*lp > 0 || vp == VFP_END) { VGZ_Ibuf(vg, p, *lp); do { VGZ_Obuf(vg, vg->m_buf, vg->m_sz); vr = VGZ_Gunzip(vg, &dp, &dl); if (vr == VGZ_END && !VGZ_IbufEmpty(vg)) return (VFP_Error(vc, "Junk after gzip data")); if (vr < VGZ_OK) return (VFP_Error(vc, "Invalid Gzip data: %s", vgz_msg(vg))); } while (!VGZ_IbufEmpty(vg)); } VGZ_UpdateObj(vc, vg, VUA_UPDATE); if (vp == VFP_END) { if (vr != VGZ_END) return (VFP_Error(vc, "tGunzip failed")); VGZ_UpdateObj(vc, vg, VUA_END_GUNZIP); } return (vp); } /*--------------------------------------------------------------------*/ static void v_matchproto_(vfp_fini_f) vfp_gzip_fini(struct vfp_ctx *vc, struct vfp_entry *vfe) { struct vgz *vg; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vfe, VFP_ENTRY_MAGIC); if (vfe->priv1 != NULL) { TAKE_OBJ_NOTNULL(vg, &vfe->priv1, VGZ_MAGIC); (void)VGZ_Destroy(&vg); } } /*--------------------------------------------------------------------*/ const struct vfp VFP_gunzip = { .name = "gunzip", .init = vfp_gzip_init, .pull = vfp_gunzip_pull, .fini = vfp_gzip_fini, .priv1 = "U F -", }; const struct vfp VFP_gzip = { .name = "gzip", .init = vfp_gzip_init, .pull = vfp_gzip_pull, .fini = vfp_gzip_fini, .priv1 = "G F -", }; const struct vfp VFP_testgunzip = { .name = "testgunzip", .init = vfp_gzip_init, .pull = vfp_testgunzip_pull, .fini = vfp_gzip_fini, .priv1 = "u F -", }; varnish-7.5.0/bin/varnishd/cache/cache_hash.c000066400000000000000000000713411457605730600210630ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * This is the central hash-table code, it relies on a chosen hash * implementation only for the actual hashing, all the housekeeping * happens here. 
 *
 * We have two kinds of structures, objecthead and object. An objecthead
 * corresponds to a given (Host:, URL) tuple, and the objects hung from
 * the objecthead may represent various variant instances (i.e.: Vary:
 * header, different TTL etc) of that web-entity.
 *
 * Each objecthead has a mutex which locks both its own fields, the
 * list of objects and fields in the objects.
 *
 * The hash implementation must supply a reference count facility on
 * the objecthead, and return with a reference held after a lookup.
 *
 * Lookups in the hash implementation return with a ref held and each
 * object hung from the objhead holds a ref as well.
 *
 * Objects have refcounts which are locked by the objecthead mutex.
 *
 * New objects are always marked busy, and they can go from busy to
 * not busy only once.
 */

#include "config.h"

#include <stdio.h>
#include <stdlib.h>

#include "cache_varnishd.h"
#include "cache/cache_objhead.h"
#include "cache/cache_transport.h"

#include "hash/hash_slinger.h"

#include "vsha256.h"

struct rush {
	unsigned		magic;
#define RUSH_MAGIC		0xa1af5f01
	VTAILQ_HEAD(,req)	reqs;
};

static const struct hash_slinger *hash;
static struct objhead *private_oh;

static void hsh_rush1(const struct worker *, struct objhead *,
    struct rush *, int);
static void hsh_rush2(struct worker *, struct rush *);
static int hsh_deref_objhead(struct worker *wrk, struct objhead **poh);
static int hsh_deref_objhead_unlock(struct worker *wrk, struct objhead **poh,
    int);

/*---------------------------------------------------------------------*/

#define VCF_RETURN(x) const struct vcf_return VCF_##x[1] = { \
	{ .name = #x, } \
};

VCF_RETURNS()
#undef VCF_RETURN

/*---------------------------------------------------------------------*/

static struct objhead *
hsh_newobjhead(void)
{
	struct objhead *oh;

	ALLOC_OBJ(oh, OBJHEAD_MAGIC);
	XXXAN(oh);
	oh->refcnt = 1;
	VTAILQ_INIT(&oh->objcs);
	VTAILQ_INIT(&oh->waitinglist);
	Lck_New(&oh->mtx, lck_objhdr);
	return (oh);
}

/*---------------------------------------------------------------------*/
/* Precreate an objhead and object for later use */

static void
hsh_prealloc(struct worker *wrk)
{

	CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC);
	if (wrk->wpriv->nobjcore == NULL)
		wrk->wpriv->nobjcore = ObjNew(wrk);
	CHECK_OBJ_NOTNULL(wrk->wpriv->nobjcore, OBJCORE_MAGIC);

	if (wrk->wpriv->nobjhead == NULL) {
		wrk->wpriv->nobjhead = hsh_newobjhead();
		wrk->stats->n_objecthead++;
	}
	CHECK_OBJ_NOTNULL(wrk->wpriv->nobjhead, OBJHEAD_MAGIC);

	if (hash->prep != NULL)
		hash->prep(wrk);
}

/*---------------------------------------------------------------------*/

struct objcore *
HSH_Private(const struct worker *wrk)
{
	struct objcore *oc;

	CHECK_OBJ_NOTNULL(private_oh, OBJHEAD_MAGIC);

	oc = ObjNew(wrk);
	AN(oc);
	oc->refcnt = 1;
	oc->objhead = private_oh;
	oc->flags |= OC_F_PRIVATE;
	Lck_Lock(&private_oh->mtx);
	VTAILQ_INSERT_TAIL(&private_oh->objcs, oc, hsh_list);
	private_oh->refcnt++;
	Lck_Unlock(&private_oh->mtx);
	return (oc);
}

/*---------------------------------------------------------------------*/

void
HSH_Cleanup(const struct worker *wrk)
{

	CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC);
	CHECK_OBJ_NOTNULL(wrk->wpriv, WORKER_PRIV_MAGIC);
	if (wrk->wpriv->nobjcore != NULL)
		ObjDestroy(wrk, &wrk->wpriv->nobjcore);

	if (wrk->wpriv->nobjhead != NULL) {
		CHECK_OBJ(wrk->wpriv->nobjhead, OBJHEAD_MAGIC);
		Lck_Delete(&wrk->wpriv->nobjhead->mtx);
		FREE_OBJ(wrk->wpriv->nobjhead);
		wrk->stats->n_objecthead--;
	}
	if (wrk->wpriv->nhashpriv != NULL) {
		/* XXX: If needed, add slinger method for this */
		free(wrk->wpriv->nhashpriv);
		wrk->wpriv->nhashpriv = NULL;
	}
}

void
HSH_DeleteObjHead(const struct worker *wrk, struct objhead *oh) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); AZ(oh->refcnt); assert(VTAILQ_EMPTY(&oh->objcs)); assert(VTAILQ_EMPTY(&oh->waitinglist)); Lck_Delete(&oh->mtx); wrk->stats->n_objecthead--; FREE_OBJ(oh); } void HSH_AddString(struct req *req, void *ctx, const char *str) { CHECK_OBJ_NOTNULL(req, REQ_MAGIC); AN(ctx); if (str != NULL) { VSHA256_Update(ctx, str, strlen(str)); VSLbs(req->vsl, SLT_Hash, TOSTRAND(str)); } else VSHA256_Update(ctx, &str, 1); } /*--------------------------------------------------------------------- * This is a debugging hack to enable testing of boundary conditions * in the hash algorithm. * We trap the first 9 different digests and translate them to different * digests with edge bit conditions */ static struct hsh_magiclist { unsigned char was[VSHA256_LEN]; unsigned char now[VSHA256_LEN]; } hsh_magiclist[] = { { .now = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 } }, { .now = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 } }, { .now = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02 } }, { .now = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40 } }, { .now = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80 } }, { .now = { 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 } }, { .now = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 } }, { .now = { 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 } }, { .now = { 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 } }, }; #define HSH_NMAGIC (sizeof hsh_magiclist / sizeof hsh_magiclist[0]) static void hsh_testmagic(void *result) { size_t i, j; static size_t nused = 0; for (i = 0; i < nused; i++) if (!memcmp(hsh_magiclist[i].was, result, VSHA256_LEN)) break; if (i == nused && i < HSH_NMAGIC) memcpy(hsh_magiclist[nused++].was, result, VSHA256_LEN); if (i == nused) return; assert(i < HSH_NMAGIC); fprintf(stderr, "HASHMAGIC: <"); for (j = 0; j < VSHA256_LEN; j++) fprintf(stderr, "%02x", ((unsigned char*)result)[j]); fprintf(stderr, "> -> <"); memcpy(result, hsh_magiclist[i].now, VSHA256_LEN); for (j = 0; j < VSHA256_LEN; j++) fprintf(stderr, "%02x", ((unsigned char*)result)[j]); fprintf(stderr, ">\n"); } 
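/*
 * Note: hsh_testmagic() above is static and is only reached from
 * HSH_Lookup() below when the DBG_HASHEDGE debug bit is set, so the
 * digest remapping in hsh_magiclist never affects normal operation.
 */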
/*--------------------------------------------------------------------- * Insert an object which magically appears out of nowhere or, more likely, * comes off some persistent storage device. * Insert it with a reference held. */ void HSH_Insert(struct worker *wrk, const void *digest, struct objcore *oc, struct ban *ban) { struct objhead *oh; struct rush rush; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(wrk->wpriv, WORKER_PRIV_MAGIC); AN(digest); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AN(ban); AN(oc->flags & OC_F_BUSY); AZ(oc->flags & OC_F_PRIVATE); assert(oc->refcnt == 1); INIT_OBJ(&rush, RUSH_MAGIC); hsh_prealloc(wrk); AN(wrk->wpriv->nobjhead); oh = hash->lookup(wrk, digest, &wrk->wpriv->nobjhead); CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); Lck_AssertHeld(&oh->mtx); assert(oh->refcnt > 0); /* Mark object busy and insert (precreated) objcore in objecthead. The new object inherits our objhead reference. */ oc->objhead = oh; VTAILQ_INSERT_TAIL(&oh->objcs, oc, hsh_list); EXP_RefNewObjcore(oc); Lck_Unlock(&oh->mtx); BAN_RefBan(oc, ban); AN(oc->ban); /* Move the object first in the oh list, unbusy it and run the waitinglist if necessary */ Lck_Lock(&oh->mtx); VTAILQ_REMOVE(&oh->objcs, oc, hsh_list); VTAILQ_INSERT_HEAD(&oh->objcs, oc, hsh_list); oc->flags &= ~OC_F_BUSY; if (!VTAILQ_EMPTY(&oh->waitinglist)) hsh_rush1(wrk, oh, &rush, HSH_RUSH_POLICY); Lck_Unlock(&oh->mtx); hsh_rush2(wrk, &rush); EXP_Insert(wrk, oc); } /*--------------------------------------------------------------------- */ static struct objcore * hsh_insert_busyobj(const struct worker *wrk, struct objhead *oh) { struct objcore *oc; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(wrk->wpriv, WORKER_PRIV_MAGIC); CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); Lck_AssertHeld(&oh->mtx); oc = wrk->wpriv->nobjcore; wrk->wpriv->nobjcore = NULL; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AN(oc->flags & OC_F_BUSY); oc->refcnt = 1; /* Owned by busyobj */ oc->objhead = oh; VTAILQ_INSERT_TAIL(&oh->objcs, oc, hsh_list); return (oc); } /*--------------------------------------------------------------------- */ enum lookup_e HSH_Lookup(struct req *req, struct objcore **ocp, struct objcore **bocp) { struct worker *wrk; struct objhead *oh; struct objcore *oc; struct objcore *exp_oc; const struct vcf_return *vr; vtim_real exp_t_origin; int busy_found; const uint8_t *vary; intmax_t boc_progress; unsigned xid = 0; float dttl = 0.0; AN(ocp); *ocp = NULL; AN(bocp); *bocp = NULL; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); wrk = req->wrk; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(wrk->wpriv, WORKER_PRIV_MAGIC); CHECK_OBJ_NOTNULL(req->http, HTTP_MAGIC); CHECK_OBJ_ORNULL(req->vcf, VCF_MAGIC); AN(hash); hsh_prealloc(wrk); if (DO_DEBUG(DBG_HASHEDGE)) hsh_testmagic(req->digest); if (req->hash_objhead != NULL) { /* * This req came off the waiting list, and brings an * oh refcnt with it. */ CHECK_OBJ_NOTNULL(req->hash_objhead, OBJHEAD_MAGIC); oh = req->hash_objhead; Lck_Lock(&oh->mtx); req->hash_objhead = NULL; } else { AN(wrk->wpriv->nobjhead); oh = hash->lookup(wrk, req->digest, &wrk->wpriv->nobjhead); } CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); Lck_AssertHeld(&oh->mtx); if (req->hash_always_miss) { /* XXX: should we do predictive Vary in this case ? 
*/ /* Insert new objcore in objecthead and release mutex */ *bocp = hsh_insert_busyobj(wrk, oh); /* NB: no deref of objhead, new object inherits reference */ Lck_Unlock(&oh->mtx); return (HSH_MISS); } assert(oh->refcnt > 0); busy_found = 0; exp_oc = NULL; exp_t_origin = 0.0; VTAILQ_FOREACH(oc, &oh->objcs, hsh_list) { /* Must be at least our own ref + the objcore we examine */ assert(oh->refcnt > 1); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); assert(oc->objhead == oh); assert(oc->refcnt > 0); if (oc->flags & OC_F_DYING) continue; if (oc->flags & OC_F_FAILED) continue; CHECK_OBJ_ORNULL(oc->boc, BOC_MAGIC); if (oc->flags & OC_F_BUSY) { if (req->hash_ignore_busy) continue; if (oc->boc && oc->boc->vary != NULL && !req->hash_ignore_vary && !VRY_Match(req, oc->boc->vary)) { wrk->strangelove++; continue; } busy_found = 1; continue; } if (oc->ttl <= 0.) continue; if (BAN_CheckObject(wrk, oc, req)) { oc->flags |= OC_F_DYING; EXP_Remove(oc, NULL); continue; } if (!req->hash_ignore_vary && ObjHasAttr(wrk, oc, OA_VARY)) { vary = ObjGetAttr(wrk, oc, OA_VARY, NULL); AN(vary); if (!VRY_Match(req, vary)) { wrk->strangelove++; continue; } } if (req->vcf != NULL) { vr = req->vcf->func(req, &oc, &exp_oc, 0); if (vr == VCF_CONTINUE) continue; if (vr == VCF_MISS) { oc = NULL; break; } if (vr == VCF_HIT) break; assert(vr == VCF_DEFAULT); } if (EXP_Ttl(req, oc) > req->t_req) { assert(oh->refcnt > 1); assert(oc->objhead == oh); break; } if (EXP_Ttl(NULL, oc) < req->t_req && /* ignore req.ttl */ oc->t_origin > exp_t_origin) { /* record the newest object */ exp_oc = oc; exp_t_origin = oc->t_origin; assert(oh->refcnt > 1); assert(exp_oc->objhead == oh); } } if (req->vcf != NULL) (void)req->vcf->func(req, &oc, &exp_oc, 1); if (oc != NULL && oc->flags & OC_F_HFP) { xid = VXID(ObjGetXID(wrk, oc)); dttl = EXP_Dttl(req, oc); AN(hsh_deref_objhead_unlock(wrk, &oh, HSH_RUSH_POLICY)); wrk->stats->cache_hitpass++; VSLb(req->vsl, SLT_HitPass, "%u %.6f", xid, dttl); return (HSH_HITPASS); } if (oc != NULL) { *ocp = oc; oc->refcnt++; if (oc->flags & OC_F_HFM) { xid = VXID(ObjGetXID(wrk, oc)); dttl = EXP_Dttl(req, oc); *bocp = hsh_insert_busyobj(wrk, oh); Lck_Unlock(&oh->mtx); wrk->stats->cache_hitmiss++; VSLb(req->vsl, SLT_HitMiss, "%u %.6f", xid, dttl); return (HSH_HITMISS); } oc->hits++; boc_progress = oc->boc == NULL ? -1 : oc->boc->fetched_so_far; AN(hsh_deref_objhead_unlock(wrk, &oh, HSH_RUSH_POLICY)); Req_LogHit(wrk, req, oc, boc_progress); return (HSH_HIT); } if (exp_oc != NULL && exp_oc->flags & OC_F_HFM) { /* * expired HFM ("grace/keep HFM") * * XXX should HFM objects actually have grace/keep ? * XXX also: why isn't *ocp = exp_oc ? 
*/ xid = VXID(ObjGetXID(wrk, exp_oc)); dttl = EXP_Dttl(req, exp_oc); *bocp = hsh_insert_busyobj(wrk, oh); Lck_Unlock(&oh->mtx); wrk->stats->cache_hitmiss++; VSLb(req->vsl, SLT_HitMiss, "%u %.6f", xid, dttl); return (HSH_HITMISS); } if (exp_oc != NULL && exp_oc->boc != NULL) boc_progress = exp_oc->boc->fetched_so_far; else boc_progress = -1; if (!busy_found) { *bocp = hsh_insert_busyobj(wrk, oh); if (exp_oc != NULL) { exp_oc->refcnt++; *ocp = exp_oc; if (EXP_Ttl_grace(req, exp_oc) >= req->t_req) { exp_oc->hits++; Lck_Unlock(&oh->mtx); Req_LogHit(wrk, req, exp_oc, boc_progress); return (HSH_GRACE); } } Lck_Unlock(&oh->mtx); return (HSH_MISS); } AN(busy_found); if (exp_oc != NULL && EXP_Ttl_grace(req, exp_oc) >= req->t_req) { /* we do not wait on the busy object if in grace */ exp_oc->refcnt++; *ocp = exp_oc; exp_oc->hits++; AN(hsh_deref_objhead_unlock(wrk, &oh, 0)); Req_LogHit(wrk, req, exp_oc, boc_progress); return (HSH_GRACE); } /* There are one or more busy objects, wait for them */ VTAILQ_INSERT_TAIL(&oh->waitinglist, req, w_list); AZ(req->hash_ignore_busy); /* * The objhead reference transfers to the sess, we get it * back when the sess comes off the waiting list and * calls us again */ req->hash_objhead = oh; req->wrk = NULL; req->waitinglist = 1; if (DO_DEBUG(DBG_WAITINGLIST)) VSLb(req->vsl, SLT_Debug, "on waiting list <%p>", oh); Lck_Unlock(&oh->mtx); wrk->stats->busy_sleep++; return (HSH_BUSY); } /*--------------------------------------------------------------------- * Pick the req's we are going to rush from the waiting list */ static void hsh_rush1(const struct worker *wrk, struct objhead *oh, struct rush *r, int max) { int i; struct req *req; if (max == 0) return; if (max == HSH_RUSH_POLICY) max = cache_param->rush_exponent; assert(max > 0); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); CHECK_OBJ_NOTNULL(r, RUSH_MAGIC); VTAILQ_INIT(&r->reqs); Lck_AssertHeld(&oh->mtx); for (i = 0; i < max; i++) { req = VTAILQ_FIRST(&oh->waitinglist); if (req == NULL) break; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); wrk->stats->busy_wakeup++; AZ(req->wrk); VTAILQ_REMOVE(&oh->waitinglist, req, w_list); VTAILQ_INSERT_TAIL(&r->reqs, req, w_list); req->waitinglist = 0; } } /*--------------------------------------------------------------------- * Rush req's that came from waiting list. */ static void hsh_rush2(struct worker *wrk, struct rush *r) { struct req *req; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(r, RUSH_MAGIC); while (!VTAILQ_EMPTY(&r->reqs)) { req = VTAILQ_FIRST(&r->reqs); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); VTAILQ_REMOVE(&r->reqs, req, w_list); DSL(DBG_WAITINGLIST, req->vsl->wid, "off waiting list"); if (req->transport->reembark != NULL) { // For ESI includes req->transport->reembark(wrk, req); } else { /* * We ignore the queue limits which apply to new * requests because if we fail to reschedule there * may be vmod_privs to cleanup and we need a proper * workerthread for that. 
*/ AZ(Pool_Task(req->sp->pool, req->task, TASK_QUEUE_RUSH)); } } } /*--------------------------------------------------------------------- * Purge an entire objhead */ unsigned HSH_Purge(struct worker *wrk, struct objhead *oh, vtim_real ttl_now, vtim_dur ttl, vtim_dur grace, vtim_dur keep) { struct objcore *oc, *oc_nows[2], **ocp; unsigned i, j, n, n_max, total = 0; int is_purge; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); is_purge = (ttl == 0 && grace == 0 && keep == 0); n_max = WS_ReserveLumps(wrk->aws, sizeof *ocp); if (n_max < 2) { /* No space on the workspace. Give it a stack buffer of 2 * elements, which is the minimum for the algorithm * below. */ ocp = oc_nows; n_max = 2; } else ocp = WS_Reservation(wrk->aws); AN(ocp); /* Note: This algorithm uses OC references in the list as * bookmarks, in order to know how far into the list we were when * releasing the mutex partway through and want to resume * again. This relies on the list not being reordered while we are * not holding the mutex. The only place where that happens is in * HSH_Unbusy(), where an OC_F_BUSY OC is moved first in the * list. This does not cause problems because we skip OC_F_BUSY * OCs. */ Lck_Lock(&oh->mtx); oc = VTAILQ_FIRST(&oh->objcs); n = 0; while (1) { for (; n < n_max && oc != NULL; oc = VTAILQ_NEXT(oc, hsh_list)) { CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); assert(oc->objhead == oh); if (oc->flags & OC_F_BUSY) { /* We cannot purge busy objects here, because * their owners have special rights to them, * and may nuke them without concern for the * refcount, which by definition always must * be one, so they don't check. */ continue; } if (oc->flags & OC_F_DYING) continue; if (is_purge) oc->flags |= OC_F_DYING; oc->refcnt++; ocp[n++] = oc; } Lck_Unlock(&oh->mtx); if (n == 0) { /* No eligible objcores found. We are finished. */ break; } j = n; if (oc != NULL) { /* There are more objects on the objhead that we * have not yet looked at, but no more space on * the objcore reference list. Do not process the * last one, it will be used as the bookmark into * the objcore list for the next iteration of the * outer loop. */ j--; assert(j >= 1); /* True because n_max >= 2 */ } for (i = 0; i < j; i++) { CHECK_OBJ_NOTNULL(ocp[i], OBJCORE_MAGIC); if (is_purge) EXP_Remove(ocp[i], NULL); else EXP_Rearm(ocp[i], ttl_now, ttl, grace, keep); (void)HSH_DerefObjCore(wrk, &ocp[i], 0); AZ(ocp[i]); total++; } if (j == n) { /* No bookmark set, that means we got to the end * of the objcore list in the previous run and are * finished. */ break; } Lck_Lock(&oh->mtx); /* Move the bookmark first and continue scanning the * objcores */ CHECK_OBJ_NOTNULL(ocp[j], OBJCORE_MAGIC); ocp[0] = ocp[j]; n = 1; oc = VTAILQ_NEXT(ocp[0], hsh_list); CHECK_OBJ_ORNULL(oc, OBJCORE_MAGIC); } WS_Release(wrk->aws, 0); if (is_purge) Pool_PurgeStat(total); return (total); } /*--------------------------------------------------------------------- * Fail an objcore */ void HSH_Fail(struct objcore *oc) { struct objhead *oh; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); oh = oc->objhead; CHECK_OBJ(oh, OBJHEAD_MAGIC); /* * We have to have either a busy bit, so that HSH_Lookup * will not consider this oc, or an object hung of the oc * so that it can consider it. 
*/ assert((oc->flags & OC_F_BUSY) || (oc->stobj->stevedore != NULL)); Lck_Lock(&oh->mtx); oc->flags |= OC_F_FAILED; Lck_Unlock(&oh->mtx); } /*--------------------------------------------------------------------- * Mark a fetch we will not need as cancelled */ static void hsh_cancel(struct objcore *oc) { struct objhead *oh; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); oh = oc->objhead; CHECK_OBJ(oh, OBJHEAD_MAGIC); Lck_Lock(&oh->mtx); oc->flags |= OC_F_CANCEL; Lck_Unlock(&oh->mtx); } /*--------------------------------------------------------------------- * Cancel a fetch when the client does not need it any more */ void HSH_Cancel(struct worker *wrk, struct objcore *oc, struct boc *boc) { struct boc *bocref = NULL; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); if ((oc->flags & (OC_F_PRIVATE | OC_F_HFM | OC_F_HFP)) == 0) return; /* * NB: we use two distinct variables to only release the reference if * we had to acquire one. The caller-provided boc is optional. */ if (boc == NULL) bocref = boc = HSH_RefBoc(oc); CHECK_OBJ_ORNULL(boc, BOC_MAGIC); if (oc->flags & OC_F_HFP) AN(oc->flags & OC_F_HFM); if (boc != NULL) { hsh_cancel(oc); ObjWaitState(oc, BOS_FINISHED); } if (bocref != NULL) HSH_DerefBoc(wrk, oc); ObjSlim(wrk, oc); } /*--------------------------------------------------------------------- * Unbusy an objcore when the object is completely fetched. */ void HSH_Unbusy(struct worker *wrk, struct objcore *oc) { struct objhead *oh; struct rush rush; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(oc->boc, BOC_MAGIC); oh = oc->objhead; CHECK_OBJ(oh, OBJHEAD_MAGIC); INIT_OBJ(&rush, RUSH_MAGIC); AN(oc->stobj->stevedore); AN(oc->flags & OC_F_BUSY); assert(oh->refcnt > 0); assert(oc->refcnt > 0); if (!(oc->flags & OC_F_PRIVATE)) { BAN_NewObjCore(oc); AN(oc->ban); } /* XXX: pretouch neighbors on oh->objcs to prevent page-on under mtx */ Lck_Lock(&oh->mtx); assert(oh->refcnt > 0); assert(oc->refcnt > 0); if (!(oc->flags & OC_F_PRIVATE)) EXP_RefNewObjcore(oc); /* Takes a ref for expiry */ /* XXX: strictly speaking, we should sort in Date: order. */ VTAILQ_REMOVE(&oh->objcs, oc, hsh_list); VTAILQ_INSERT_HEAD(&oh->objcs, oc, hsh_list); oc->flags &= ~OC_F_BUSY; if (!VTAILQ_EMPTY(&oh->waitinglist)) { assert(oh->refcnt > 1); hsh_rush1(wrk, oh, &rush, HSH_RUSH_POLICY); } Lck_Unlock(&oh->mtx); EXP_Insert(wrk, oc); /* Does nothing unless EXP_RefNewObjcore was * called */ hsh_rush2(wrk, &rush); } /*==================================================================== * HSH_Kill() * * It's dead Jim, kick it... */ void HSH_Kill(struct objcore *oc) { HSH_Replace(oc, NULL); } void HSH_Replace(struct objcore *oc, const struct objcore *new_oc) { CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(oc->objhead, OBJHEAD_MAGIC); if (new_oc != NULL) { CHECK_OBJ(new_oc, OBJCORE_MAGIC); assert(oc->objhead == new_oc->objhead); } Lck_Lock(&oc->objhead->mtx); oc->flags |= OC_F_DYING; Lck_Unlock(&oc->objhead->mtx); EXP_Remove(oc, new_oc); } /*==================================================================== * HSH_Snipe() * * If objcore is idle, gain a ref and mark it dead. 
*/ int HSH_Snipe(const struct worker *wrk, struct objcore *oc) { int retval = 0; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(oc->objhead, OBJHEAD_MAGIC); if (oc->refcnt == 1 && !Lck_Trylock(&oc->objhead->mtx)) { if (oc->refcnt == 1 && !(oc->flags & OC_F_DYING)) { oc->flags |= OC_F_DYING; oc->refcnt++; retval = 1; } Lck_Unlock(&oc->objhead->mtx); } if (retval) EXP_Remove(oc, NULL); return (retval); } /*--------------------------------------------------------------------- * Gain a reference on an objcore */ void HSH_Ref(struct objcore *oc) { struct objhead *oh; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); oh = oc->objhead; CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); Lck_Lock(&oh->mtx); assert(oc->refcnt > 0); oc->refcnt++; Lck_Unlock(&oh->mtx); } /*--------------------------------------------------------------------- * Gain a reference on the busyobj, if the objcore has one */ struct boc * HSH_RefBoc(const struct objcore *oc) { struct objhead *oh; struct boc *boc; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); oh = oc->objhead; CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); if (oc->boc == NULL) return (NULL); Lck_Lock(&oh->mtx); assert(oc->refcnt > 0); boc = oc->boc; CHECK_OBJ_ORNULL(boc, BOC_MAGIC); if (boc != NULL) { assert(boc->refcount > 0); if (boc->state < BOS_FINISHED) boc->refcount++; else boc = NULL; } Lck_Unlock(&oh->mtx); return (boc); } void HSH_DerefBoc(struct worker *wrk, struct objcore *oc) { struct boc *boc; unsigned r; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); boc = oc->boc; CHECK_OBJ_NOTNULL(boc, BOC_MAGIC); Lck_Lock(&oc->objhead->mtx); assert(oc->refcnt > 0); assert(boc->refcount > 0); r = --boc->refcount; if (r == 0) oc->boc = NULL; Lck_Unlock(&oc->objhead->mtx); if (r == 0) ObjBocDone(wrk, oc, &boc); } /*-------------------------------------------------------------------- * Dereference objcore * * Returns zero if target was destroyed. 
*/ int HSH_DerefObjCore(struct worker *wrk, struct objcore **ocp, int rushmax) { struct objcore *oc; struct objhead *oh; struct rush rush; int r; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); TAKE_OBJ_NOTNULL(oc, ocp, OBJCORE_MAGIC); assert(oc->refcnt > 0); INIT_OBJ(&rush, RUSH_MAGIC); oh = oc->objhead; CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); Lck_Lock(&oh->mtx); assert(oh->refcnt > 0); r = --oc->refcnt; if (!r) VTAILQ_REMOVE(&oh->objcs, oc, hsh_list); if (!VTAILQ_EMPTY(&oh->waitinglist)) { assert(oh->refcnt > 1); hsh_rush1(wrk, oh, &rush, rushmax); } Lck_Unlock(&oh->mtx); hsh_rush2(wrk, &rush); if (r != 0) return (r); AZ(oc->exp_flags); BAN_DestroyObj(oc); AZ(oc->ban); if (oc->stobj->stevedore != NULL) ObjFreeObj(wrk, oc); ObjDestroy(wrk, &oc); /* Drop our ref on the objhead */ assert(oh->refcnt > 0); (void)hsh_deref_objhead(wrk, &oh); return (0); } static int hsh_deref_objhead_unlock(struct worker *wrk, struct objhead **poh, int max) { struct objhead *oh; struct rush rush; int r; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); TAKE_OBJ_NOTNULL(oh, poh, OBJHEAD_MAGIC); Lck_AssertHeld(&oh->mtx); if (oh == private_oh) { assert(VTAILQ_EMPTY(&oh->waitinglist)); assert(oh->refcnt > 1); oh->refcnt--; Lck_Unlock(&oh->mtx); return (1); } INIT_OBJ(&rush, RUSH_MAGIC); if (!VTAILQ_EMPTY(&oh->waitinglist)) { assert(oh->refcnt > 1); hsh_rush1(wrk, oh, &rush, max); } if (oh->refcnt == 1) assert(VTAILQ_EMPTY(&oh->waitinglist)); assert(oh->refcnt > 0); r = hash->deref(wrk, oh); /* Unlocks oh->mtx */ hsh_rush2(wrk, &rush); return (r); } static int hsh_deref_objhead(struct worker *wrk, struct objhead **poh) { struct objhead *oh; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); TAKE_OBJ_NOTNULL(oh, poh, OBJHEAD_MAGIC); Lck_Lock(&oh->mtx); return (hsh_deref_objhead_unlock(wrk, &oh, 0)); } void HSH_Init(const struct hash_slinger *slinger) { assert(DIGEST_LEN == VSHA256_LEN); /* avoid #include pollution */ hash = slinger; if (hash->start != NULL) hash->start(); private_oh = hsh_newobjhead(); private_oh->refcnt = 1; } varnish-7.5.0/bin/varnishd/cache/cache_http.c000066400000000000000000001065101457605730600211140ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2017 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * HTTP request storage and manipulation */ #include "config.h" #include "cache_varnishd.h" #include #include #include "common/heritage.h" #include "vct.h" #include "vend.h" #include "vnum.h" #include "vtim.h" #define BODYSTATUS(U, l, n, a, k) \ const struct body_status BS_##U[1] = {{ \ .name = #l, \ .nbr = n, \ .avail = a, \ .length_known = k \ }}; #include "tbl/body_status.h" #define HTTPH(a, b, c) char b[] = "*" a ":"; #include "tbl/http_headers.h" const char H__Status[] = "\010:status:"; const char H__Proto[] = "\007:proto:"; const char H__Reason[] = "\010:reason:"; static char * via_hdr; /*-------------------------------------------------------------------- * Perfect hash to rapidly recognize headers from tbl/http_headers.h * which have non-zero flags. * * A suitable algorithm can be found with `gperf`: * * tr '" ,' ' ' < include/tbl/http_headers.h | * awk '$1 == "H(" {print $2}' | * gperf --ignore-case * */ #define GPERF_MIN_WORD_LENGTH 2 #define GPERF_MAX_WORD_LENGTH 19 #define GPERF_MAX_HASH_VALUE 79 static const unsigned char http_asso_values[256] = { 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 0, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 5, 80, 20, 0, 0, 5, 10, 5, 5, 80, 0, 15, 0, 20, 80, 40, 80, 0, 35, 10, 20, 55, 45, 0, 0, 80, 80, 80, 80, 80, 80, 80, 5, 80, 20, 0, 0, 5, 10, 5, 5, 80, 0, 15, 0, 20, 80, 40, 80, 0, 35, 10, 20, 55, 45, 0, 0, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80 }; static struct http_hdrflg { char *hdr; unsigned flag; } http_hdrflg[GPERF_MAX_HASH_VALUE + 1] = { { NULL }, { NULL }, { NULL }, { NULL }, { H_Date }, { H_Range }, { NULL }, { H_Referer }, { H_Age }, { H_From }, { H_Keep_Alive }, { H_Retry_After }, { H_TE }, { H_If_Range }, { H_ETag }, { H_X_Forwarded_For }, { H_Expect }, { H_Trailer }, { H_If_Match }, { H_Host }, { H_Accept_Language }, { H_Accept }, { H_If_Modified_Since }, { H_If_None_Match }, { H_If_Unmodified_Since }, { NULL }, { H_Cookie }, { H_Upgrade }, { H_Last_Modified }, { H_Accept_Charset }, { H_Accept_Encoding }, { H_Content_MD5 }, { H_Content_Type }, { H_Content_Range }, { NULL }, { NULL }, { H_Content_Language }, { H_Transfer_Encoding }, { H_Authorization }, { H_Content_Length }, { H_User_Agent }, { H_Server }, { H_Expires }, { H_Location }, { NULL }, { H_Set_Cookie }, { H_Content_Encoding }, { H_Max_Forwards }, { H_Cache_Control }, { NULL }, { H_Connection }, { H_Pragma }, { NULL 
}, { H_Accept_Ranges }, { H_HTTP2_Settings }, { H_Allow }, { H_Content_Location }, { NULL }, { H_Proxy_Authenticate }, { H_Vary }, { NULL }, { H_WWW_Authenticate }, { H_Warning }, { H_Via }, { NULL }, { NULL }, { NULL }, { NULL }, { NULL }, { NULL }, { NULL }, { NULL }, { NULL }, { NULL }, { NULL }, { NULL }, { NULL }, { NULL }, { NULL }, { H_Proxy_Authorization } }; static struct http_hdrflg * http_hdr_flags(const char *b, const char *e) { unsigned u; struct http_hdrflg *retval; if (b == NULL || e == NULL) return (NULL); assert(b <= e); u = (unsigned)(e - b); assert(b + u == e); if (u < GPERF_MIN_WORD_LENGTH || u > GPERF_MAX_WORD_LENGTH) return (NULL); u += http_asso_values[(uint8_t)(e[-1])] + http_asso_values[(uint8_t)(b[0])]; if (u > GPERF_MAX_HASH_VALUE) return (NULL); retval = &http_hdrflg[u]; if (retval->hdr == NULL) return (NULL); if (!http_hdr_at(retval->hdr + 1, b, e - b)) return (NULL); return (retval); } /*--------------------------------------------------------------------*/ static void http_init_hdr(char *hdr, int flg) { struct http_hdrflg *f; hdr[0] = strlen(hdr + 1); f = http_hdr_flags(hdr + 1, hdr + hdr[0]); AN(f); assert(f->hdr == hdr); f->flag = flg; } void HTTP_Init(void) { struct vsb *vsb; #define HTTPH(a, b, c) http_init_hdr(b, c); #include "tbl/http_headers.h" vsb = VSB_new_auto(); AN(vsb); VSB_printf(vsb, "1.1 %s (Varnish/" PACKAGE_BRANCH ")", heritage.identity); AZ(VSB_finish(vsb)); REPLACE(via_hdr, VSB_data(vsb)); VSB_destroy(&vsb); } /*-------------------------------------------------------------------- * These two functions are in an incestuous relationship with the * order of macros in include/tbl/vsl_tags_http.h * * The http->logtag is the SLT_*Method enum, and we add to that, to * get the SLT_ to use. */ static void http_VSLH(const struct http *hp, unsigned hdr) { int i; if (hp->vsl != NULL) { assert(VXID_TAG(hp->vsl->wid)); i = hdr; if (i > HTTP_HDR_FIRST) i = HTTP_HDR_FIRST; i += hp->logtag; VSLbt(hp->vsl, (enum VSL_tag_e)i, hp->hd[hdr]); } } static void http_VSLH_del(const struct http *hp, unsigned hdr) { int i; if (hp->vsl != NULL) { /* We don't support unsetting stuff in the first line */ assert (hdr >= HTTP_HDR_FIRST); assert(VXID_TAG(hp->vsl->wid)); i = (HTTP_HDR_UNSET - HTTP_HDR_METHOD); i += hp->logtag; VSLbt(hp->vsl, (enum VSL_tag_e)i, hp->hd[hdr]); } } /*--------------------------------------------------------------------*/ void http_VSL_log(const struct http *hp) { unsigned u; for (u = 0; u < hp->nhd; u++) if (hp->hd[u].b != NULL) http_VSLH(hp, u); } /*--------------------------------------------------------------------*/ static void http_fail(const struct http *hp) { char id[WS_ID_SIZE]; VSC_C_main->losthdr++; WS_Id(hp->ws, id); VSLb(hp->vsl, SLT_Error, "out of workspace (%s)", id); WS_MarkOverflow(hp->ws); } /*-------------------------------------------------------------------- * List of canonical HTTP response code names from RFC2616 */ static struct http_msg { unsigned nbr; const char *status; const char *txt; } http_msg[] = { #define HTTP_RESP(n, t) { n, #n, t}, #include "tbl/http_response.h" { 0, "0", NULL } }; const char * http_Status2Reason(unsigned status, const char **sstr) { struct http_msg *mp; status %= 1000; assert(status >= 100); for (mp = http_msg; mp->nbr != 0 && mp->nbr <= status; mp++) if (mp->nbr == status) { if (sstr) *sstr = mp->status; return (mp->txt); } return ("Unknown HTTP Status"); } /*--------------------------------------------------------------------*/ unsigned HTTP_estimate(unsigned nhttp) { /* XXX: We trust the 
structs to size-aligned as necessary */ return (PRNDUP(sizeof(struct http) + sizeof(txt) * nhttp + nhttp)); } struct http * HTTP_create(void *p, uint16_t nhttp, unsigned len) { struct http *hp; hp = p; hp->magic = HTTP_MAGIC; hp->hd = (void*)(hp + 1); hp->shd = nhttp; hp->hdf = (void*)(hp->hd + nhttp); assert((unsigned char*)p + len == hp->hdf + PRNDUP(nhttp)); return (hp); } /*--------------------------------------------------------------------*/ void HTTP_Setup(struct http *hp, struct ws *ws, struct vsl_log *vsl, enum VSL_tag_e whence) { http_Teardown(hp); hp->nhd = HTTP_HDR_FIRST; hp->logtag = whence; hp->ws = ws; hp->vsl = vsl; } /*-------------------------------------------------------------------- * http_Teardown() is a safety feature, we use it to zap all http * structs once we're done with them, to minimize the risk that * old stale pointers exist to no longer valid stuff. */ void http_Teardown(struct http *hp) { CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); AN(hp->shd); memset(&hp->nhd, 0, sizeof *hp - offsetof(struct http, nhd)); memset(hp->hd, 0, sizeof *hp->hd * hp->shd); memset(hp->hdf, 0, sizeof *hp->hdf * hp->shd); } /*-------------------------------------------------------------------- * Duplicate the http content into another http * We cannot just memcpy the struct because the hd & hdf are private * storage to the struct http. */ void HTTP_Dup(struct http *to, const struct http * fm) { assert(fm->nhd <= to->shd); memcpy(to->hd, fm->hd, fm->nhd * sizeof *to->hd); memcpy(to->hdf, fm->hdf, fm->nhd * sizeof *to->hdf); to->nhd = fm->nhd; to->logtag = fm->logtag; to->status = fm->status; to->protover = fm->protover; } /*-------------------------------------------------------------------- * Clone the entire http structure, including vsl & ws */ void HTTP_Clone(struct http *to, const struct http * const fm) { HTTP_Dup(to, fm); to->vsl = fm->vsl; to->ws = fm->ws; } /*--------------------------------------------------------------------*/ void http_Proto(struct http *to) { const char *fm; fm = to->hd[HTTP_HDR_PROTO].b; if (fm != NULL && (fm[0] == 'H' || fm[0] == 'h') && (fm[1] == 'T' || fm[1] == 't') && (fm[2] == 'T' || fm[2] == 't') && (fm[3] == 'P' || fm[3] == 'p') && fm[4] == '/' && vct_isdigit(fm[5]) && fm[6] == '.' 
&& vct_isdigit(fm[7]) && fm[8] == '\0') { to->protover = 10 * (fm[5] - '0') + (fm[7] - '0'); } else { to->protover = 0; } } /*--------------------------------------------------------------------*/ void http_SetH(struct http *to, unsigned n, const char *header) { assert(n < to->nhd); AN(header); to->hd[n].b = TRUST_ME(header); to->hd[n].e = strchr(to->hd[n].b, '\0'); to->hdf[n] = 0; http_VSLH(to, n); if (n == HTTP_HDR_PROTO) http_Proto(to); } /*--------------------------------------------------------------------*/ static void http_PutField(struct http *to, int field, const char *string) { const char *p; CHECK_OBJ_NOTNULL(to, HTTP_MAGIC); p = WS_Copy(to->ws, string, -1); if (p == NULL) { http_fail(to); VSLbs(to->vsl, SLT_LostHeader, TOSTRAND(string)); return; } http_SetH(to, field, p); } /*--------------------------------------------------------------------*/ int http_IsHdr(const txt *hh, hdr_t hdr) { unsigned l; Tcheck(*hh); AN(hdr); l = hdr[0]; assert(l == strlen(hdr + 1)); assert(hdr[l] == ':'); hdr++; return (http_hdr_at(hdr, hh->b, l)); } /*--------------------------------------------------------------------*/ static unsigned http_findhdr(const struct http *hp, unsigned l, const char *hdr) { unsigned u; for (u = HTTP_HDR_FIRST; u < hp->nhd; u++) { Tcheck(hp->hd[u]); if (hp->hd[u].e < hp->hd[u].b + l + 1) continue; if (hp->hd[u].b[l] != ':') continue; if (!http_hdr_at(hdr, hp->hd[u].b, l)) continue; return (u); } return (0); } /*-------------------------------------------------------------------- * Count how many instances we have of this header */ unsigned http_CountHdr(const struct http *hp, hdr_t hdr) { unsigned retval = 0; unsigned u; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); for (u = HTTP_HDR_FIRST; u < hp->nhd; u++) { Tcheck(hp->hd[u]); if (http_IsHdr(&hp->hd[u], hdr)) retval++; } return (retval); } /*-------------------------------------------------------------------- * This function collapses multiple header lines of the same name. * The lines are joined with a comma, according to [rfc2616, 4.2bot, p32] */ void http_CollectHdr(struct http *hp, hdr_t hdr) { http_CollectHdrSep(hp, hdr, NULL); } /*-------------------------------------------------------------------- * You may prefer to collapse header fields using a different separator. * For Cookie headers, the separator is "; " for example. That's probably * the only example too. 
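 *
 * For illustration only (not from the Varnish sources): a stand-alone
 * sketch of the same collapsing idea, joining repeated field values
 * with a separator the way http_CollectHdrSep() does, but without the
 * workspace machinery.  The helper name collapse_values() is made up.
 *
 *	#include <stdio.h>
 *	#include <string.h>
 *
 *	// Join several field values into one header value, as allowed
 *	// for list-typed fields by RFC 2616 4.2.  Returns -1 if 'dst'
 *	// is too small, 0 on success.
 *	static int
 *	collapse_values(char *dst, size_t dstsz, const char * const *vals,
 *	    size_t nvals, const char *sep)
 *	{
 *		size_t u, len = 0, l, lsep = strlen(sep);
 *
 *		for (u = 0; u < nvals; u++) {
 *			l = strlen(vals[u]);
 *			if (len + l + (u ? lsep : 0) + 1 > dstsz)
 *				return (-1);	// would not fit
 *			if (u > 0) {
 *				memcpy(dst + len, sep, lsep);
 *				len += lsep;
 *			}
 *			memcpy(dst + len, vals[u], l);
 *			len += l;
 *		}
 *		dst[len] = '\0';
 *		return (0);
 *	}
 *
 *	int
 *	main(void)
 *	{
 *		const char *accept[] = {"text/html", "application/json;q=0.9"};
 *		char buf[128];
 *
 *		if (collapse_values(buf, sizeof buf, accept, 2, ", ") == 0)
 *			printf("Accept: %s\n", buf);	// one collapsed line
 *		return (0);
 *	}
 *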
*/ void http_CollectHdrSep(struct http *hp, hdr_t hdr, const char *sep) { unsigned u, l, lsep, ml, f, x, d; char *b = NULL, *e = NULL; const char *v; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); if (WS_Overflowed(hp->ws)) return; if (sep == NULL || *sep == '\0') sep = ", "; lsep = strlen(sep); l = hdr[0]; assert(l == strlen(hdr + 1)); assert(hdr[l] == ':'); f = http_findhdr(hp, l - 1, hdr + 1); if (f == 0) return; for (d = u = f + 1; u < hp->nhd; u++) { Tcheck(hp->hd[u]); if (!http_IsHdr(&hp->hd[u], hdr)) { if (d != u) { hp->hd[d] = hp->hd[u]; hp->hdf[d] = hp->hdf[u]; } d++; continue; } if (b == NULL) { /* Found second header, start our collection */ ml = WS_ReserveAll(hp->ws); b = WS_Reservation(hp->ws); e = b + ml; x = Tlen(hp->hd[f]); if (b + x >= e) { http_fail(hp); VSLbs(hp->vsl, SLT_LostHeader, TOSTRAND(hdr + 1)); WS_Release(hp->ws, 0); return; } memcpy(b, hp->hd[f].b, x); b += x; } AN(b); AN(e); /* Append the Nth header we found */ x = Tlen(hp->hd[u]) - l; v = hp->hd[u].b + *hdr; while (vct_issp(*v)) { v++; x--; } if (b + lsep + x >= e) { http_fail(hp); VSLbs(hp->vsl, SLT_LostHeader, TOSTRAND(hdr + 1)); WS_Release(hp->ws, 0); return; } memcpy(b, sep, lsep); b += lsep; memcpy(b, v, x); b += x; } if (b == NULL) return; hp->nhd = (uint16_t)d; AN(e); *b = '\0'; hp->hd[f].b = WS_Reservation(hp->ws); hp->hd[f].e = b; WS_ReleaseP(hp->ws, b + 1); } /*--------------------------------------------------------------------*/ int http_GetHdr(const struct http *hp, hdr_t hdr, const char **ptr) { unsigned u, l; const char *p; l = hdr[0]; assert(l == strlen(hdr + 1)); assert(hdr[l] == ':'); hdr++; u = http_findhdr(hp, l - 1, hdr); if (u == 0) { if (ptr != NULL) *ptr = NULL; return (0); } if (ptr != NULL) { p = hp->hd[u].b + l; while (vct_issp(*p)) p++; *ptr = p; } return (1); } /*----------------------------------------------------------------------------- * Split source string at any of the separators, return pointer to first * and last+1 char of substrings, with whitespace trimmed at both ends. * If sep being an empty string is shorthand for VCT::SP * If stop is NULL, src is NUL terminated. */ static int http_split(const char **src, const char *stop, const char *sep, const char **b, const char **e) { const char *p, *q; AN(src); AN(*src); AN(sep); AN(b); AN(e); if (stop == NULL) stop = strchr(*src, '\0'); for (p = *src; p < stop && (vct_issp(*p) || strchr(sep, *p)); p++) continue; if (p >= stop) { *b = NULL; *e = NULL; return (0); } *b = p; if (*sep == '\0') { for (q = p + 1; q < stop && !vct_issp(*q); q++) continue; *e = q; *src = q; return (1); } for (q = p + 1; q < stop && !strchr(sep, *q); q++) continue; *src = q; while (q > p && vct_issp(q[-1])) q--; *e = q; return (1); } /*----------------------------------------------------------------------------- * Comparison rule for tokens: * if target string starts with '"', we use memcmp() and expect closing * double quote as well * otherwise we use http_tok_at() * * On match we increment *bp past the token name. 
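 *
 * For illustration only (not from the Varnish sources): a simplified,
 * self-contained cousin of http_split() that walks a comma-separated
 * field value and reports each element with surrounding whitespace
 * trimmed.  It leans on strspn()/strcspn() instead of the explicit
 * pointer walk above; each_token() is a made-up name.
 *
 *	#include <stdio.h>
 *	#include <string.h>
 *
 *	static void
 *	each_token(const char *val)
 *	{
 *		const char *p = val, *e;
 *
 *		while (*p != '\0') {
 *			p += strspn(p, ", \t");		// skip separators + space
 *			if (*p == '\0')
 *				break;
 *			e = p + strcspn(p, ",");	// end of this element
 *			while (e > p && (e[-1] == ' ' || e[-1] == '\t'))
 *				e--;			// trim trailing space
 *			printf("token: '%.*s'\n", (int)(e - p), p);
 *			p += strcspn(p, ",");		// move past the element
 *		}
 *	}
 *
 *	int
 *	main(void)
 *	{
 *		each_token(" gzip , deflate;q=0.5 ,,br ");
 *		return (0);	// prints gzip, deflate;q=0.5 and br
 *	}
 *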
*/ static int http_istoken(const char **bp, const char *e, const char *token) { int fl = strlen(token); const char *b; AN(bp); AN(e); AN(token); b = *bp; if (b + fl + 2 <= e && *b == '"' && !memcmp(b + 1, token, fl) && b[fl + 1] == '"') { *bp += fl + 2; return (1); } if (b + fl <= e && http_tok_at(b, token, fl) && (b + fl == e || !vct_istchar(b[fl]))) { *bp += fl; return (1); } return (0); } /*----------------------------------------------------------------------------- * Find a given data element (token) in a header according to RFC2616's #rule * (section 2.1, p15) * * On case sensitivity: * * Section 4.2 (Messages Headers) defines field (header) name as case * insensitive, but the field (header) value/content may be case-sensitive. * * http_GetHdrToken looks up a token in a header value and the rfc does not say * explicitly if tokens are to be compared with or without respect to case. * * But all examples and specific statements regarding tokens follow the rule * that unquoted tokens are to be matched case-insensitively and quoted tokens * case-sensitively. * * The optional pb and pe arguments will point to the token content start and * end+1, white space trimmed on both sides. */ int http_GetHdrToken(const struct http *hp, hdr_t hdr, const char *token, const char **pb, const char **pe) { const char *h, *b, *e; if (pb != NULL) *pb = NULL; if (pe != NULL) *pe = NULL; if (!http_GetHdr(hp, hdr, &h)) return (0); AN(h); while (http_split(&h, NULL, ",", &b, &e)) if (http_istoken(&b, e, token)) break; if (b == NULL) return (0); if (pb != NULL) { for (; vct_islws(*b); b++) continue; if (b == e) { b = NULL; e = NULL; } *pb = b; if (pe != NULL) *pe = e; } return (1); } /*-------------------------------------------------------------------- * Find a given header field's quality value (qvalue). */ double http_GetHdrQ(const struct http *hp, hdr_t hdr, const char *field) { const char *hb, *he, *b, *e; int i; double a, f; i = http_GetHdrToken(hp, hdr, field, &hb, &he); if (!i) return (0.); if (hb == NULL) return (1.); while (http_split(&hb, he, ";", &b, &e)) { if (*b != 'q') continue; for (b++; b < e && vct_issp(*b); b++) continue; if (b == e || *b != '=') continue; break; } if (b == NULL) return (1.); for (b++; b < e && vct_issp(*b); b++) continue; if (b == e || (*b != '.' && !vct_isdigit(*b))) return (0.); a = 0; while (b < e && vct_isdigit(*b)) { a *= 10.; a += *b - '0'; b++; } if (b == e || *b++ != '.') return (a); f = .1; while (b < e && vct_isdigit(*b)) { a += f * (*b - '0'); f *= .1; b++; } return (a); } /*-------------------------------------------------------------------- * Find a given header field's value. 
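 *
 * For illustration only (not from the Varnish sources): the digit-by-
 * digit qvalue parse above, reduced to a stand-alone helper working on
 * a single element such as "gzip;q=0.8".  A missing q defaults to 1.0
 * and garbage yields 0.0, loosely mirroring http_GetHdrQ(); element_q()
 * is a made-up name and the ";q=" matching is cruder than the real
 * tokenizer.
 *
 *	#include <ctype.h>
 *	#include <stdio.h>
 *	#include <string.h>
 *
 *	static double
 *	element_q(const char *elem)
 *	{
 *		const char *p;
 *		double a = 0., f = .1;
 *
 *		p = strstr(elem, ";q=");
 *		if (p == NULL)
 *			return (1.);		// no qvalue: full weight
 *		p += 3;
 *		if (!isdigit((unsigned char)*p) && *p != '.')
 *			return (0.);		// malformed
 *		while (isdigit((unsigned char)*p))
 *			a = a * 10. + (*p++ - '0');	// integer part
 *		if (*p++ != '.')
 *			return (a);
 *		while (isdigit((unsigned char)*p)) {
 *			a += f * (*p++ - '0');		// fractional part
 *			f *= .1;
 *		}
 *		return (a);
 *	}
 *
 *	int
 *	main(void)
 *	{
 *		printf("%.3f %.3f %.3f\n", element_q("gzip;q=0.8"),
 *		    element_q("br"), element_q("x;q=bad"));
 *		return (0);	// prints 0.800 1.000 0.000
 *	}
 *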
*/ int http_GetHdrField(const struct http *hp, hdr_t hdr, const char *field, const char **ptr) { const char *h; int i; if (ptr != NULL) *ptr = NULL; h = NULL; i = http_GetHdrToken(hp, hdr, field, &h, NULL); if (!i) return (i); if (ptr != NULL && h != NULL) { /* Skip whitespace, looking for '=' */ while (*h && vct_issp(*h)) h++; if (*h == '=') { h++; while (*h && vct_issp(*h)) h++; *ptr = h; } } return (i); } /*--------------------------------------------------------------------*/ ssize_t http_GetContentLength(const struct http *hp) { ssize_t cl; const char *b; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); if (!http_GetHdr(hp, H_Content_Length, &b)) return (-1); cl = VNUM_uint(b, NULL, &b); if (cl < 0) return (-2); while (vct_islws(*b)) b++; if (*b != '\0') return (-2); return (cl); } ssize_t http_GetContentRange(const struct http *hp, ssize_t *lo, ssize_t *hi) { ssize_t tmp, cl; const char *b, *t; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); if (lo == NULL) lo = &tmp; if (hi == NULL) hi = &tmp; *lo = *hi = -1; if (!http_GetHdr(hp, H_Content_Range, &b)) return (-1); t = strchr(b, ' '); if (t == NULL) return (-2); // Missing space after range unit if (!http_range_at(b, bytes, t - b)) return (-1); // Unknown range unit, ignore b = t + 1; if (*b == '*') { // Content-Range: bytes */123 *lo = *hi = -1; b++; } else { // Content-Range: bytes 1-2/3 *lo = VNUM_uint(b, NULL, &b); if (*lo < 0) return (-2); if (*b != '-') return (-2); *hi = VNUM_uint(b + 1, NULL, &b); if (*hi < 0) return (-2); } if (*b != '/') return (-2); if (b[1] == '*') { // Content-Range: bytes 1-2/* cl = -1; b += 2; } else { cl = VNUM_uint(b + 1, NULL, &b); if (cl <= 0) return (-2); } while (vct_islws(*b)) b++; if (*b != '\0') return (-2); if (*lo > *hi) return (-2); assert(cl >= -1); if (*lo >= cl || *hi >= cl) return (-2); AN(cl); return (cl); } const char * http_GetRange(const struct http *hp, ssize_t *lo, ssize_t *hi, ssize_t len) { ssize_t tmp_lo, tmp_hi; const char *b, *t; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); if (lo == NULL) lo = &tmp_lo; if (hi == NULL) hi = &tmp_hi; *lo = *hi = -1; if (!http_GetHdr(hp, H_Range, &b)) return (NULL); t = strchr(b, '='); if (t == NULL) return ("Missing '='"); if (!http_range_at(b, bytes, t - b)) return ("Not Bytes"); b = t + 1; *lo = VNUM_uint(b, NULL, &b); if (*lo == -2) return ("Low number too big"); if (*b++ != '-') return ("Missing hyphen"); *hi = VNUM_uint(b, NULL, &b); if (*hi == -2) return ("High number too big"); if (*lo == -1 && *hi == -1) return ("Neither high nor low"); if (*lo == -1 && *hi == 0) return ("No low, high is zero"); if (*hi >= 0 && *hi < *lo) return ("high smaller than low"); while (vct_islws(*b)) b++; if (*b != '\0') return ("Trailing stuff"); assert(*lo >= -1); assert(*hi >= -1); if (len <= 0) return (NULL); // Allow 200 response if (*lo < 0) { assert(*hi > 0); *lo = len - *hi; if (*lo < 0) *lo = 0; *hi = len - 1; } else if (len >= 0 && (*hi >= len || *hi < 0)) { *hi = len - 1; } if (*lo >= len) return ("low range beyond object"); return (NULL); } /*-------------------------------------------------------------------- */ stream_close_t http_DoConnection(struct http *hp, stream_close_t sc_close) { const char *h, *b, *e; stream_close_t retval; unsigned u, v; struct http_hdrflg *f; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); assert(sc_close == SC_REQ_CLOSE || sc_close == SC_RESP_CLOSE); if (hp->protover == 10) retval = SC_REQ_HTTP10; else retval = SC_NULL; http_CollectHdr(hp, H_Connection); if (!http_GetHdr(hp, H_Connection, &h)) return (retval); AN(h); while (http_split(&h, NULL, ",", &b, &e)) { u = 
pdiff(b, e); if (u == 5 && http_hdr_at(b, "close", u)) retval = sc_close; if (u == 10 && http_hdr_at(b, "keep-alive", u)) retval = SC_NULL; /* Refuse removal of well-known-headers if they would pass. */ f = http_hdr_flags(b, e); if (f != NULL && !(f->flag & HTTPH_R_PASS)) return (SC_RX_BAD); for (v = HTTP_HDR_FIRST; v < hp->nhd; v++) { Tcheck(hp->hd[v]); if (hp->hd[v].e < hp->hd[v].b + u + 1) continue; if (hp->hd[v].b[u] != ':') continue; if (!http_hdr_at(b, hp->hd[v].b, u)) continue; hp->hdf[v] |= HDF_FILTER; } } CHECK_OBJ_NOTNULL(retval, STREAM_CLOSE_MAGIC); return (retval); } /*--------------------------------------------------------------------*/ int http_HdrIs(const struct http *hp, hdr_t hdr, const char *val) { const char *p; if (!http_GetHdr(hp, hdr, &p)) return (0); AN(p); if (http_tok_eq(p, val)) return (1); return (0); } /*--------------------------------------------------------------------*/ uint16_t http_GetStatus(const struct http *hp) { CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); return (hp->status); } int http_IsStatus(const struct http *hp, int val) { CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); assert(val >= 100 && val <= 999); return (val == (hp->status % 1000)); } /*-------------------------------------------------------------------- * Setting the status will also set the Reason appropriately */ void http_SetStatus(struct http *to, uint16_t status, const char *reason) { char buf[4]; const char *sstr = NULL; CHECK_OBJ_NOTNULL(to, HTTP_MAGIC); /* * We allow people to use top digits for internal VCL * signalling, but strip them from the ASCII version. */ to->status = status; status %= 1000; assert(status >= 100); if (reason == NULL) reason = http_Status2Reason(status, &sstr); else (void)http_Status2Reason(status, &sstr); if (sstr) { http_SetH(to, HTTP_HDR_STATUS, sstr); } else { bprintf(buf, "%03d", status); http_PutField(to, HTTP_HDR_STATUS, buf); } http_SetH(to, HTTP_HDR_REASON, reason); } /*--------------------------------------------------------------------*/ const char * http_GetMethod(const struct http *hp) { CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); Tcheck(hp->hd[HTTP_HDR_METHOD]); return (hp->hd[HTTP_HDR_METHOD].b); } /*-------------------------------------------------------------------- * Force a particular header field to a particular value */ void http_ForceField(struct http *to, unsigned n, const char *t) { int i; CHECK_OBJ_NOTNULL(to, HTTP_MAGIC); assert(n < HTTP_HDR_FIRST); assert(n == HTTP_HDR_METHOD || n == HTTP_HDR_PROTO); AN(t); /* NB: method names and protocol versions are case-sensitive. 
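 *
 * For illustration only (not from the Varnish sources): the "% 1000"
 * convention used by http_SetStatus()/http_IsStatus() above, where VCL
 * may park extra signalling in the thousands digits (e.g. 1404) while
 * only the low three digits ever reach the wire.  wire_status() and
 * is_status() are made-up names.
 *
 *	#include <assert.h>
 *	#include <stdio.h>
 *
 *	static unsigned
 *	wire_status(unsigned internal)
 *	{
 *		unsigned s = internal % 1000;	// strip signalling digits
 *
 *		assert(s >= 100);		// same sanity check as above
 *		return (s);
 *	}
 *
 *	static int
 *	is_status(unsigned internal, unsigned want)
 *	{
 *		return (internal % 1000 == want);
 *	}
 *
 *	int
 *	main(void)
 *	{
 *		printf("1404 -> %03u on the wire\n", wire_status(1404));
 *		printf("is_status(1404, 404) = %d\n", is_status(1404, 404));
 *		return (0);
 *	}
 *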
*/ if (to->hd[n].b == NULL || strcmp(to->hd[n].b, t)) { i = (HTTP_HDR_UNSET - HTTP_HDR_METHOD); i += to->logtag; /* XXX: this is a dead branch */ if (n >= HTTP_HDR_FIRST) VSLbt(to->vsl, (enum VSL_tag_e)i, to->hd[n]); http_SetH(to, n, t); } } /*--------------------------------------------------------------------*/ void http_PutResponse(struct http *to, const char *proto, uint16_t status, const char *reason) { CHECK_OBJ_NOTNULL(to, HTTP_MAGIC); if (proto != NULL) http_SetH(to, HTTP_HDR_PROTO, proto); http_SetStatus(to, status, reason); } /*-------------------------------------------------------------------- * check if header is filterd by the dynamic marker or the static * definitions in http_headers.h */ static inline int http_isfiltered(const struct http *fm, unsigned u, unsigned how) { const char *e; const struct http_hdrflg *f; if (fm->hdf[u] & HDF_FILTER) return (1); if (u < HTTP_HDR_FIRST) return (0); e = strchr(fm->hd[u].b, ':'); if (e == NULL) return (0); f = http_hdr_flags(fm->hd[u].b, e); return (f != NULL && f->flag & how); } int http_IsFiltered(const struct http *fm, unsigned u, unsigned how) { return (http_isfiltered(fm, u, how)); } /*-------------------------------------------------------------------- * Estimate how much workspace we need to Filter this header according * to 'how'. */ unsigned http_EstimateWS(const struct http *fm, unsigned how) { unsigned u, l; l = 4; CHECK_OBJ_NOTNULL(fm, HTTP_MAGIC); for (u = 0; u < fm->nhd; u++) { if (u == HTTP_HDR_METHOD || u == HTTP_HDR_URL) continue; Tcheck(fm->hd[u]); if (http_isfiltered(fm, u, how)) continue; l += Tlen(fm->hd[u]) + 1L; } return (PRNDUP(l + 1L)); } /*-------------------------------------------------------------------- * Encode http struct as byte string. * * XXX: We could save considerable special-casing below by encoding also * XXX: H__Status, H__Reason and H__Proto into the string, but it would * XXX: add 26-30 bytes to all encoded objects to save a little code. * XXX: It could possibly be a good idea for later HTTP versions. */ void HTTP_Encode(const struct http *fm, uint8_t *p0, unsigned l, unsigned how) { unsigned u, w; uint16_t n; uint8_t *p, *e; AN(p0); AN(l); p = p0; e = p + l; assert(p + 5 <= e); assert(fm->nhd <= fm->shd); n = HTTP_HDR_FIRST - 3; vbe16enc(p + 2, fm->status); p += 4; CHECK_OBJ_NOTNULL(fm, HTTP_MAGIC); for (u = 0; u < fm->nhd; u++) { if (u == HTTP_HDR_METHOD || u == HTTP_HDR_URL) continue; Tcheck(fm->hd[u]); if (http_isfiltered(fm, u, how)) continue; http_VSLH(fm, u); w = Tlen(fm->hd[u]) + 1L; assert(p + w + 1 <= e); memcpy(p, fm->hd[u].b, w); p += w; n++; } *p++ = '\0'; assert(p <= e); vbe16enc(p0, n + 1); } /*-------------------------------------------------------------------- * Decode byte string into http struct */ int HTTP_Decode(struct http *to, const uint8_t *fm) { CHECK_OBJ_NOTNULL(to, HTTP_MAGIC); AN(to->vsl); AN(fm); if (vbe16dec(fm) <= to->shd) { to->status = vbe16dec(fm + 2); fm += 4; for (to->nhd = 0; to->nhd < to->shd; to->nhd++) { if (to->nhd == HTTP_HDR_METHOD || to->nhd == HTTP_HDR_URL) { to->hd[to->nhd].b = NULL; to->hd[to->nhd].e = NULL; continue; } if (*fm == '\0') return (0); to->hd[to->nhd].b = (const void*)fm; fm = (const void*)strchr((const void*)fm, '\0'); to->hd[to->nhd].e = (const void*)fm; fm++; http_VSLH(to, to->nhd); } } VSLb(to->vsl, SLT_Error, "Too many headers to Decode object (%u vs. 
%u)", vbe16dec(fm), to->shd); return (-1); } /*--------------------------------------------------------------------*/ uint16_t HTTP_GetStatusPack(struct worker *wrk, struct objcore *oc) { const char *ptr; ptr = ObjGetAttr(wrk, oc, OA_HEADERS, NULL); AN(ptr); return (vbe16dec(ptr + 2)); } /*--------------------------------------------------------------------*/ /* Get the first packed header */ int HTTP_IterHdrPack(struct worker *wrk, struct objcore *oc, const char **p) { const char *ptr; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AN(p); if (*p == NULL) { ptr = ObjGetAttr(wrk, oc, OA_HEADERS, NULL); AN(ptr); ptr += 4; /* Skip nhd and status */ ptr = strchr(ptr, '\0') + 1; /* Skip :proto: */ ptr = strchr(ptr, '\0') + 1; /* Skip :status: */ ptr = strchr(ptr, '\0') + 1; /* Skip :reason: */ *p = ptr; } else { *p = strchr(*p, '\0') + 1; /* Skip to next header */ } if (**p == '\0') return (0); return (1); } const char * HTTP_GetHdrPack(struct worker *wrk, struct objcore *oc, hdr_t hdr) { const char *ptr; unsigned l; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AN(hdr); l = hdr[0]; assert(l > 0); assert(l == strlen(hdr + 1)); assert(hdr[l] == ':'); hdr++; if (hdr[0] == ':') { /* Special cases */ ptr = ObjGetAttr(wrk, oc, OA_HEADERS, NULL); AN(ptr); ptr += 4; /* Skip nhd and status */ /* XXX: should we also have h2_hdr_eq() ? */ if (!strcmp(hdr, ":proto:")) return (ptr); ptr = strchr(ptr, '\0') + 1; if (!strcmp(hdr, ":status:")) return (ptr); ptr = strchr(ptr, '\0') + 1; if (!strcmp(hdr, ":reason:")) return (ptr); WRONG("Unknown magic packed header"); } HTTP_FOREACH_PACK(wrk, oc, ptr) { if (http_hdr_at(ptr, hdr, l)) { ptr += l; while (vct_islws(*ptr)) ptr++; return (ptr); } } return (NULL); } /*-------------------------------------------------------------------- * Merge any headers in the oc->OA_HEADER into the struct http if they * are not there already. 
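 *
 * For illustration only (not from the Varnish sources): HTTP_Encode()
 * above flattens headers into a byte string -- a 16-bit big-endian
 * count, a 16-bit status, then NUL-terminated header strings with an
 * empty string as terminator -- and HTTP_IterHdrPack() simply hops from
 * NUL to NUL.  The sketch below walks a similarly shaped blob; it is
 * not byte-compatible with the real encoding (no :proto:/:status:/
 * :reason: slots) and dump_pack() is a made-up name.
 *
 *	#include <stdint.h>
 *	#include <stdio.h>
 *	#include <string.h>
 *
 *	static void
 *	dump_pack(const uint8_t *p)
 *	{
 *		unsigned nhd = (p[0] << 8) | p[1];	// vbe16dec() by hand
 *		unsigned status = (p[2] << 8) | p[3];
 *
 *		p += 4;
 *		printf("%u headers, status %u\n", nhd, status);
 *		while (*p != '\0') {			// empty string ends it
 *			printf("  %s\n", (const char *)p);
 *			p += strlen((const char *)p) + 1;	// hop to next
 *		}
 *	}
 *
 *	int
 *	main(void)
 *	{
 *		static const uint8_t blob[] = {
 *		    0x00, 0x02, 0x00, 0xc8,		// 2 headers, status 200
 *		    'C','o','n','t','e','n','t','-','T','y','p','e',':',' ',
 *		    't','e','x','t','/','h','t','m','l', 0,
 *		    'E','T','a','g',':',' ','"','x','y','z','"', 0,
 *		    0 };				// terminator
 *
 *		dump_pack(blob);
 *		return (0);
 *	}
 *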
*/ void HTTP_Merge(struct worker *wrk, struct objcore *oc, struct http *to) { const char *ptr; unsigned u; const char *p; unsigned nhd_before_merge; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(to, HTTP_MAGIC); ptr = ObjGetAttr(wrk, oc, OA_HEADERS, NULL); AN(ptr); to->status = vbe16dec(ptr + 2); ptr += 4; for (u = 0; u < HTTP_HDR_FIRST; u++) { if (u == HTTP_HDR_METHOD || u == HTTP_HDR_URL) continue; http_SetH(to, u, ptr); ptr = strchr(ptr, '\0') + 1; } nhd_before_merge = to->nhd; while (*ptr != '\0') { p = strchr(ptr, ':'); AN(p); u = http_findhdr(to, p - ptr, ptr); if (u == 0 || u >= nhd_before_merge) http_SetHeader(to, ptr); ptr = strchr(ptr, '\0') + 1; } } /*--------------------------------------------------------------------*/ static void http_linkh(const struct http *to, const struct http *fm, unsigned n) { assert(n < HTTP_HDR_FIRST); Tcheck(fm->hd[n]); to->hd[n] = fm->hd[n]; to->hdf[n] = fm->hdf[n]; http_VSLH(to, n); } /*--------------------------------------------------------------------*/ void http_FilterReq(struct http *to, const struct http *fm, unsigned how) { unsigned u; CHECK_OBJ_NOTNULL(to, HTTP_MAGIC); CHECK_OBJ_NOTNULL(fm, HTTP_MAGIC); http_linkh(to, fm, HTTP_HDR_METHOD); http_linkh(to, fm, HTTP_HDR_URL); http_linkh(to, fm, HTTP_HDR_PROTO); to->protover = fm->protover; to->status = fm->status; to->nhd = HTTP_HDR_FIRST; for (u = HTTP_HDR_FIRST; u < fm->nhd; u++) { Tcheck(fm->hd[u]); if (http_isfiltered(fm, u, how)) continue; assert (to->nhd < to->shd); to->hd[to->nhd] = fm->hd[u]; to->hdf[to->nhd] = 0; http_VSLH(to, to->nhd); to->nhd++; } } /*-------------------------------------------------------------------- * This function copies any header fields which reference foreign * storage into our own WS. */ void http_CopyHome(const struct http *hp) { unsigned u, l; const char *p; for (u = 0; u < hp->nhd; u++) { if (hp->hd[u].b == NULL) { assert(u < HTTP_HDR_FIRST); continue; } l = Tlen(hp->hd[u]); if (WS_Allocated(hp->ws, hp->hd[u].b, l)) continue; p = WS_Copy(hp->ws, hp->hd[u].b, l + 1L); if (p == NULL) { http_fail(hp); VSLbs(hp->vsl, SLT_LostHeader, TOSTRAND(hp->hd[u].b)); return; } hp->hd[u].b = p; hp->hd[u].e = p + l; } } /*--------------------------------------------------------------------*/ void http_SetHeader(struct http *to, const char *header) { CHECK_OBJ_NOTNULL(to, HTTP_MAGIC); if (to->nhd >= to->shd) { VSLbs(to->vsl, SLT_LostHeader, TOSTRAND(header)); http_fail(to); return; } http_SetH(to, to->nhd++, header); } /*--------------------------------------------------------------------*/ void http_ForceHeader(struct http *to, hdr_t hdr, const char *val) { CHECK_OBJ_NOTNULL(to, HTTP_MAGIC); if (http_HdrIs(to, hdr, val)) return; http_Unset(to, hdr); http_PrintfHeader(to, "%s %s", hdr + 1, val); } void http_AppendHeader(struct http *to, hdr_t hdr, const char *val) { const char *old; http_CollectHdr(to, hdr); if (http_GetHdr(to, hdr, &old)) { http_Unset(to, hdr); http_PrintfHeader(to, "%s %s, %s", hdr + 1, old, val); } else { http_PrintfHeader(to, "%s %s", hdr + 1, val); } } void http_PrintfHeader(struct http *to, const char *fmt, ...) 
{ va_list ap, ap2; struct vsb vsb[1]; size_t sz; char *p; CHECK_OBJ_NOTNULL(to, HTTP_MAGIC); va_start(ap, fmt); va_copy(ap2, ap); WS_VSB_new(vsb, to->ws); VSB_vprintf(vsb, fmt, ap); p = WS_VSB_finish(vsb, to->ws, &sz); if (p == NULL || to->nhd >= to->shd) { http_fail(to); VSLbv(to->vsl, SLT_LostHeader, fmt, ap2); } else { http_SetH(to, to->nhd++, p); } va_end(ap); va_end(ap2); } void http_TimeHeader(struct http *to, const char *fmt, vtim_real now) { char *p; CHECK_OBJ_NOTNULL(to, HTTP_MAGIC); if (to->nhd >= to->shd) { VSLbs(to->vsl, SLT_LostHeader, TOSTRAND(fmt)); http_fail(to); return; } p = WS_Alloc(to->ws, strlen(fmt) + VTIM_FORMAT_SIZE); if (p == NULL) { http_fail(to); VSLbs(to->vsl, SLT_LostHeader, TOSTRAND(fmt)); return; } strcpy(p, fmt); VTIM_format(now, strchr(p, '\0')); http_SetH(to, to->nhd++, p); } const char * http_ViaHeader(void) { return (via_hdr); } /*--------------------------------------------------------------------*/ void http_Unset(struct http *hp, hdr_t hdr) { uint16_t u, v; for (v = u = HTTP_HDR_FIRST; u < hp->nhd; u++) { Tcheck(hp->hd[u]); if (http_IsHdr(&hp->hd[u], hdr)) { http_VSLH_del(hp, u); continue; } if (v != u) { memcpy(&hp->hd[v], &hp->hd[u], sizeof *hp->hd); memcpy(&hp->hdf[v], &hp->hdf[u], sizeof *hp->hdf); } v++; } hp->nhd = v; } varnish-7.5.0/bin/varnishd/cache/cache_lck.c000066400000000000000000000200301457605730600206760ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * The geniuses who came up with pthreads did not think operations like * pthread_assert_mutex_held() were important enough to include them in * the API. * * Build our own locks on top of pthread mutexes and hope that the next * civilization is better at such crucial details than this one. 
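 *
 * For illustration only (not from the Varnish sources): the smallest
 * possible version of the wrapper described above -- a pthread mutex
 * bundled with a 'held' flag and an 'owner' thread so that code can
 * assert "I am holding this lock", which POSIX itself never offered.
 * struct tracked_lock and the tl_*() names are made up; the real
 * struct ilck additionally carries witness and statistics hooks.
 *
 *	#include <assert.h>
 *	#include <pthread.h>
 *	#include <stdio.h>
 *
 *	struct tracked_lock {
 *		pthread_mutex_t	mtx;
 *		int		held;
 *		pthread_t	owner;
 *	};
 *
 *	static void
 *	tl_lock(struct tracked_lock *tl)
 *	{
 *		int r = pthread_mutex_lock(&tl->mtx);
 *
 *		assert(r == 0);
 *		(void)r;
 *		assert(!tl->held);		// errorcheck-style sanity
 *		tl->held = 1;
 *		tl->owner = pthread_self();
 *	}
 *
 *	static void
 *	tl_assert_held(const struct tracked_lock *tl)
 *	{
 *		assert(tl->held && pthread_equal(tl->owner, pthread_self()));
 *	}
 *
 *	static void
 *	tl_unlock(struct tracked_lock *tl)
 *	{
 *		int r;
 *
 *		tl_assert_held(tl);
 *		tl->held = 0;
 *		r = pthread_mutex_unlock(&tl->mtx);
 *		assert(r == 0);
 *		(void)r;
 *	}
 *
 *	int
 *	main(void)
 *	{
 *		struct tracked_lock tl = { PTHREAD_MUTEX_INITIALIZER, 0 };
 *
 *		tl_lock(&tl);
 *		tl_assert_held(&tl);	// the assertion POSIX never gave us
 *		tl_unlock(&tl);
 *		printf("ok\n");
 *		return (0);
 *	}
 *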
*/ #include "config.h" #include "cache_varnishd.h" #include #include #include "vtim.h" #include "VSC_lck.h" struct ilck { unsigned magic; #define ILCK_MAGIC 0x7b86c8a5 int held; pthread_mutex_t mtx; pthread_t owner; const char *w; struct VSC_lck *stat; }; /*--------------------------------------------------------------------*/ static void Lck_Witness_Lock(const struct ilck *il, const char *p, int l, const char *attempt) { char *q, t[10]; //lint -e429 int emit; AN(p); q = pthread_getspecific(witness_key); if (q == NULL) { q = calloc(1, 1024); AN(q); PTOK(pthread_setspecific(witness_key, q)); } emit = *q != '\0'; strcat(q, " "); strcat(q, il->w); strcat(q, attempt); strcat(q, ","); strcat(q, p); strcat(q, ","); bprintf(t, "%d", l); strcat(q, t); if (emit) VSL(SLT_Witness, NO_VXID, "%s", q); } static void Lck_Witness_Unlock(const struct ilck *il) { char *q, *r; q = pthread_getspecific(witness_key); if (q == NULL) return; r = strrchr(q, ' '); if (r == NULL) r = q; else *r++ = '\0'; if (memcmp(r, il->w, strlen(il->w))) VSL(SLT_Witness, NO_VXID, "Unlock %s @ %s <%s>", il->w, r, q); else *r = '\0'; } /*--------------------------------------------------------------------*/ void v_matchproto_() Lck__Lock(struct lock *lck, const char *p, int l) { struct ilck *ilck; int r = EINVAL; AN(lck); CAST_OBJ_NOTNULL(ilck, lck->priv, ILCK_MAGIC); if (DO_DEBUG(DBG_WITNESS)) Lck_Witness_Lock(ilck, p, l, ""); if (DO_DEBUG(DBG_LCK)) { r = pthread_mutex_trylock(&ilck->mtx); assert(r == 0 || r == EBUSY); } if (r) PTOK(pthread_mutex_lock(&ilck->mtx)); AZ(ilck->held); if (r == EBUSY) ilck->stat->dbg_busy++; ilck->stat->locks++; ilck->owner = pthread_self(); ilck->held = 1; } void v_matchproto_() Lck__Unlock(struct lock *lck, const char *p, int l) { struct ilck *ilck; (void)p; (void)l; AN(lck); CAST_OBJ_NOTNULL(ilck, lck->priv, ILCK_MAGIC); assert(pthread_equal(ilck->owner, pthread_self())); AN(ilck->held); ilck->held = 0; /* * #ifdef POSIX_STUPIDITY: * The pthread_t type has no defined assignment or comparison * operators, this is why pthread_equal() is necessary. * Unfortunately POSIX forgot to define a NULL value for pthread_t * so you can never unset a pthread_t variable. * We hack it and fill it with zero bits, hoping for sane * implementations of pthread. 
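 *
 * For illustration only (not from the Varnish sources): a toy version
 * of the per-thread "witness" trail maintained above, where every lock
 * acquisition appends "name,file,line" to a thread-local string and
 * every release pops it again, so lock ordering can be reconstructed
 * from the log.  The buffer size, trail_key and the witness_*() names
 * are made up; the real code also distinguishes trylock attempts.
 *
 *	#include <pthread.h>
 *	#include <stdio.h>
 *	#include <stdlib.h>
 *	#include <string.h>
 *
 *	static pthread_key_t trail_key;
 *
 *	static char *
 *	trail(void)
 *	{
 *		char *q = pthread_getspecific(trail_key);
 *
 *		if (q == NULL) {		// first use in this thread
 *			q = calloc(1, 1024);
 *			pthread_setspecific(trail_key, q);
 *		}
 *		return (q);
 *	}
 *
 *	static void
 *	witness_push(const char *lck, const char *file, int line)
 *	{
 *		char *q = trail();
 *		size_t l = strlen(q);
 *
 *		snprintf(q + l, 1024 - l, " %s,%s,%d", lck, file, line);
 *	}
 *
 *	static void
 *	witness_pop(void)
 *	{
 *		char *q = trail(), *r = strrchr(q, ' ');
 *
 *		if (r != NULL)
 *			*r = '\0';		// drop the innermost entry
 *	}
 *
 *	int
 *	main(void)
 *	{
 *		pthread_key_create(&trail_key, free);
 *		witness_push("objhead", __FILE__, __LINE__);
 *		witness_push("lru", __FILE__, __LINE__);
 *		printf("held:%s\n", trail());
 *		witness_pop();
 *		printf("held:%s\n", trail());
 *		return (0);
 *	}
 *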
* #endif */ #ifdef PTHREAD_NULL ilck->owner = PTHREAD_NULL; #else memset(&ilck->owner, 0, sizeof ilck->owner); #endif PTOK(pthread_mutex_unlock(&ilck->mtx)); if (DO_DEBUG(DBG_WITNESS)) Lck_Witness_Unlock(ilck); } int v_matchproto_() Lck__Trylock(struct lock *lck, const char *p, int l) { struct ilck *ilck; int r; AN(lck); CAST_OBJ_NOTNULL(ilck, lck->priv, ILCK_MAGIC); if (DO_DEBUG(DBG_WITNESS)) Lck_Witness_Lock(ilck, p, l, "?"); r = pthread_mutex_trylock(&ilck->mtx); assert(r == 0 || r == EBUSY); if (r == 0) { AZ(ilck->held); ilck->held = 1; ilck->stat->locks++; ilck->owner = pthread_self(); } else if (DO_DEBUG(DBG_LCK)) ilck->stat->dbg_try_fail++; return (r); } int Lck__Held(const struct lock *lck) { struct ilck *ilck; AN(lck); CAST_OBJ_NOTNULL(ilck, lck->priv, ILCK_MAGIC); return (ilck->held); } int Lck__Owned(const struct lock *lck) { struct ilck *ilck; AN(lck); CAST_OBJ_NOTNULL(ilck, lck->priv, ILCK_MAGIC); AN(ilck->held); return (pthread_equal(ilck->owner, pthread_self())); } int v_matchproto_() Lck_CondWait(pthread_cond_t *cond, struct lock *lck) { return (Lck_CondWaitUntil(cond, lck, INFINITY)); } int v_matchproto_() Lck_CondWaitTimeout(pthread_cond_t *cond, struct lock *lck, vtim_dur timeout) { if (isinf(timeout)) return (Lck_CondWaitUntil(cond, lck, INFINITY)); assert(timeout >= 0); assert(timeout <= 3600); timeout = vmax(timeout, 1e-3); return (Lck_CondWaitUntil(cond, lck, VTIM_real() + timeout)); } int v_matchproto_() Lck_CondWaitUntil(pthread_cond_t *cond, struct lock *lck, vtim_real when) { struct ilck *ilck; struct timespec ts; AN(lck); CAST_OBJ_NOTNULL(ilck, lck->priv, ILCK_MAGIC); AN(ilck->held); assert(pthread_equal(ilck->owner, pthread_self())); ilck->held = 0; if (isinf(when)) { errno = pthread_cond_wait(cond, &ilck->mtx); AZ(errno); } else { assert(when > 1e9); ts = VTIM_timespec(when); assert(ts.tv_nsec >= 0 && ts.tv_nsec <= 999999999); errno = pthread_cond_timedwait(cond, &ilck->mtx, &ts); #if defined (__APPLE__) /* * I hate woo-doo programming in all it's forms and all it's * manifestations, but for reasons I utterly fail to isolate, * OSX sometimes throws an EINVAL. * * I have tried very hard to determine if any of the three * arguments are in fact invalid, and found nothing which * even hints that it might be the case. * * So far I have yet to see a failure if the exact same * call is repeated after a very short sleep. * * Calling pthread_yield_np() instead of sleaping /mostly/ * works as well, but still fails sometimes. * * Env: * Darwin Kernel Version 20.5.0: * Sat May 8 05:10:31 PDT 2021; * root:xnu-7195.121.3~9/RELEASE_ARM64_T8101 arm64 * * 20220329 /phk */ if (errno == EINVAL) { usleep(100); errno = pthread_cond_timedwait(cond, &ilck->mtx, &ts); } #endif /* We should never observe EINTR, but we have in the past. For * example when sanitizers are enabled. 
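 *
 * For illustration only (not from the Varnish sources): the shape of
 * Lck_CondWaitUntil() above, boiled down to a plain mutex -- convert an
 * absolute wall-clock deadline to a struct timespec and loop on
 * pthread_cond_timedwait() until a predicate holds or ETIMEDOUT comes
 * back.  The 'ready' predicate and wait_until() are made up, and none
 * of the macOS EINVAL/EINTR workarounds are reproduced.
 *
 *	#include <errno.h>
 *	#include <pthread.h>
 *	#include <stdio.h>
 *	#include <time.h>
 *
 *	static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
 *	static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
 *	static int ready;			// hypothetical predicate
 *
 *	static int
 *	wait_until(double when)			// seconds since the epoch
 *	{
 *		struct timespec ts;
 *		int r = 0;
 *
 *		ts.tv_sec = (time_t)when;
 *		ts.tv_nsec = (long)((when - (double)ts.tv_sec) * 1e9);
 *		pthread_mutex_lock(&mtx);
 *		while (!ready && r == 0)	// tolerate spurious wakeups
 *			r = pthread_cond_timedwait(&cond, &mtx, &ts);
 *		pthread_mutex_unlock(&mtx);
 *		return (r);			// 0 or ETIMEDOUT
 *	}
 *
 *	int
 *	main(void)
 *	{
 *		double now = (double)time(NULL);
 *
 *		// Nothing signals 'cond' here, so this returns ~0.2s later.
 *		printf("%s\n", wait_until(now + 0.2) == ETIMEDOUT ?
 *		    "ETIMEDOUT" : "woken");
 *		return (0);
 *	}
 *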
*/ assert(errno == 0 || errno == ETIMEDOUT || errno == EINTR); } AZ(ilck->held); ilck->held = 1; ilck->owner = pthread_self(); return (errno); } void Lck__New(struct lock *lck, struct VSC_lck *st, const char *w) { struct ilck *ilck; AN(st); AN(w); AN(lck); AZ(lck->priv); ALLOC_OBJ(ilck, ILCK_MAGIC); AN(ilck); ilck->w = w; ilck->stat = st; ilck->stat->creat++; PTOK(pthread_mutex_init(&ilck->mtx, &mtxattr_errorcheck)); lck->priv = ilck; } void Lck_Delete(struct lock *lck) { struct ilck *ilck; AN(lck); TAKE_OBJ_NOTNULL(ilck, &lck->priv, ILCK_MAGIC); ilck->stat->destroy++; PTOK(pthread_mutex_destroy(&ilck->mtx)); FREE_OBJ(ilck); } struct VSC_lck * Lck_CreateClass(struct vsc_seg **sg, const char *name) { return (VSC_lck_New(NULL, sg, name)); } void Lck_DestroyClass(struct vsc_seg **sg) { VSC_lck_Destroy(sg); } #define LOCK(nam) struct VSC_lck *lck_##nam; #include "tbl/locks.h" void LCK_Init(void) { #define LOCK(nam) lck_##nam = Lck_CreateClass(NULL, #nam); #include "tbl/locks.h" } varnish-7.5.0/bin/varnishd/cache/cache_main.c000066400000000000000000000250001457605730600210530ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include "cache_varnishd.h" #include #include #include #ifdef HAVE_SIGALTSTACK # include #endif #ifdef HAVE_PTHREAD_NP_H # include #endif #include "common/heritage.h" #include "vcli_serve.h" #include "vnum.h" #include "vtim.h" #include "vrnd.h" #include "hash/hash_slinger.h" volatile struct params *cache_param; static pthread_mutex_t cache_vrnd_mtx; static vtim_dur shutdown_delay = 0; pthread_mutexattr_t mtxattr_errorcheck; static void cache_vrnd_lock(void) { PTOK(pthread_mutex_lock(&cache_vrnd_mtx)); } static void cache_vrnd_unlock(void) { PTOK(pthread_mutex_unlock(&cache_vrnd_mtx)); } /*-------------------------------------------------------------------- * Per thread storage for the session currently being processed by * the thread. This is used for panic messages. 
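 *
 * For illustration only (not from the Varnish sources): the pattern
 * behind THR_SetRequest()/THR_GetRequest() below -- one pthread key per
 * kind of object, set on entry and read back wherever needed, e.g. by
 * a panic handler that wants to know what the crashing thread was
 * working on.  'struct job', job_key and the thr_*_job() names are
 * made up.
 *
 *	#include <pthread.h>
 *	#include <stdio.h>
 *
 *	struct job { const char *url; };
 *
 *	static pthread_key_t job_key;
 *
 *	static void
 *	thr_set_job(const struct job *j)
 *	{
 *		pthread_setspecific(job_key, j);
 *	}
 *
 *	static const struct job *
 *	thr_get_job(void)
 *	{
 *		return (pthread_getspecific(job_key));
 *	}
 *
 *	static void *
 *	worker(void *priv)
 *	{
 *		thr_set_job(priv);
 *		// ...deep in some call chain, e.g. a panic handler...
 *		printf("thread is handling %s\n", thr_get_job()->url);
 *		return (NULL);
 *	}
 *
 *	int
 *	main(void)
 *	{
 *		struct job j = { "/index.html" };
 *		pthread_t thr;
 *
 *		pthread_key_create(&job_key, NULL);	// once, before use
 *		pthread_create(&thr, NULL, worker, &j);
 *		pthread_join(thr, NULL);
 *		return (0);
 *	}
 *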
*/ static pthread_key_t req_key; static pthread_key_t bo_key; static pthread_key_t wrk_key; pthread_key_t witness_key; pthread_key_t panic_key; void THR_SetBusyobj(const struct busyobj *bo) { PTOK(pthread_setspecific(bo_key, bo)); } struct busyobj * THR_GetBusyobj(void) { return (pthread_getspecific(bo_key)); } void THR_SetRequest(const struct req *req) { PTOK(pthread_setspecific(req_key, req)); } struct req * THR_GetRequest(void) { return (pthread_getspecific(req_key)); } void THR_SetWorker(const struct worker *wrk) { PTOK(pthread_setspecific(wrk_key, wrk)); } struct worker * THR_GetWorker(void) { return (pthread_getspecific(wrk_key)); } /*-------------------------------------------------------------------- * Name threads if our pthreads implementation supports it. */ static pthread_key_t name_key; void THR_SetName(const char *name) { PTOK(pthread_setspecific(name_key, name)); #if defined(HAVE_PTHREAD_SET_NAME_NP) pthread_set_name_np(pthread_self(), name); #elif defined(HAVE_PTHREAD_SETNAME_NP) #if defined(__APPLE__) (void)pthread_setname_np(name); #elif defined(__NetBSD__) (void)pthread_setname_np(pthread_self(), "%s", (char *)(uintptr_t)name); #else (void)pthread_setname_np(pthread_self(), name); #endif #endif } const char * THR_GetName(void) { return (pthread_getspecific(name_key)); } /*-------------------------------------------------------------------- * Generic setup all our threads should call */ #ifdef HAVE_SIGALTSTACK static stack_t altstack; #endif void THR_Init(void) { #ifdef HAVE_SIGALTSTACK if (altstack.ss_sp != NULL) AZ(sigaltstack(&altstack, NULL)); #endif } /*-------------------------------------------------------------------- * VXID's are unique transaction numbers allocated with a minimum of * locking overhead via pools in the worker threads. * * VXID's are mostly for use in VSL and for that reason we never return * zero vxid, in order to reserve that for "unassociated" VSL records. */ static uint64_t vxid_base = 1; static uint32_t vxid_chunk = 32768; static struct lock vxid_lock; vxid_t VXID_Get(const struct worker *wrk, uint64_t mask) { struct vxid_pool *v; vxid_t retval; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(wrk->wpriv, WORKER_PRIV_MAGIC); v = wrk->wpriv->vxid_pool; AZ(mask & VSL_IDENTMASK); while (v->count == 0 || v->next >= VRT_INTEGER_MAX) { Lck_Lock(&vxid_lock); v->next = vxid_base; v->count = vxid_chunk; vxid_base += v->count; if (vxid_base >= VRT_INTEGER_MAX) vxid_base = 1; Lck_Unlock(&vxid_lock); } v->count--; assert(v->next > 0); assert(v->next < VSL_CLIENTMARKER); retval.vxid = v->next | mask; v->next++; return (retval); } /*-------------------------------------------------------------------- * Debugging aids */ /* * Dumb down the VXID allocation to make it predictable for * varnishtest cases */ static void v_matchproto_(cli_func_t) cli_debug_xid(struct cli *cli, const char * const *av, void *priv) { (void)priv; if (av[2] != NULL) { vxid_base = strtoull(av[2], NULL, 0); vxid_chunk = 0; if (av[3] != NULL) vxid_chunk = strtoul(av[3], NULL, 0); if (vxid_chunk == 0) vxid_chunk = 1; } VCLI_Out(cli, "XID is %ju chunk %u", (uintmax_t)vxid_base, vxid_chunk); } /* * Artificially slow down the process shutdown. */ static void v_matchproto_(cli_func_t) cli_debug_shutdown_delay(struct cli *cli, const char * const *av, void *priv) { (void)cli; (void)priv; shutdown_delay = VNUM_duration(av[2]); } /* * Default to seed=1, this is the only seed value POSIXl guarantees will * result in a reproducible random number sequence. 
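 *
 * For illustration only (not from the Varnish sources): the chunked ID
 * allocation used by VXID_Get() earlier in this file, where each worker
 * refills a private pool from the global counter only once per chunk so
 * the lock is rarely touched, and 0 stays reserved for unassociated
 * records.  CHUNK, struct id_pool and id_get() are made-up names and
 * the wrap-around handling is left out.
 *
 *	#include <pthread.h>
 *	#include <stdint.h>
 *	#include <stdio.h>
 *
 *	#define CHUNK 4
 *
 *	struct id_pool { uint64_t next; unsigned count; };
 *
 *	static pthread_mutex_t id_mtx = PTHREAD_MUTEX_INITIALIZER;
 *	static uint64_t id_base = 1;		// 0 is reserved
 *
 *	static uint64_t
 *	id_get(struct id_pool *p)
 *	{
 *		if (p->count == 0) {		// refill under the global lock
 *			pthread_mutex_lock(&id_mtx);
 *			p->next = id_base;
 *			p->count = CHUNK;
 *			id_base += CHUNK;
 *			pthread_mutex_unlock(&id_mtx);
 *		}
 *		p->count--;
 *		return (p->next++);
 *	}
 *
 *	int
 *	main(void)
 *	{
 *		struct id_pool a = { 0, 0 }, b = { 0, 0 };
 *		int i;
 *
 *		for (i = 0; i < 3; i++)		// a gets 1..3, b gets 5..7
 *			printf("a=%ju b=%ju\n",
 *			    (uintmax_t)id_get(&a), (uintmax_t)id_get(&b));
 *		return (0);
 *	}
 *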
*/ static void v_matchproto_(cli_func_t) cli_debug_srandom(struct cli *cli, const char * const *av, void *priv) { unsigned seed = 1; (void)priv; (void)cli; if (av[2] != NULL) seed = strtoul(av[2], NULL, 0); VRND_SeedTestable(seed); } static struct cli_proto debug_cmds[] = { { CLICMD_DEBUG_XID, "d", cli_debug_xid }, { CLICMD_DEBUG_SHUTDOWN_DELAY, "d", cli_debug_shutdown_delay }, { CLICMD_DEBUG_SRANDOM, "d", cli_debug_srandom }, { NULL } }; /*-------------------------------------------------------------------- * XXX: Think more about which order we start things */ #if defined(__FreeBSD__) && __FreeBSD_version >= 1000000 static void child_malloc_fail(void *p, const char *s) { VSL(SLT_Error, NO_VXID, "MALLOC ERROR: %s (%p)", s, p); fprintf(stderr, "MALLOC ERROR: %s (%p)\n", s, p); WRONG("Malloc Error"); } #endif /*===================================================================== * signal handler for child process */ static void v_noreturn_ v_matchproto_() child_signal_handler(int s, siginfo_t *si, void *c) { char buf[1024]; struct sigaction sa; struct req *req; const char *a, *p, *info = NULL; (void)c; /* Don't come back */ memset(&sa, 0, sizeof sa); sa.sa_handler = SIG_DFL; (void)sigaction(SIGSEGV, &sa, NULL); (void)sigaction(SIGBUS, &sa, NULL); (void)sigaction(SIGABRT, &sa, NULL); while (s == SIGSEGV || s == SIGBUS) { req = THR_GetRequest(); if (req == NULL || req->wrk == NULL) break; a = TRUST_ME(si->si_addr); p = TRUST_ME(req->wrk); p += sizeof *req->wrk; // rough safe estimate - top of stack if (a > p + cache_param->wthread_stacksize) break; if (a < p - 2 * cache_param->wthread_stacksize) break; info = "\nTHIS PROBABLY IS A STACK OVERFLOW - " "check thread_pool_stack parameter"; break; } bprintf(buf, "Signal %d (%s) received at %p si_code %d%s", s, strsignal(s), si->si_addr, si->si_code, info ? 
info : ""); VAS_Fail(__func__, __FILE__, __LINE__, buf, VAS_WRONG); } /*===================================================================== * Magic for panicking properly on signals */ static void child_sigmagic(size_t altstksz) { struct sigaction sa; memset(&sa, 0, sizeof sa); #ifdef HAVE_SIGALTSTACK size_t sz = vmax_t(size_t, SIGSTKSZ + 4096, altstksz); altstack.ss_sp = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); AN(altstack.ss_sp != MAP_FAILED); AN(altstack.ss_sp); altstack.ss_size = sz; altstack.ss_flags = 0; sa.sa_flags |= SA_ONSTACK; #else (void)altstksz; #endif THR_Init(); sa.sa_sigaction = child_signal_handler; sa.sa_flags |= SA_SIGINFO; (void)sigaction(SIGBUS, &sa, NULL); (void)sigaction(SIGABRT, &sa, NULL); (void)sigaction(SIGSEGV, &sa, NULL); } static void cli_quit(int sig) { if (!IS_CLI()) { PTOK(pthread_kill(cli_thread, sig)); return; } WRONG("It's time for the big quit"); } /*===================================================================== * Run the child process */ void child_main(int sigmagic, size_t altstksz) { if (sigmagic) child_sigmagic(altstksz); (void)signal(SIGINT, SIG_DFL); (void)signal(SIGTERM, SIG_DFL); (void)signal(SIGQUIT, cli_quit); #if defined(__FreeBSD__) && __FreeBSD_version >= 1000000 malloc_message = child_malloc_fail; #endif /* Before anything uses pthreads in anger */ PTOK(pthread_mutexattr_init(&mtxattr_errorcheck)); PTOK(pthread_mutexattr_settype(&mtxattr_errorcheck, PTHREAD_MUTEX_ERRORCHECK)); cache_param = heritage.param; PTOK(pthread_key_create(&req_key, NULL)); PTOK(pthread_key_create(&bo_key, NULL)); PTOK(pthread_key_create(&wrk_key, NULL)); PTOK(pthread_key_create(&witness_key, free)); PTOK(pthread_key_create(&name_key, NULL)); PTOK(pthread_key_create(&panic_key, NULL)); THR_SetName("cache-main"); PTOK(pthread_mutex_init(&cache_vrnd_mtx, &mtxattr_errorcheck)); VRND_Lock = cache_vrnd_lock; VRND_Unlock = cache_vrnd_unlock; VSM_Init(); /* First, LCK needs it. */ LCK_Init(); /* Second, locking */ Lck_New(&vxid_lock, lck_vxid); CLI_Init(); PAN_Init(); VFP_Init(); ObjInit(); WRK_Init(); VCL_Init(); VCL_VRT_Init(); HTTP_Init(); VBO_Init(); VCP_Init(); VBP_Init(); VDI_Init(); VBE_InitCfg(); Pool_Init(); V1P_Init(); V2D_Init(); EXP_Init(); HSH_Init(heritage.hash); BAN_Init(); VCA_Init(); STV_open(); VMOD_Init(); BAN_Compile(); VRND_SeedAll(); CLI_AddFuncs(debug_cmds); #if WITH_PERSISTENT_STORAGE /* Wait for persistent storage to load if asked to */ if (FEATURE(FEATURE_WAIT_SILO)) SMP_Ready(); #endif CLI_Run(); if (shutdown_delay > 0) VTIM_sleep(shutdown_delay); VCA_Shutdown(); BAN_Shutdown(); EXP_Shutdown(); STV_close(); printf("Child dies\n"); } varnish-7.5.0/bin/varnishd/cache/cache_mempool.c000066400000000000000000000206051457605730600216050ustar00rootroot00000000000000/*- * Copyright (c) 2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Generic memory pool */ #include "config.h" #include "cache_varnishd.h" #include #include #include "vtim.h" #include "VSC_mempool.h" struct memitem { unsigned magic; #define MEMITEM_MAGIC 0x42e55401 unsigned size; VTAILQ_ENTRY(memitem) list; vtim_real touched; // XXX -> mono? }; VTAILQ_HEAD(memhead_s, memitem); struct mempool { unsigned magic; #define MEMPOOL_MAGIC 0x37a75a8d char name[12]; struct memhead_s list; struct memhead_s surplus; struct lock mtx; volatile struct poolparam *param; volatile unsigned *cur_size; uint64_t live; struct vsc_seg *vsc_seg; struct VSC_mempool *vsc; unsigned n_pool; pthread_t thread; vtim_real t_now; // XXX -> mono? int self_destruct; }; /*--------------------------------------------------------------------- */ static struct memitem * mpl_alloc(const struct mempool *mpl) { unsigned tsz; struct memitem *mi; CHECK_OBJ_NOTNULL(mpl, MEMPOOL_MAGIC); tsz = *mpl->cur_size; mi = calloc(1, tsz); AN(mi); mi->magic = MEMITEM_MAGIC; mi->size = tsz; mpl->vsc->sz_wanted = tsz; mpl->vsc->sz_actual = tsz - sizeof *mi; return (mi); } /*--------------------------------------------------------------------- * Pool-guard * Attempt to keep number of free items in pool inside bounds with * minimum locking activity, and keep an eye on items at the tail * of the list not getting too old. 
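 *
 * For illustration only (not from the Varnish sources): one
 * housekeeping pass in the spirit of mpl_guard() below -- top the free
 * list up when it drops under the minimum, trim it when it exceeds the
 * maximum, retire the tail item once it has idled past max_age, and
 * otherwise just back off.  Everything here (struct pool_cfg,
 * guard_step(), the counters) is made up; the real guard also resizes
 * items, drains a surplus list and only ever uses a try-lock.
 *
 *	#include <stdio.h>
 *	#include <time.h>
 *
 *	struct pool_cfg { unsigned min, max; double max_age; };
 *	struct pool_state { unsigned nfree; double oldest_touched; };
 *
 *	static const char *
 *	guard_step(const struct pool_cfg *cfg, struct pool_state *st,
 *	    double now)
 *	{
 *		if (st->nfree < cfg->min) {
 *			st->nfree++;		// preallocate one item
 *			return ("alloc");
 *		}
 *		if (st->nfree > cfg->max) {
 *			st->nfree--;		// free one surplus item
 *			return ("trim");
 *		}
 *		if (st->nfree > cfg->min &&
 *		    st->oldest_touched + cfg->max_age < now) {
 *			st->nfree--;		// tail item got too old
 *			return ("expire");
 *		}
 *		return ("sleep");		// nothing to do, back off
 *	}
 *
 *	int
 *	main(void)
 *	{
 *		struct pool_cfg cfg = { 10, 100, 10.0 };
 *		struct pool_state st = { 5, 0.0 };
 *		double now = (double)time(NULL);
 *		int i;
 *
 *		for (i = 0; i < 7; i++)
 *			printf("%s (nfree=%u)\n",
 *			    guard_step(&cfg, &st, now), st.nfree);
 *		return (0);
 *	}
 *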
*/ static void * mpl_guard(void *priv) { struct mempool *mpl; struct memitem *mi = NULL; vtim_dur v_statevariable_(mpl_slp); vtim_real last = 0; CAST_OBJ_NOTNULL(mpl, priv, MEMPOOL_MAGIC); THR_SetName(mpl->name); THR_Init(); mpl_slp = 0.15; // random while (1) { VTIM_sleep(mpl_slp); mpl_slp = 0.814; // random mpl->t_now = VTIM_real(); if (mi != NULL && (mpl->n_pool > mpl->param->max_pool || mi->size < *mpl->cur_size)) { CHECK_OBJ(mi, MEMITEM_MAGIC); FREE_OBJ(mi); } if (mi == NULL && mpl->n_pool < mpl->param->min_pool) mi = mpl_alloc(mpl); if (mpl->n_pool < mpl->param->min_pool && mi != NULL) { /* can do */ } else if (mpl->n_pool > mpl->param->max_pool && mi == NULL) { /* can do */ } else if (!VTAILQ_EMPTY(&mpl->surplus)) { /* can do */ } else if (last + .1 * mpl->param->max_age < mpl->t_now) { /* should do */ } else if (mpl->self_destruct) { /* can do */ } else { continue; /* nothing to do */ } mpl_slp = 0.314; // random if (Lck_Trylock(&mpl->mtx)) continue; if (mpl->self_destruct) { AZ(mpl->live); while (1) { if (mi == NULL) { mi = VTAILQ_FIRST(&mpl->list); if (mi != NULL) { mpl->vsc->pool = --mpl->n_pool; VTAILQ_REMOVE(&mpl->list, mi, list); } } if (mi == NULL) { mi = VTAILQ_FIRST(&mpl->surplus); if (mi != NULL) VTAILQ_REMOVE(&mpl->surplus, mi, list); } if (mi == NULL) break; CHECK_OBJ(mi, MEMITEM_MAGIC); FREE_OBJ(mi); } VSC_mempool_Destroy(&mpl->vsc_seg); Lck_Unlock(&mpl->mtx); Lck_Delete(&mpl->mtx); FREE_OBJ(mpl); break; } if (mpl->n_pool < mpl->param->min_pool && mi != NULL && mi->size >= *mpl->cur_size) { CHECK_OBJ(mi, MEMITEM_MAGIC); mpl->vsc->pool = ++mpl->n_pool; mi->touched = mpl->t_now; VTAILQ_INSERT_HEAD(&mpl->list, mi, list); mi = NULL; mpl_slp = .01; // random } if (mpl->n_pool > mpl->param->max_pool && mi == NULL) { mi = VTAILQ_FIRST(&mpl->list); CHECK_OBJ_NOTNULL(mi, MEMITEM_MAGIC); mpl->vsc->pool = --mpl->n_pool; mpl->vsc->surplus++; VTAILQ_REMOVE(&mpl->list, mi, list); mpl_slp = .01; // random } if (mi == NULL) { mi = VTAILQ_FIRST(&mpl->surplus); if (mi != NULL) { CHECK_OBJ(mi, MEMITEM_MAGIC); VTAILQ_REMOVE(&mpl->surplus, mi, list); mpl_slp = .01; // random } } if (mi == NULL && mpl->n_pool > mpl->param->min_pool) { mi = VTAILQ_LAST(&mpl->list, memhead_s); CHECK_OBJ_NOTNULL(mi, MEMITEM_MAGIC); if (mi->touched + mpl->param->max_age < mpl->t_now) { mpl->vsc->pool = --mpl->n_pool; mpl->vsc->timeout++; VTAILQ_REMOVE(&mpl->list, mi, list); mpl_slp = .01; // random } else { mi = NULL; last = mpl->t_now; } } else if (mpl->n_pool <= mpl->param->min_pool) { last = mpl->t_now; } Lck_Unlock(&mpl->mtx); if (mi != NULL) { CHECK_OBJ(mi, MEMITEM_MAGIC); FREE_OBJ(mi); } } return (NULL); } /*--------------------------------------------------------------------- * Create a new memory pool, and start the guard thread for it. */ struct mempool * MPL_New(const char *name, volatile struct poolparam *pp, volatile unsigned *cur_size) { struct mempool *mpl; ALLOC_OBJ(mpl, MEMPOOL_MAGIC); AN(mpl); bprintf(mpl->name, "MPL_%s", name); mpl->param = pp; mpl->cur_size = cur_size; VTAILQ_INIT(&mpl->list); VTAILQ_INIT(&mpl->surplus); Lck_New(&mpl->mtx, lck_mempool); /* XXX: prealloc min_pool */ mpl->vsc = VSC_mempool_New(NULL, &mpl->vsc_seg, mpl->name + 4); AN(mpl->vsc); PTOK(pthread_create(&mpl->thread, NULL, mpl_guard, mpl)); PTOK(pthread_detach(mpl->thread)); return (mpl); } /*--------------------------------------------------------------------- * Destroy a memory pool. There must be no live items, and we cheat * and leave all the hard work to the guard thread. 
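 *
 * For illustration only (not from the Varnish sources): the flag
 * handshake between MPL_Destroy() below and the guard thread -- the
 * destroy call only raises self_destruct under the lock and returns,
 * and the housekeeping thread performs the actual teardown on its next
 * pass, so there is never a race against a guard still inside its loop.
 * struct pool and the function names are made up, and the demo joins
 * the thread only so main() can exit cleanly (the real guard is
 * detached).
 *
 *	#include <pthread.h>
 *	#include <stdio.h>
 *	#include <stdlib.h>
 *	#include <unistd.h>
 *
 *	struct pool {
 *		pthread_mutex_t	mtx;
 *		int		self_destruct;
 *	};
 *
 *	static void *
 *	guard(void *priv)
 *	{
 *		struct pool *p = priv;
 *
 *		for (;;) {
 *			usleep(1000);		// housekeeping interval
 *			pthread_mutex_lock(&p->mtx);
 *			if (p->self_destruct) {
 *				pthread_mutex_unlock(&p->mtx);
 *				pthread_mutex_destroy(&p->mtx);
 *				free(p);	// guard owns the teardown
 *				printf("guard: pool destroyed\n");
 *				return (NULL);
 *			}
 *			// ...normal free-list housekeeping would go here...
 *			pthread_mutex_unlock(&p->mtx);
 *		}
 *	}
 *
 *	int
 *	main(void)
 *	{
 *		struct pool *p = calloc(1, sizeof *p);
 *		pthread_t thr;
 *
 *		pthread_mutex_init(&p->mtx, NULL);
 *		pthread_create(&thr, NULL, guard, p);
 *		pthread_mutex_lock(&p->mtx);	// "MPL_Destroy": flip the flag
 *		p->self_destruct = 1;
 *		pthread_mutex_unlock(&p->mtx);
 *		pthread_join(thr, NULL);
 *		return (0);
 *	}
 *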
*/ void MPL_Destroy(struct mempool **mpp) { struct mempool *mpl; TAKE_OBJ_NOTNULL(mpl, mpp, MEMPOOL_MAGIC); Lck_Lock(&mpl->mtx); AZ(mpl->live); mpl->self_destruct = 1; Lck_Unlock(&mpl->mtx); } /*--------------------------------------------------------------------- */ void * MPL_Get(struct mempool *mpl, unsigned *size) { struct memitem *mi; CHECK_OBJ_NOTNULL(mpl, MEMPOOL_MAGIC); AN(size); Lck_Lock(&mpl->mtx); mpl->vsc->allocs++; mpl->vsc->live = ++mpl->live; do { mi = VTAILQ_FIRST(&mpl->list); if (mi == NULL) { mpl->vsc->randry++; break; } mpl->vsc->pool = --mpl->n_pool; CHECK_OBJ(mi, MEMITEM_MAGIC); VTAILQ_REMOVE(&mpl->list, mi, list); if (mi->size < *mpl->cur_size) { mpl->vsc->toosmall++; VTAILQ_INSERT_HEAD(&mpl->surplus, mi, list); mi = NULL; } else { mpl->vsc->recycle++; } } while (mi == NULL); Lck_Unlock(&mpl->mtx); if (mi == NULL) mi = mpl_alloc(mpl); *size = mi->size - sizeof *mi; CHECK_OBJ(mi, MEMITEM_MAGIC); /* Throw away sizeof info for FlexeLint: */ return ((void *)(uintptr_t)(mi + 1)); } void MPL_Free(struct mempool *mpl, void *item) { struct memitem *mi; CHECK_OBJ_NOTNULL(mpl, MEMPOOL_MAGIC); AN(item); mi = (void*)((uintptr_t)item - sizeof(*mi)); CHECK_OBJ_NOTNULL(mi, MEMITEM_MAGIC); memset(item, 0, mi->size - sizeof *mi); Lck_Lock(&mpl->mtx); mpl->vsc->frees++; mpl->vsc->live = --mpl->live; if (mi->size < *mpl->cur_size) { mpl->vsc->toosmall++; VTAILQ_INSERT_HEAD(&mpl->surplus, mi, list); } else { mpl->vsc->pool = ++mpl->n_pool; mi->touched = mpl->t_now; VTAILQ_INSERT_HEAD(&mpl->list, mi, list); } Lck_Unlock(&mpl->mtx); } void MPL_AssertSane(const void *item) { struct memitem *mi; mi = (void*)((uintptr_t)item - sizeof(*mi)); CHECK_OBJ_NOTNULL(mi, MEMITEM_MAGIC); } varnish-7.5.0/bin/varnishd/cache/cache_obj.c000066400000000000000000000436631457605730600207200ustar00rootroot00000000000000/*- * Copyright (c) 2013-2016 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * Lifetime of an objcore: * phase 0 - nonexistent * phase 1 - created, but no stevedore associated * phase 2 - stevedore associated, being filled out * phase 3 - stable, no changes happening * phase 4 - unavailable, being dismantled * phase 5 - stevedore disassociated * phase 6 - nonexistent * * 0->1 ObjNew() creates objcore * * 1->2 STV_NewObject() associates a stevedore * * 2 ObjSetState() sets state * 2 ObjWaitState() waits for particular state * INVALID->REQ_DONE->STREAM->FINISHED->FAILED * * 2 ObjGetSpace() allocates space * 2 ObjExtend() commits content * 2 ObjWaitExtend() waits for content - used to implement ObjIterate()) * * 2 ObjSetAttr() * 2 ObjCopyAttr() * 2 ObjSetFlag() * 2 ObjSetDouble() * 2 ObjSetU32() * 2 ObjSetU64() * * 2->3 ObjBocDone() Boc removed from OC, clean it up * * 23 ObjHasAttr() * 23 ObjGetAttr() * 23 ObjCheckFlag() * 23 ObjGetDouble() * 23 ObjGetU32() * 23 ObjGetU64() * 23 ObjGetLen() * 23 ObjGetXID() * * 23 ObjIterate() ... over body * * 23 ObjTouch() Signal to LRU(-like) facilities * * 3->4 HSH_Snipe() kill if not in use * 3->4 HSH_Kill() make unavailable * * 234 ObjSlim() Release body storage (but retain attribute storage) * * 4->5 ObjFreeObj() disassociates stevedore * * 5->6 FREE_OBJ() ...in HSH_DerefObjCore() */ #include "config.h" #include #include "cache_varnishd.h" #include "cache_obj.h" #include "vend.h" #include "storage/storage.h" static const struct obj_methods * obj_getmethods(const struct objcore *oc) { CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(oc->stobj->stevedore, STEVEDORE_MAGIC); AN(oc->stobj->stevedore->methods); return (oc->stobj->stevedore->methods); } static struct boc * obj_newboc(void) { struct boc *boc; ALLOC_OBJ(boc, BOC_MAGIC); AN(boc); Lck_New(&boc->mtx, lck_busyobj); PTOK(pthread_cond_init(&boc->cond, NULL)); boc->refcount = 1; boc->transit_buffer = cache_param->transit_buffer; return (boc); } static void obj_deleteboc(struct boc **p) { struct boc *boc; TAKE_OBJ_NOTNULL(boc, p, BOC_MAGIC); Lck_Delete(&boc->mtx); PTOK(pthread_cond_destroy(&boc->cond)); free(boc->vary); FREE_OBJ(boc); } /*==================================================================== * ObjNew() * */ struct objcore * ObjNew(const struct worker *wrk) { struct objcore *oc; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); ALLOC_OBJ(oc, OBJCORE_MAGIC); AN(oc); wrk->stats->n_objectcore++; oc->last_lru = NAN; oc->flags = OC_F_BUSY; oc->boc = obj_newboc(); return (oc); } /*==================================================================== * ObjDestroy() * */ void ObjDestroy(const struct worker *wrk, struct objcore **p) { struct objcore *oc; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); TAKE_OBJ_NOTNULL(oc, p, OBJCORE_MAGIC); if (oc->boc != NULL) obj_deleteboc(&oc->boc); FREE_OBJ(oc); wrk->stats->n_objectcore--; } /*==================================================================== * ObjIterate() * */ int ObjIterate(struct worker *wrk, struct objcore *oc, void *priv, objiterate_f *func, int final) { const struct obj_methods *om = obj_getmethods(oc); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AN(func); AN(om->objiterator); return (om->objiterator(wrk, oc, priv, func, final)); } /*==================================================================== * ObjGetSpace() * * This function returns a pointer and length of free space. If there * is no free space, some will be added first. * * The "sz" argument is an input hint of how much space is desired. 
* 0 means "unknown", return some default size (maybe fetch_chunksize) */ int ObjGetSpace(struct worker *wrk, struct objcore *oc, ssize_t *sz, uint8_t **ptr) { const struct obj_methods *om = obj_getmethods(oc); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc->boc, BOC_MAGIC); AN(sz); AN(ptr); assert(*sz >= 0); AN(om->objgetspace); return (om->objgetspace(wrk, oc, sz, ptr)); } /*==================================================================== * ObjExtend() * * This function extends the used part of the object a number of bytes * into the last space returned by ObjGetSpace() * * The final flag must be set on the last call, and it will release any * surplus space allocated. */ static void obj_extend_condwait(const struct objcore *oc) { if (oc->boc->transit_buffer == 0) return; assert(oc->flags & (OC_F_PRIVATE | OC_F_HFM | OC_F_HFP)); /* NB: strictly signaling progress both ways would be prone to * deadlocks, so instead we wait for signals from the client side * when delivered_so_far so far is updated, but in case the fetch * thread was not waiting at the time of the signal, we will see * updates to delivered_so_far after timing out. */ while (!(oc->flags & OC_F_CANCEL) && oc->boc->fetched_so_far > oc->boc->delivered_so_far + oc->boc->transit_buffer) (void)Lck_CondWait(&oc->boc->cond, &oc->boc->mtx); } void ObjExtend(struct worker *wrk, struct objcore *oc, ssize_t l, int final) { const struct obj_methods *om = obj_getmethods(oc); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc->boc, BOC_MAGIC); AN(om->objextend); assert(l >= 0); Lck_Lock(&oc->boc->mtx); if (l > 0) { obj_extend_condwait(oc); om->objextend(wrk, oc, l); oc->boc->fetched_so_far += l; PTOK(pthread_cond_broadcast(&oc->boc->cond)); } Lck_Unlock(&oc->boc->mtx); assert(oc->boc == NULL || oc->boc->state < BOS_FINISHED); if (final && om->objtrimstore != NULL) om->objtrimstore(wrk, oc); } /*==================================================================== */ uint64_t ObjWaitExtend(const struct worker *wrk, const struct objcore *oc, uint64_t l) { uint64_t rv; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(oc->boc, BOC_MAGIC); Lck_Lock(&oc->boc->mtx); while (1) { rv = oc->boc->fetched_so_far; assert(l <= rv || oc->boc->state == BOS_FAILED); if (oc->boc->transit_buffer > 0) { assert(oc->flags & (OC_F_PRIVATE | OC_F_HFM | OC_F_HFP)); /* Signal the new client position */ oc->boc->delivered_so_far = l; PTOK(pthread_cond_signal(&oc->boc->cond)); } if (rv > l || oc->boc->state >= BOS_FINISHED) break; (void)Lck_CondWait(&oc->boc->cond, &oc->boc->mtx); } rv = oc->boc->fetched_so_far; Lck_Unlock(&oc->boc->mtx); return (rv); } /*==================================================================== */ void ObjSetState(struct worker *wrk, const struct objcore *oc, enum boc_state_e next) { const struct obj_methods *om; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(oc->boc, BOC_MAGIC); assert(next > oc->boc->state); CHECK_OBJ_ORNULL(oc->stobj->stevedore, STEVEDORE_MAGIC); assert(next != BOS_STREAM || oc->boc->state == BOS_PREP_STREAM); assert(next != BOS_FINISHED || (oc->oa_present & (1 << OA_LEN))); if (oc->stobj->stevedore != NULL) { om = oc->stobj->stevedore->methods; if (om->objsetstate != NULL) om->objsetstate(wrk, oc, next); } Lck_Lock(&oc->boc->mtx); oc->boc->state = next; PTOK(pthread_cond_broadcast(&oc->boc->cond)); Lck_Unlock(&oc->boc->mtx); } /*==================================================================== */ void ObjWaitState(const struct objcore *oc, 
enum boc_state_e want) { CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(oc->boc, BOC_MAGIC); Lck_Lock(&oc->boc->mtx); /* wake up obj_extend_condwait() */ if (oc->flags & OC_F_CANCEL) PTOK(pthread_cond_signal(&oc->boc->cond)); while (1) { if (oc->boc->state >= want) break; (void)Lck_CondWait(&oc->boc->cond, &oc->boc->mtx); } Lck_Unlock(&oc->boc->mtx); } /*==================================================================== * ObjGetlen() * * This is a separate function because it may need locking */ uint64_t ObjGetLen(struct worker *wrk, struct objcore *oc) { uint64_t len; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AZ(ObjGetU64(wrk, oc, OA_LEN, &len)); return (len); } /*==================================================================== * ObjSlim() * * Free the whatever storage can be freed, without freeing the actual * object yet. */ void ObjSlim(struct worker *wrk, struct objcore *oc) { const struct obj_methods *om = obj_getmethods(oc); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); if (om->objslim != NULL) om->objslim(wrk, oc); } /*==================================================================== * Called when the boc used to populate the objcore is going away. * Useful for releasing any leftovers from Trim. */ void ObjBocDone(struct worker *wrk, struct objcore *oc, struct boc **boc) { const struct obj_methods *m; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AN(boc); CHECK_OBJ_NOTNULL(*boc, BOC_MAGIC); CHECK_OBJ_ORNULL(oc->stobj->stevedore, STEVEDORE_MAGIC); if (oc->stobj->stevedore != NULL) { m = obj_getmethods(oc); if (m->objbocdone != NULL) m->objbocdone(wrk, oc, *boc); } obj_deleteboc(boc); } /*==================================================================== */ void ObjFreeObj(struct worker *wrk, struct objcore *oc) { const struct obj_methods *m = obj_getmethods(oc); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AN(m->objfree); m->objfree(wrk, oc); AZ(oc->stobj->stevedore); } /*==================================================================== * ObjHasAttr() * * Check if object has this attribute */ int ObjHasAttr(struct worker *wrk, struct objcore *oc, enum obj_attr attr) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); if (oc->oa_present) return (oc->oa_present & (1 << attr)); /* resurrected persistent objects don't have oa_present set */ return (ObjGetAttr(wrk, oc, attr, NULL) != NULL ? 1 : 0); } /*==================================================================== * ObjGetAttr() * * Get an attribute of the object. * * Returns NULL on unset or zero length attributes and len set to * zero. Returns Non-NULL otherwise and len is updated with the attributes * length. */ const void * ObjGetAttr(struct worker *wrk, struct objcore *oc, enum obj_attr attr, ssize_t *len) { const struct obj_methods *om = obj_getmethods(oc); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AN(om->objgetattr); return (om->objgetattr(wrk, oc, attr, len)); } /*==================================================================== * ObjSetAttr() * * Setting fixed size attributes always succeeds. * * Setting a variable size attribute asserts if the combined size of the * variable attributes exceeds the total variable attribute space set at * object creation. If there is space it always succeeds. * * Setting an auxiliary attribute can fail. * * Resetting any variable asserts if the new length does not match the * previous length exactly. * * If ptr is Non-NULL, it points to the new content which is copied into * the attribute. 
Otherwise the caller will have to do the copying. * * Return value is non-NULL on success and NULL on failure. If ptr was * non-NULL, it is an error to use the returned pointer to set the * attribute data, it is only a success indicator in that case. */ void * ObjSetAttr(struct worker *wrk, struct objcore *oc, enum obj_attr attr, ssize_t len, const void *ptr) { const struct obj_methods *om = obj_getmethods(oc); void *r; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc->boc, BOC_MAGIC); AN(om->objsetattr); assert((int)attr < 16); r = om->objsetattr(wrk, oc, attr, len, ptr); if (r) oc->oa_present |= (1 << attr); return (r); } /*==================================================================== * ObjTouch() */ void ObjTouch(struct worker *wrk, struct objcore *oc, vtim_real now) { const struct obj_methods *om = obj_getmethods(oc); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); if (om->objtouch != NULL) om->objtouch(wrk, oc, now); } /*==================================================================== * Utility functions which work on top of the previous ones */ int ObjCopyAttr(struct worker *wrk, struct objcore *oc, struct objcore *ocs, enum obj_attr attr) { const void *vps; void *vpd; ssize_t l; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(oc->boc, BOC_MAGIC); CHECK_OBJ_NOTNULL(ocs, OBJCORE_MAGIC); vps = ObjGetAttr(wrk, ocs, attr, &l); // XXX: later we want to have zero-length OA's too if (vps == NULL || l <= 0) return (-1); vpd = ObjSetAttr(wrk, oc, attr, l, vps); if (vpd == NULL) return (-1); return (0); } int ObjSetXID(struct worker *wrk, struct objcore *oc, vxid_t xid) { uint64_t u; u = VXID(xid); AZ(ObjSetU64(wrk, oc, OA_VXID, u)); return (0); } vxid_t ObjGetXID(struct worker *wrk, struct objcore *oc) { vxid_t u; AZ(ObjGetU64(wrk, oc, OA_VXID, &u.vxid)); return (u); } /*-------------------------------------------------------------------- * There is no well-defined byteorder for IEEE-754 double and the * correct solution (frexp(3) and manual encoding) is more work * than our (weak) goal of being endian-agnostic requires at this point. * We give it a shot by memcpy'ing doubles over a uint64_t and then * BE encode that. 
*/ int ObjSetDouble(struct worker *wrk, struct objcore *oc, enum obj_attr a, double t) { void *vp; uint64_t u; assert(sizeof t == sizeof u); memcpy(&u, &t, sizeof u); vp = ObjSetAttr(wrk, oc, a, sizeof u, NULL); if (vp == NULL) return (-1); vbe64enc(vp, u); return (0); } int ObjGetDouble(struct worker *wrk, struct objcore *oc, enum obj_attr a, double *d) { const void *vp; uint64_t u; ssize_t l; assert(sizeof *d == sizeof u); vp = ObjGetAttr(wrk, oc, a, &l); if (vp == NULL) return (-1); if (d != NULL) { assert(l == sizeof u); u = vbe64dec(vp); memcpy(d, &u, sizeof *d); } return (0); } /*-------------------------------------------------------------------- */ int ObjSetU64(struct worker *wrk, struct objcore *oc, enum obj_attr a, uint64_t t) { void *vp; vp = ObjSetAttr(wrk, oc, a, sizeof t, NULL); if (vp == NULL) return (-1); vbe64enc(vp, t); return (0); } int ObjGetU64(struct worker *wrk, struct objcore *oc, enum obj_attr a, uint64_t *d) { const void *vp; ssize_t l; vp = ObjGetAttr(wrk, oc, a, &l); if (vp == NULL || l != sizeof *d) return (-1); if (d != NULL) *d = vbe64dec(vp); return (0); } /*-------------------------------------------------------------------- */ int ObjCheckFlag(struct worker *wrk, struct objcore *oc, enum obj_flags of) { const uint8_t *fp; fp = ObjGetAttr(wrk, oc, OA_FLAGS, NULL); AN(fp); return ((*fp) & of); } void ObjSetFlag(struct worker *wrk, struct objcore *oc, enum obj_flags of, int val) { uint8_t *fp; fp = ObjSetAttr(wrk, oc, OA_FLAGS, 1, NULL); AN(fp); if (val) (*fp) |= of; else (*fp) &= ~of; } /*==================================================================== * Object event subscribtion mechanism. * * XXX: it is extremely unclear what the locking circumstances are here. */ struct oev_entry { unsigned magic; #define OEV_MAGIC 0xb0b7c5a1 unsigned mask; obj_event_f *func; void *priv; VTAILQ_ENTRY(oev_entry) list; }; static VTAILQ_HEAD(,oev_entry) oev_list; static pthread_rwlock_t oev_rwl; static unsigned oev_mask; /* * NB: ObjSubscribeEvents() is not atomic: * oev_mask is checked optimistically in ObjSendEvent() */ uintptr_t ObjSubscribeEvents(obj_event_f *func, void *priv, unsigned mask) { struct oev_entry *oev; AN(func); AZ(mask & ~OEV_MASK); ALLOC_OBJ(oev, OEV_MAGIC); AN(oev); oev->func = func; oev->priv = priv; oev->mask = mask; PTOK(pthread_rwlock_wrlock(&oev_rwl)); VTAILQ_INSERT_TAIL(&oev_list, oev, list); oev_mask |= mask; PTOK(pthread_rwlock_unlock(&oev_rwl)); return ((uintptr_t)oev); } void ObjUnsubscribeEvents(uintptr_t *handle) { struct oev_entry *oev, *oev2 = NULL; unsigned newmask = 0; AN(handle); AN(*handle); PTOK(pthread_rwlock_wrlock(&oev_rwl)); VTAILQ_FOREACH(oev, &oev_list, list) { CHECK_OBJ_NOTNULL(oev, OEV_MAGIC); if ((uintptr_t)oev == *handle) oev2 = oev; else newmask |= oev->mask; } AN(oev2); VTAILQ_REMOVE(&oev_list, oev2, list); oev_mask = newmask; AZ(newmask & ~OEV_MASK); PTOK(pthread_rwlock_unlock(&oev_rwl)); FREE_OBJ(oev2); *handle = 0; } void ObjSendEvent(struct worker *wrk, struct objcore *oc, unsigned event) { struct oev_entry *oev; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AN(event & OEV_MASK); AZ(event & ~OEV_MASK); if (!(event & oev_mask)) return; PTOK(pthread_rwlock_rdlock(&oev_rwl)); VTAILQ_FOREACH(oev, &oev_list, list) { CHECK_OBJ_NOTNULL(oev, OEV_MAGIC); if (event & oev->mask) oev->func(wrk, oev->priv, oc, event); } PTOK(pthread_rwlock_unlock(&oev_rwl)); } void ObjInit(void) { VTAILQ_INIT(&oev_list); PTOK(pthread_rwlock_init(&oev_rwl, NULL)); } 
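/*
 * Editor's illustration (not part of the original source): the helpers
 * above store fixed-size object attributes in big-endian byte order via
 * vbe64enc()/vbe64dec(), so an attribute written on one architecture can
 * be read on another, and the double variants first memcpy the IEEE-754
 * bits through a uint64_t.  The stand-alone sketch below shows that round
 * trip under stated assumptions; the demo_* names are local stand-ins for
 * the vend.h encoders and are hypothetical, not Varnish API.
 */

#include <assert.h>
#include <stdint.h>
#include <string.h>

static void
demo_be64enc(uint8_t *p, uint64_t u)
{
	int i;

	/* most significant byte first, mirroring vbe64enc() */
	for (i = 0; i < 8; i++)
		p[i] = (uint8_t)(u >> (56 - 8 * i));
}

static uint64_t
demo_be64dec(const uint8_t *p)
{
	uint64_t u = 0;
	int i;

	for (i = 0; i < 8; i++)
		u = (u << 8) | p[i];
	return (u);
}

int
main(void)
{
	uint8_t attr[8];	/* stands in for an OA_* attribute slot */
	double t = 3.14, d;
	uint64_t u, v;

	/* as in ObjSetDouble(): copy the IEEE-754 bits, then encode BE */
	memcpy(&u, &t, sizeof u);
	demo_be64enc(attr, u);

	/* as in ObjGetDouble(): decode BE, then copy the bits back */
	v = demo_be64dec(attr);
	memcpy(&d, &v, sizeof d);

	assert(v == u);
	assert(d == t);
	return (0);
}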
varnish-7.5.0/bin/varnishd/cache/cache_obj.h000066400000000000000000000053721457605730600207200ustar00rootroot00000000000000/*- * Copyright (c) 2015-2016 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ /* Methods on objcore ------------------------------------------------*/ typedef void objfree_f(struct worker *, struct objcore *); typedef void objsetstate_f(struct worker *, const struct objcore *, enum boc_state_e); typedef int objiterator_f(struct worker *, struct objcore *, void *priv, objiterate_f *func, int final); typedef int objgetspace_f(struct worker *, struct objcore *, ssize_t *sz, uint8_t **ptr); typedef void objextend_f(struct worker *, struct objcore *, ssize_t l); typedef void objtrimstore_f(struct worker *, struct objcore *); typedef void objbocdone_f(struct worker *, struct objcore *, struct boc *); typedef void objslim_f(struct worker *, struct objcore *); typedef const void *objgetattr_f(struct worker *, struct objcore *, enum obj_attr attr, ssize_t *len); typedef void *objsetattr_f(struct worker *, struct objcore *, enum obj_attr attr, ssize_t len, const void *ptr); typedef void objtouch_f(struct worker *, struct objcore *, vtim_real now); struct obj_methods { /* required */ objfree_f *objfree; objiterator_f *objiterator; objgetspace_f *objgetspace; objextend_f *objextend; objgetattr_f *objgetattr; objsetattr_f *objsetattr; /* optional */ objtrimstore_f *objtrimstore; objbocdone_f *objbocdone; objslim_f *objslim; objtouch_f *objtouch; objsetstate_f *objsetstate; }; varnish-7.5.0/bin/varnishd/cache/cache_objhead.h000066400000000000000000000061051457605730600215350ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ struct hash_slinger; struct objhead { unsigned magic; #define OBJHEAD_MAGIC 0x1b96615d int refcnt; struct lock mtx; VTAILQ_HEAD(,objcore) objcs; uint8_t digest[DIGEST_LEN]; VTAILQ_HEAD(, req) waitinglist; /*---------------------------------------------------- * The fields below are for the sole private use of * the hash implementation(s). */ union { struct { VTAILQ_ENTRY(objhead) u_n_hoh_list; void *u_n_hoh_head; } n; } _u; #define hoh_list _u.n.u_n_hoh_list #define hoh_head _u.n.u_n_hoh_head }; enum lookup_e { HSH_MISS, HSH_HITMISS, HSH_HITPASS, HSH_HIT, HSH_GRACE, HSH_BUSY, }; void HSH_Fail(struct objcore *); void HSH_Kill(struct objcore *); void HSH_Replace(struct objcore *, const struct objcore *); void HSH_Insert(struct worker *, const void *hash, struct objcore *, struct ban *); void HSH_Unbusy(struct worker *, struct objcore *); int HSH_Snipe(const struct worker *, struct objcore *); struct boc *HSH_RefBoc(const struct objcore *); void HSH_DerefBoc(struct worker *wrk, struct objcore *); void HSH_DeleteObjHead(const struct worker *, struct objhead *); int HSH_DerefObjCore(struct worker *, struct objcore **, int rushmax); #define HSH_RUSH_POLICY -1 enum lookup_e HSH_Lookup(struct req *, struct objcore **, struct objcore **); void HSH_Ref(struct objcore *o); void HSH_AddString(struct req *, void *ctx, const char *str); unsigned HSH_Purge(struct worker *, struct objhead *, vtim_real ttl_now, vtim_dur ttl, vtim_dur grace, vtim_dur keep); struct objcore *HSH_Private(const struct worker *wrk); void HSH_Cancel(struct worker *, struct objcore *, struct boc *); varnish-7.5.0/bin/varnishd/cache/cache_panic.c000066400000000000000000000547251457605730600212410ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Dag-Erling Smørgrav * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #ifdef WITH_UNWIND # include #else # include #endif #include #include #include #include "cache_varnishd.h" #include "cache_transport.h" #include "cache_filter.h" #include "common/heritage.h" #include "waiter/waiter.h" #include "storage/storage.h" #include "vcli_serve.h" #include "vtim.h" #include "vcs.h" #include "vtcp.h" #include "vsa.h" /* * The panic string is constructed in a VSB, then copied to the * shared memory. * * It can be extracted post-mortem from a core dump using gdb: * * (gdb) p *(char **)((char *)pan_vsb+8) * */ static struct vsb pan_vsb_storage, *pan_vsb; static pthread_mutex_t panicstr_mtx; static void pan_sess(struct vsb *, const struct sess *); static void pan_req(struct vsb *, const struct req *); /*--------------------------------------------------------------------*/ static const char * boc_state_2str(enum boc_state_e e) { switch (e) { #define BOC_STATE(U,l) case BOS_##U: return(#l); #include "tbl/boc_state.h" default: return ("?"); } } /*--------------------------------------------------------------------*/ static void pan_stream_close(struct vsb *vsb, stream_close_t sc) { if (sc != NULL && sc->magic == STREAM_CLOSE_MAGIC) VSB_printf(vsb, "%s(%s)", sc->name, sc->desc); else VSB_printf(vsb, "%p", sc); } /*--------------------------------------------------------------------*/ static void pan_storage(struct vsb *vsb, const char *n, const struct stevedore *stv) { if (stv != NULL && stv->magic == STEVEDORE_MAGIC) VSB_printf(vsb, "%s = %s(%s,%s),\n", n, stv->name, stv->ident, stv->vclname); else VSB_printf(vsb, "%s = %p,\n", n, stv); } /*--------------------------------------------------------------------*/ #define N_ALREADY 256 static const void *already_list[N_ALREADY]; static int already_idx; int PAN__DumpStruct(struct vsb *vsb, int block, int track, const void *ptr, const char *smagic, unsigned magic, const char *fmt, ...) 
{ va_list ap; const unsigned *uptr; int i; AN(vsb); va_start(ap, fmt); VSB_vprintf(vsb, fmt, ap); va_end(ap); if (ptr == NULL) { VSB_cat(vsb, " = NULL\n"); return (-1); } VSB_printf(vsb, " = %p {", ptr); if (block) VSB_putc(vsb, '\n'); if (track) { for (i = 0; i < already_idx; i++) { if (already_list[i] == ptr) { VSB_cat(vsb, " [Already dumped, see above]"); if (block) VSB_putc(vsb, '\n'); VSB_cat(vsb, "},\n"); return (-2); } } if (already_idx < N_ALREADY) already_list[already_idx++] = ptr; } uptr = ptr; if (*uptr != magic) { VSB_printf(vsb, " .magic = 0x%08x", *uptr); VSB_printf(vsb, " EXPECTED: %s=0x%08x", smagic, magic); if (block) VSB_putc(vsb, '\n'); VSB_cat(vsb, "}\n"); return (-3); } if (block) VSB_indent(vsb, 2); return (0); } /*--------------------------------------------------------------------*/ static void pan_htc(struct vsb *vsb, const struct http_conn *htc) { if (PAN_dump_struct(vsb, htc, HTTP_CONN_MAGIC, "http_conn")) return; if (htc->rfd != NULL) VSB_printf(vsb, "fd = %d (@%p),\n", *htc->rfd, htc->rfd); VSB_cat(vsb, "doclose = "); pan_stream_close(vsb, htc->doclose); VSB_cat(vsb, "\n"); WS_Panic(vsb, htc->ws); VSB_printf(vsb, "{rxbuf_b, rxbuf_e} = {%p, %p},\n", htc->rxbuf_b, htc->rxbuf_e); VSB_printf(vsb, "{pipeline_b, pipeline_e} = {%p, %p},\n", htc->pipeline_b, htc->pipeline_e); VSB_printf(vsb, "content_length = %jd,\n", (intmax_t)htc->content_length); VSB_printf(vsb, "body_status = %s,\n", htc->body_status ? htc->body_status->name : "NULL"); VSB_printf(vsb, "first_byte_timeout = %f,\n", htc->first_byte_timeout); VSB_printf(vsb, "between_bytes_timeout = %f,\n", htc->between_bytes_timeout); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /*--------------------------------------------------------------------*/ static void pan_http(struct vsb *vsb, const char *id, const struct http *h) { int i; if (PAN_dump_struct(vsb, h, HTTP_MAGIC, "http[%s]", id)) return; WS_Panic(vsb, h->ws); VSB_cat(vsb, "hdrs {\n"); VSB_indent(vsb, 2); for (i = 0; i < h->nhd; ++i) { if (h->hd[i].b == NULL && h->hd[i].e == NULL) continue; VSB_printf(vsb, "\"%.*s\",\n", (int)(h->hd[i].e - h->hd[i].b), h->hd[i].b); } VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /*--------------------------------------------------------------------*/ static void pan_boc(struct vsb *vsb, const struct boc *boc) { if (PAN_dump_struct(vsb, boc, BOC_MAGIC, "boc")) return; VSB_printf(vsb, "refcnt = %u,\n", boc->refcount); VSB_printf(vsb, "state = %s,\n", boc_state_2str(boc->state)); VSB_printf(vsb, "vary = %p,\n", boc->vary); VSB_printf(vsb, "stevedore_priv = %p,\n", boc->stevedore_priv); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /*--------------------------------------------------------------------*/ static void pan_objcore(struct vsb *vsb, const char *typ, const struct objcore *oc) { const char *p; if (PAN_dump_struct(vsb, oc, OBJCORE_MAGIC, "objcore[%s]", typ)) return; VSB_printf(vsb, "refcnt = %d,\n", oc->refcnt); VSB_cat(vsb, "flags = {"); /*lint -save -esym(438,p) -esym(838,p) -e539 */ p = ""; #define OC_FLAG(U, l, v) \ if (oc->flags & v) { VSB_printf(vsb, "%s" #l, p); p = ", "; } #include "tbl/oc_flags.h" VSB_cat(vsb, "},\n"); VSB_cat(vsb, "exp_flags = {"); p = ""; #define OC_EXP_FLAG(U, l, v) \ if (oc->exp_flags & v) { VSB_printf(vsb, "%s" #l, p); p = ", "; } #include "tbl/oc_exp_flags.h" /*lint -restore */ VSB_cat(vsb, "},\n"); if (oc->boc != NULL) pan_boc(vsb, oc->boc); VSB_printf(vsb, "exp = {%f, %f, %f, %f},\n", oc->t_origin, oc->ttl, oc->grace, oc->keep); VSB_printf(vsb, 
"objhead = %p,\n", oc->objhead); VSB_printf(vsb, "stevedore = %p", oc->stobj->stevedore); if (oc->stobj->stevedore != NULL) { VSB_printf(vsb, " (%s", oc->stobj->stevedore->name); if (strlen(oc->stobj->stevedore->ident)) VSB_printf(vsb, " %s", oc->stobj->stevedore->ident); VSB_cat(vsb, ")"); if (oc->stobj->stevedore->panic) { VSB_cat(vsb, " {\n"); VSB_indent(vsb, 2); oc->stobj->stevedore->panic(vsb, oc); VSB_indent(vsb, -2); VSB_cat(vsb, "}"); } } VSB_cat(vsb, ",\n"); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /*--------------------------------------------------------------------*/ static void pan_wrk(struct vsb *vsb, const struct worker *wrk) { const char *hand; unsigned m, u; const char *p; if (PAN_dump_struct(vsb, wrk, WORKER_MAGIC, "worker")) return; WS_Panic(vsb, wrk->aws); m = wrk->cur_method; VSB_cat(vsb, "VCL::method = "); if (m == 0) { VSB_cat(vsb, "none,\n"); return; } if (!(m & 1)) VSB_cat(vsb, "inside "); m &= ~1; hand = VCL_Method_Name(m); if (hand != NULL) VSB_printf(vsb, "%s,\n", hand); else VSB_printf(vsb, "0x%x,\n", m); VSB_cat(vsb, "VCL::methods = {"); m = wrk->seen_methods; p = ""; for (u = 1; m ; u <<= 1) { if (m & u) { VSB_printf(vsb, "%s%s", p, VCL_Method_Name(u)); m &= ~u; p = ", "; } } VSB_cat(vsb, "},\n"); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } static void pan_vfp(struct vsb *vsb, const struct vfp_ctx *vfc) { struct vfp_entry *vfe; if (PAN_dump_struct(vsb, vfc, VFP_CTX_MAGIC, "vfc")) return; VSB_printf(vsb, "failed = %d,\n", vfc->failed); VSB_printf(vsb, "req = %p,\n", vfc->req); VSB_printf(vsb, "resp = %p,\n", vfc->resp); VSB_printf(vsb, "wrk = %p,\n", vfc->wrk); VSB_printf(vsb, "oc = %p,\n", vfc->oc); if (!VTAILQ_EMPTY(&vfc->vfp)) { VSB_cat(vsb, "filters = {\n"); VSB_indent(vsb, 2); VTAILQ_FOREACH(vfe, &vfc->vfp, list) { VSB_printf(vsb, "%s = %p {\n", vfe->vfp->name, vfe); VSB_indent(vsb, 2); VSB_printf(vsb, "priv1 = %p,\n", vfe->priv1); VSB_printf(vsb, "priv2 = %zd,\n", vfe->priv2); VSB_printf(vsb, "closed = %d\n", vfe->closed); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } VSB_printf(vsb, "obj_flags = 0x%x,\n", vfc->obj_flags); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } static void pan_busyobj(struct vsb *vsb, const struct busyobj *bo) { const char *p; const struct worker *wrk; if (PAN_dump_struct(vsb, bo, BUSYOBJ_MAGIC, "busyobj")) return; VSB_printf(vsb, "end = %p,\n", bo->end); VSB_printf(vsb, "retries = %u,\n", bo->retries); if (bo->req != NULL) pan_req(vsb, bo->req); if (bo->sp != NULL) pan_sess(vsb, bo->sp); wrk = bo->wrk; if (wrk != NULL) pan_wrk(vsb, wrk); if (bo->vfc != NULL) pan_vfp(vsb, bo->vfc); if (bo->vfp_filter_list != NULL) { VSB_printf(vsb, "vfp_filter_list = \"%s\",\n", bo->vfp_filter_list); } WS_Panic(vsb, bo->ws); VSB_printf(vsb, "ws_bo = %p,\n", (void *)bo->ws_bo); // bereq0 left out if (bo->bereq != NULL && bo->bereq->ws != NULL) pan_http(vsb, "bereq", bo->bereq); if (bo->beresp != NULL && bo->beresp->ws != NULL) pan_http(vsb, "beresp", bo->beresp); if (bo->stale_oc) pan_objcore(vsb, "stale_oc", bo->stale_oc); if (bo->fetch_objcore) pan_objcore(vsb, "fetch", bo->fetch_objcore); if (VALID_OBJ(bo->htc, HTTP_CONN_MAGIC)) pan_htc(vsb, bo->htc); // fetch_task left out VSB_cat(vsb, "flags = {"); p = ""; /*lint -save -esym(438,p) -e539 */ #define BERESP_FLAG(l, r, w, f, d) \ if (bo->l) { VSB_printf(vsb, "%s" #l, p); p = ", "; } #define BEREQ_FLAG(l, r, w, d) BERESP_FLAG(l, r, w, 0, d) #include "tbl/bereq_flags.h" #include "tbl/beresp_flags.h" /*lint -restore */ VSB_cat(vsb, "},\n"); // 
timeouts/timers/acct/storage left out pan_storage(vsb, "storage", bo->storage); VDI_Panic(bo->director_req, vsb, "director_req"); if (bo->director_resp == bo->director_req) VSB_cat(vsb, "director_resp = director_req,\n"); else VDI_Panic(bo->director_resp, vsb, "director_resp"); VCL_Panic(vsb, "vcl", bo->vcl); if (wrk != NULL) VPI_Panic(vsb, wrk->vpi, bo->vcl); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /*--------------------------------------------------------------------*/ static void pan_top(struct vsb *vsb, const struct reqtop *top) { if (PAN_dump_struct(vsb, top, REQTOP_MAGIC, "top")) return; pan_req(vsb, top->topreq); pan_privs(vsb, top->privs); VCL_Panic(vsb, "vcl0", top->vcl0); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /*--------------------------------------------------------------------*/ static void pan_req(struct vsb *vsb, const struct req *req) { const struct transport *xp; const struct worker *wrk; if (PAN_dump_struct(vsb, req, REQ_MAGIC, "req")) return; xp = req->transport; VSB_printf(vsb, "vxid = %ju, transport = %s", VXID(req->vsl->wid), xp == NULL ? "NULL" : xp->name); if (xp != NULL && xp->req_panic != NULL) { VSB_cat(vsb, " {\n"); VSB_indent(vsb, 2); xp->req_panic(vsb, req); VSB_indent(vsb, -2); VSB_cat(vsb, "}"); } VSB_cat(vsb, "\n"); if (req->req_step == NULL) VSB_cat(vsb, "step = R_STP_TRANSPORT\n"); else VSB_printf(vsb, "step = %s\n", req->req_step->name); VSB_printf(vsb, "req_body = %s,\n", req->req_body_status ? req->req_body_status->name : "NULL"); if (req->err_code) VSB_printf(vsb, "err_code = %d, err_reason = %s,\n", req->err_code, req->err_reason ? req->err_reason : "(null)"); VSB_printf(vsb, "restarts = %u, esi_level = %u,\n", req->restarts, req->esi_level); VSB_printf(vsb, "vary_b = %p, vary_e = %p,\n", req->vary_b, req->vary_e); VSB_printf(vsb, "d_ttl = %f, d_grace = %f,\n", req->d_ttl, req->d_grace); pan_storage(vsb, "storage", req->storage); VDI_Panic(req->director_hint, vsb, "director_hint"); if (req->sp != NULL) pan_sess(vsb, req->sp); wrk = req->wrk; if (wrk != NULL) pan_wrk(vsb, wrk); WS_Panic(vsb, req->ws); if (VALID_OBJ(req->htc, HTTP_CONN_MAGIC)) pan_htc(vsb, req->htc); pan_http(vsb, "req", req->http); if (req->resp != NULL && req->resp->ws != NULL) pan_http(vsb, "resp", req->resp); if (req->vdc != NULL) VDP_Panic(vsb, req->vdc); VCL_Panic(vsb, "vcl", req->vcl); if (wrk != NULL) VPI_Panic(vsb, wrk->vpi, req->vcl); if (req->body_oc != NULL) pan_objcore(vsb, "BODY", req->body_oc); if (req->objcore != NULL) pan_objcore(vsb, "REQ", req->objcore); VSB_cat(vsb, "flags = {\n"); VSB_indent(vsb, 2); #define REQ_FLAG(l, r, w, d) if (req->l) VSB_printf(vsb, #l ",\n"); #include "tbl/req_flags.h" VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); pan_privs(vsb, req->privs); if (req->top != NULL) pan_top(vsb, req->top); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /*--------------------------------------------------------------------*/ #define pan_addr(vsb, sp, field) do { \ struct suckaddr *sa; \ char h[VTCP_ADDRBUFSIZE]; \ char p[VTCP_PORTBUFSIZE]; \ \ (void) SES_Get_##field##_addr((sp), &sa); \ if (! 
VSA_Sane(sa)) \ break; \ VTCP_name(sa, h, sizeof h, p, sizeof p); \ VSB_printf((vsb), "%s.ip = %s:%s,\n", #field, h, p); \ } while (0) static void pan_sess(struct vsb *vsb, const struct sess *sp) { const char *ci; const char *cp; const struct transport *xp; if (PAN_dump_struct(vsb, sp, SESS_MAGIC, "sess")) return; VSB_printf(vsb, "fd = %d, vxid = %ju,\n", sp->fd, VXID(sp->vxid)); VSB_printf(vsb, "t_open = %f,\n", sp->t_open); VSB_printf(vsb, "t_idle = %f,\n", sp->t_idle); if (! VALID_OBJ(sp, SESS_MAGIC)) { VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); return; } WS_Panic(vsb, sp->ws); xp = XPORT_ByNumber(sp->sattr[SA_TRANSPORT]); VSB_printf(vsb, "transport = %s", xp == NULL ? "" : xp->name); if (xp != NULL && xp->sess_panic != NULL) { VSB_cat(vsb, " {\n"); VSB_indent(vsb, 2); xp->sess_panic(vsb, sp); VSB_indent(vsb, -2); VSB_cat(vsb, "}"); } VSB_cat(vsb, "\n"); // duplicated below, remove ? ci = SES_Get_String_Attr(sp, SA_CLIENT_IP); cp = SES_Get_String_Attr(sp, SA_CLIENT_PORT); if (VALID_OBJ(sp->listen_sock, LISTEN_SOCK_MAGIC)) VSB_printf(vsb, "client = %s %s %s,\n", ci, cp, sp->listen_sock->endpoint); else VSB_printf(vsb, "client = %s %s \n", ci, cp); if (VALID_OBJ(sp->listen_sock, LISTEN_SOCK_MAGIC)) { VSB_printf(vsb, "local.endpoint = %s,\n", sp->listen_sock->endpoint); VSB_printf(vsb, "local.socket = %s,\n", sp->listen_sock->name); } pan_addr(vsb, sp, local); pan_addr(vsb, sp, remote); pan_addr(vsb, sp, server); pan_addr(vsb, sp, client); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /*--------------------------------------------------------------------*/ #ifdef WITH_UNWIND static void pan_backtrace(struct vsb *vsb) { unw_cursor_t cursor; unw_context_t uc; unw_word_t ip, sp; unw_word_t offp; char fname[1024]; int ret; VSB_cat(vsb, "Backtrace:\n"); VSB_indent(vsb, 2); ret = unw_getcontext(&uc); if (ret != 0) { VSB_printf(vsb, "Backtrace not available " "(unw_getcontext returned %d)\n", ret); return; } ret = unw_init_local(&cursor, &uc); if (ret != 0) { VSB_printf(vsb, "Backtrace not available " "(unw_init_local returned %d)\n", ret); return; } while (unw_step(&cursor) > 0) { fname[0] = '\0'; if (!unw_get_reg(&cursor, UNW_REG_IP, &ip)) VSB_printf(vsb, "ip=0x%lx", (long) ip); if (!unw_get_reg(&cursor, UNW_REG_SP, &sp)) VSB_printf(vsb, " sp=0x%lx", (long) sp); if (!unw_get_proc_name(&cursor, fname, sizeof(fname), &offp)) VSB_printf(vsb, " <%s+0x%lx>", fname[0] ? 
fname : "", (long)offp); VSB_putc(vsb, '\n'); } VSB_indent(vsb, -2); } #else /* WITH_UNWIND */ #define BACKTRACE_LEVELS 20 static void pan_backtrace(struct vsb *vsb) { void *array[BACKTRACE_LEVELS]; size_t size; size_t i; char **strings; char *p; char buf[32]; size = backtrace (array, BACKTRACE_LEVELS); if (size > BACKTRACE_LEVELS) { VSB_printf(vsb, "Backtrace not available (ret=%zu)\n", size); return; } VSB_cat(vsb, "Backtrace:\n"); VSB_indent(vsb, 2); for (i = 0; i < size; i++) { bprintf(buf, "%p", array[i]); VSB_printf(vsb, "%s: ", buf); strings = backtrace_symbols(&array[i], 1); if (strings == NULL || strings[0] == NULL) { VSB_cat(vsb, "(?)"); } else { p = strings[0]; if (!memcmp(buf, p, strlen(buf))) { p += strlen(buf); if (*p == ':') p++; while (*p == ' ') p++; } VSB_printf(vsb, "%s", p); } VSB_cat(vsb, "\n"); free(strings); } VSB_indent(vsb, -2); } #endif /* WITH_UNWIND */ #ifdef HAVE_PTHREAD_GETATTR_NP static void pan_threadattr(struct vsb *vsb) { pthread_attr_t attr[1]; size_t sz; void *addr; if (pthread_getattr_np(pthread_self(), attr) != 0) return; VSB_cat(vsb, "pthread.attr = {\n"); VSB_indent(vsb, 2); if (pthread_attr_getguardsize(attr, &sz) == 0) VSB_printf(vsb, "guard = %zu,\n", sz); if (pthread_attr_getstack(attr, &addr, &sz) == 0) { VSB_printf(vsb, "stack_bottom = %p,\n", addr); VSB_printf(vsb, "stack_top = %p,\n", (char *)addr + sz); VSB_printf(vsb, "stack_size = %zu,\n", sz); } VSB_indent(vsb, -2); VSB_cat(vsb, "}\n"); (void) pthread_attr_destroy(attr); } #endif static void pan_argv(struct vsb *vsb) { int i; VSB_cat(pan_vsb, "argv = {\n"); VSB_indent(vsb, 2); for (i = 0; i < heritage.argc; i++) { VSB_printf(vsb, "[%d] = ", i); VSB_quote(vsb, heritage.argv[i], -1, VSB_QUOTE_CSTR); VSB_cat(vsb, ",\n"); } VSB_indent(vsb, -2); VSB_cat(vsb, "}\n"); } /*--------------------------------------------------------------------*/ static void __attribute__((__noreturn__)) pan_ic(const char *func, const char *file, int line, const char *cond, enum vas_e kind) { const char *q; struct req *req; struct busyobj *bo; struct worker *wrk; struct sigaction sa; int i, err = errno; if (pthread_getspecific(panic_key) != NULL) { VSB_cat(pan_vsb, "\n\nPANIC REENTRANCY\n\n"); abort(); } /* If we already panicing in another thread, do nothing */ do { i = pthread_mutex_trylock(&panicstr_mtx); if (i != 0) sleep (1); } while (i != 0); assert (VSB_len(pan_vsb) == 0); AZ(pthread_setspecific(panic_key, pan_vsb)); /* * should we trigger a SIGSEGV while handling a panic, our sigsegv * handler would hide the panic, so we need to reset the handler to * default */ memset(&sa, 0, sizeof sa); sa.sa_handler = SIG_DFL; (void)sigaction(SIGSEGV, &sa, NULL); /* Set SIGABRT back to default so the final abort() has the desired effect */ (void)sigaction(SIGABRT, &sa, NULL); switch (kind) { case VAS_WRONG: VSB_printf(pan_vsb, "Wrong turn at %s:%d:\n%s\n", file, line, cond); break; case VAS_VCL: VSB_printf(pan_vsb, "Panic from VCL:\n %s\n", cond); break; case VAS_MISSING: VSB_printf(pan_vsb, "Missing errorhandling code in %s(), %s line %d:\n" " Condition(%s) not true.\n", func, file, line, cond); break; case VAS_INCOMPLETE: VSB_printf(pan_vsb, "Incomplete code in %s(), %s line %d:\n", func, file, line); break; case VAS_ASSERT: default: VSB_printf(pan_vsb, "Assert error in %s(), %s line %d:\n" " Condition(%s) not true.\n", func, file, line, cond); break; } VSB_printf(pan_vsb, "version = %s, vrt api = %u.%u\n", VCS_String("V"), VRT_MAJOR_VERSION, VRT_MINOR_VERSION); VSB_printf(pan_vsb, "ident = %s,%s\n", heritage.ident, 
Waiter_GetName()); VSB_printf(pan_vsb, "now = %f (mono), %f (real)\n", VTIM_mono(), VTIM_real()); pan_backtrace(pan_vsb); if (err) VSB_printf(pan_vsb, "errno = %d (%s)\n", err, VAS_errtxt(err)); pan_argv(pan_vsb); VSB_printf(pan_vsb, "pthread.self = %p\n", TRUST_ME(pthread_self())); q = THR_GetName(); if (q != NULL) VSB_printf(pan_vsb, "pthread.name = (%s)\n", q); #ifdef HAVE_PTHREAD_GETATTR_NP pan_threadattr(pan_vsb); #endif if (!FEATURE(FEATURE_SHORT_PANIC)) { req = THR_GetRequest(); VSB_cat(pan_vsb, "thr."); pan_req(pan_vsb, req); if (req != NULL) VSL_Flush(req->vsl, 0); bo = THR_GetBusyobj(); VSB_cat(pan_vsb, "thr."); pan_busyobj(pan_vsb, bo); if (bo != NULL) VSL_Flush(bo->vsl, 0); wrk = THR_GetWorker(); VSB_cat(pan_vsb, "thr."); pan_wrk(pan_vsb, wrk); VMOD_Panic(pan_vsb); pan_pool(pan_vsb); } else { VSB_cat(pan_vsb, "Feature short panic suppressed details.\n"); } VSB_cat(pan_vsb, "\n"); VSB_putc(pan_vsb, '\0'); /* NUL termination */ v_gcov_flush(); /* * Do a little song and dance for static checkers which * are not smart enough to figure out that calling abort() * with a mutex held is OK and probably very intentional. */ if (pthread_getspecific(panic_key)) /* ie: always */ abort(); PTOK(pthread_mutex_unlock(&panicstr_mtx)); abort(); } /*--------------------------------------------------------------------*/ static void v_noreturn_ v_matchproto_(cli_func_t) ccf_panic(struct cli *cli, const char * const *av, void *priv) { (void)cli; (void)av; AZ(priv); AZ(strcmp("", "You asked for it")); /* NOTREACHED */ abort(); } /*--------------------------------------------------------------------*/ static struct cli_proto debug_cmds[] = { { CLICMD_DEBUG_PANIC_WORKER, "d", ccf_panic }, { NULL } }; /*--------------------------------------------------------------------*/ void PAN_Init(void) { PTOK(pthread_mutex_init(&panicstr_mtx, &mtxattr_errorcheck)); VAS_Fail_Func = pan_ic; pan_vsb = &pan_vsb_storage; AN(heritage.panic_str); AN(heritage.panic_str_len); AN(VSB_init(pan_vsb, heritage.panic_str, heritage.panic_str_len)); VSB_cat(pan_vsb, "This is a test\n"); AZ(VSB_finish(pan_vsb)); VSB_clear(pan_vsb); heritage.panic_str[0] = '\0'; CLI_AddFuncs(debug_cmds); } varnish-7.5.0/bin/varnishd/cache/cache_pool.c000066400000000000000000000163271457605730600211140ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * We maintain a number of worker thread pools, to spread lock contention. * * Pools can be added on the fly, as a means to mitigate lock contention, * but can only be removed again by a restart. (XXX: we could fix that) * */ #include "config.h" #include #include "cache_varnishd.h" #include "cache_pool.h" static pthread_t thr_pool_herder; static struct lock wstat_mtx; struct lock pool_mtx; static VTAILQ_HEAD(,pool) pools = VTAILQ_HEAD_INITIALIZER(pools); /*-------------------------------------------------------------------- * Summing of stats into global stats counters */ void Pool_Sumstat(const struct worker *wrk) { Lck_Lock(&wstat_mtx); VSC_main_Summ_wrk(VSC_C_main, wrk->stats); Lck_Unlock(&wstat_mtx); memset(wrk->stats, 0, sizeof *wrk->stats); } int Pool_TrySumstat(const struct worker *wrk) { if (Lck_Trylock(&wstat_mtx)) return (0); VSC_main_Summ_wrk(VSC_C_main, wrk->stats); Lck_Unlock(&wstat_mtx); memset(wrk->stats, 0, sizeof *wrk->stats); return (1); } /*-------------------------------------------------------------------- * Facility for scheduling a task on any convenient pool. */ int Pool_Task_Any(struct pool_task *task, enum task_prio prio) { struct pool *pp; Lck_Lock(&pool_mtx); pp = VTAILQ_FIRST(&pools); if (pp != NULL) { VTAILQ_REMOVE(&pools, pp, list); VTAILQ_INSERT_TAIL(&pools, pp, list); } Lck_Unlock(&pool_mtx); if (pp == NULL) return (-1); // NB: When we remove pools, is there a race here ? 
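	/*
	 * Editor's note (added comment, not in the original source): the
	 * VTAILQ_REMOVE/VTAILQ_INSERT_TAIL pair above rotates the chosen
	 * pool to the tail of the list, so successive Pool_Task_Any()
	 * calls spread tasks round-robin over the pools.  The NB above
	 * concerns pp being used after pool_mtx is dropped, which would
	 * matter if pools could be removed concurrently.
	 */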
return (Pool_Task(pp, task, prio)); } /*-------------------------------------------------------------------- * Helper function to update stats for purges under lock */ void Pool_PurgeStat(unsigned nobj) { Lck_Lock(&wstat_mtx); VSC_C_main->n_purges++; VSC_C_main->n_obj_purged += nobj; Lck_Unlock(&wstat_mtx); } /*-------------------------------------------------------------------- * Special function to summ stats */ void v_matchproto_(task_func_t) pool_stat_summ(struct worker *wrk, void *priv) { struct VSC_main_wrk *src; struct pool *pp; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); pp = wrk->pool; CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); AN(priv); src = priv; Lck_Lock(&wstat_mtx); VSC_main_Summ_wrk(VSC_C_main, src); Lck_Lock(&pp->mtx); VSC_main_Summ_pool(VSC_C_main, pp->stats); Lck_Unlock(&pp->mtx); memset(pp->stats, 0, sizeof pp->stats); Lck_Unlock(&wstat_mtx); memset(src, 0, sizeof *src); AZ(pp->b_stat); pp->b_stat = src; } /*-------------------------------------------------------------------- * Add a thread pool */ static struct pool * pool_mkpool(unsigned pool_no) { struct pool *pp; int i; ALLOC_OBJ(pp, POOL_MAGIC); if (pp == NULL) return (NULL); pp->a_stat = calloc(1, sizeof *pp->a_stat); AN(pp->a_stat); pp->b_stat = calloc(1, sizeof *pp->b_stat); AN(pp->b_stat); Lck_New(&pp->mtx, lck_perpool); VTAILQ_INIT(&pp->idle_queue); VTAILQ_INIT(&pp->poolsocks); for (i = 0; i < TASK_QUEUE_RESERVE; i++) VTAILQ_INIT(&pp->queues[i]); PTOK(pthread_cond_init(&pp->herder_cond, NULL)); PTOK(pthread_create(&pp->herder_thr, NULL, pool_herder, pp)); while (VTAILQ_EMPTY(&pp->idle_queue)) (void)usleep(10000); SES_NewPool(pp, pool_no); VCA_NewPool(pp); return (pp); } /*-------------------------------------------------------------------- * This thread adjusts the number of pools to match the parameter. * * NB: This is quite silly. The master should tell the child through * NB: CLI when parameters change and an appropriate call-out table * NB: be maintained for params which require action. 
*/ static void * v_matchproto_() pool_poolherder(void *priv) { unsigned nwq; struct pool *pp, *ppx; uint64_t u; void *rvp; THR_SetName("pool_poolherder"); THR_Init(); (void)priv; nwq = 0; while (1) { if (nwq < cache_param->wthread_pools) { pp = pool_mkpool(nwq); if (pp != NULL) { Lck_Lock(&pool_mtx); VTAILQ_INSERT_TAIL(&pools, pp, list); Lck_Unlock(&pool_mtx); VSC_C_main->pools++; nwq++; continue; } } else if (nwq > cache_param->wthread_pools && EXPERIMENT(EXPERIMENT_DROP_POOLS)) { Lck_Lock(&pool_mtx); pp = VTAILQ_FIRST(&pools); CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); VTAILQ_REMOVE(&pools, pp, list); VTAILQ_INSERT_TAIL(&pools, pp, list); if (!pp->die) nwq--; Lck_Unlock(&pool_mtx); if (!pp->die) { VSL(SLT_Debug, NO_VXID, "XXX Kill Pool %p", pp); pp->die = 1; VCA_DestroyPool(pp); PTOK(pthread_cond_signal(&pp->herder_cond)); } } (void)sleep(1); u = 0; ppx = NULL; Lck_Lock(&pool_mtx); VTAILQ_FOREACH(pp, &pools, list) { CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); if (pp->die && pp->nthr == 0) ppx = pp; u += pp->lqueue; } if (ppx != NULL) { VTAILQ_REMOVE(&pools, ppx, list); PTOK(pthread_join(ppx->herder_thr, &rvp)); PTOK(pthread_cond_destroy(&ppx->herder_cond)); free(ppx->a_stat); free(ppx->b_stat); SES_DestroyPool(ppx); Lck_Delete(&ppx->mtx); FREE_OBJ(ppx); VSC_C_main->pools--; } Lck_Unlock(&pool_mtx); VSC_C_main->thread_queue_len = u; } NEEDLESS(return (NULL)); } /*--------------------------------------------------------------------*/ void pan_pool(struct vsb *vsb) { struct pool *pp; VSB_cat(vsb, "pools = {\n"); VSB_indent(vsb, 2); VTAILQ_FOREACH(pp, &pools, list) { if (PAN_dump_struct(vsb, pp, POOL_MAGIC, "pool")) continue; VSB_printf(vsb, "nidle = %u,\n", pp->nidle); VSB_printf(vsb, "nthr = %u,\n", pp->nthr); VSB_printf(vsb, "lqueue = %u\n", pp->lqueue); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /*--------------------------------------------------------------------*/ void Pool_Init(void) { Lck_New(&wstat_mtx, lck_wstat); Lck_New(&pool_mtx, lck_wq); PTOK(pthread_create(&thr_pool_herder, NULL, pool_poolherder, NULL)); while (!VSC_C_main->pools) (void)usleep(10000); } varnish-7.5.0/bin/varnishd/cache/cache_pool.h000066400000000000000000000044171457605730600211160ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Private include file for the pool aware code. */ VTAILQ_HEAD(taskhead, pool_task); struct poolsock; struct pool { unsigned magic; #define POOL_MAGIC 0x606658fa VTAILQ_ENTRY(pool) list; VTAILQ_HEAD(,poolsock) poolsocks; int die; pthread_cond_t herder_cond; pthread_t herder_thr; struct lock mtx; unsigned nidle; struct taskhead idle_queue; struct taskhead queues[TASK_QUEUE_RESERVE]; unsigned nthr; unsigned lqueue; uintmax_t ndequeued; struct VSC_main_pool stats[1]; struct VSC_main_wrk *a_stat; struct VSC_main_wrk *b_stat; struct mempool *mpl_req; struct mempool *mpl_sess; struct waiter *waiter; }; void *pool_herder(void*); task_func_t pool_stat_summ; extern struct lock pool_mtx; void VCA_NewPool(struct pool *); void VCA_DestroyPool(struct pool *); varnish-7.5.0/bin/varnishd/cache/cache_range.c000066400000000000000000000200531457605730600212260ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include "config.h" #include "cache_varnishd.h" #include "cache_filter.h" #include "vct.h" #include /*--------------------------------------------------------------------*/ struct vrg_priv { unsigned magic; #define VRG_PRIV_MAGIC 0xb886e711 struct req *req; ssize_t range_low; ssize_t range_high; ssize_t range_off; }; static int v_matchproto_(vdp_fini_f) vrg_range_fini(struct vdp_ctx *vdc, void **priv) { struct vrg_priv *vrg_priv; CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); CAST_OBJ_NOTNULL(vrg_priv, *priv, VRG_PRIV_MAGIC); if (vrg_priv->req->resp_len >= 0 && vrg_priv->range_off < vrg_priv->range_high) { Req_Fail(vrg_priv->req, SC_RANGE_SHORT); vrg_priv->req->vdc->retval = -1; } *priv = NULL; /* struct on ws, no need to free */ return (0); } static int v_matchproto_(vdp_bytes_f) vrg_range_bytes(struct vdp_ctx *vdc, enum vdp_action act, void **priv, const void *ptr, ssize_t len) { int retval = 0; ssize_t l = 0; const char *p = ptr; struct vrg_priv *vrg_priv; CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); AN(priv); CAST_OBJ_NOTNULL(vrg_priv, *priv, VRG_PRIV_MAGIC); if (ptr != NULL) { l = vrg_priv->range_low - vrg_priv->range_off; if (l > 0) { if (l > len) l = len; vrg_priv->range_off += l; p += l; len -= l; } l = vmin(vrg_priv->range_high - vrg_priv->range_off, len); vrg_priv->range_off += len; if (vrg_priv->range_off >= vrg_priv->range_high) act = VDP_END; } if (l > 0) retval = VDP_bytes(vdc, act, p, l); else if (l == 0 && act > VDP_NULL) retval = VDP_bytes(vdc, act, p, 0); return (retval || act == VDP_END ? 1 : 0); } /*--------------------------------------------------------------------*/ static const char * vrg_dorange(struct req *req, void **priv) { ssize_t low, high; struct vrg_priv *vrg_priv; const char *err; err = http_GetRange(req->http, &low, &high, req->resp_len); if (err != NULL) return (err); if (low < 0 || high < 0) return (NULL); // Allow 200 response if (req->resp_len >= 0) { http_PrintfHeader(req->resp, "Content-Range: bytes %jd-%jd/%jd", (intmax_t)low, (intmax_t)high, (intmax_t)req->resp_len); req->resp_len = (intmax_t)(1 + high - low); } else http_PrintfHeader(req->resp, "Content-Range: bytes %jd-%jd/*", (intmax_t)low, (intmax_t)high); vrg_priv = WS_Alloc(req->ws, sizeof *vrg_priv); if (vrg_priv == NULL) return ("WS too small"); INIT_OBJ(vrg_priv, VRG_PRIV_MAGIC); vrg_priv->req = req; vrg_priv->range_off = 0; vrg_priv->range_low = low; vrg_priv->range_high = high + 1; *priv = vrg_priv; http_PutResponse(req->resp, "HTTP/1.1", 206, NULL); return (NULL); } /* * return 1 if range should be observed, based on if-range value * if-range can either be a date or an ETag [RFC7233 3.2 p8] */ static int vrg_ifrange(struct req *req) { const char *p, *e; vtim_real ims, lm, d; if (!http_GetHdr(req->http, H_If_Range, &p)) // rfc7233,l,455,456 return (1); /* strong validation needed */ if (p[0] == 'W' && p[1] == '/') // rfc7233,l,500,501 return (0); /* ETag */ if (p[0] == '"') { // rfc7233,l,512,514 if (!http_GetHdr(req->resp, H_ETag, &e)) return (0); if ((e[0] == 'W' && e[1] == '/')) // rfc7232,l,547,548 return (0); /* XXX: should we also have http_etag_cmp() ? 
*/ return (strcmp(p, e) == 0); // rfc7232,l,548,548 } /* assume date, strong check [RFC7232 2.2.2 p7] */ ims = VTIM_parse(p); if (!ims) // rfc7233,l,502,512 return (0); /* the response needs a Date */ // rfc7232,l,439,440 if (!http_GetHdr(req->resp, H_Date, &p)) return (0); d = VTIM_parse(p); if (!d) return (0); /* grab the Last Modified value */ if (!http_GetHdr(req->resp, H_Last_Modified, &p)) return (0); lm = VTIM_parse(p); if (!lm) return (0); /* Last Modified must be 60 seconds older than Date */ if (lm > d + 60) // rfc7232,l,442,443 return (0); if (lm != ims) // rfc7233,l,455,456 return (0); return (1); } static int v_matchproto_(vdp_init_f) vrg_range_init(VRT_CTX, struct vdp_ctx *vdc, void **priv, struct objcore *oc) { const char *err; struct req *req; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); (void)oc; req = vdc->req; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); if (!vrg_ifrange(req)) // rfc7233,l,455,456 return (1); err = vrg_dorange(req, priv); if (err == NULL) return (*priv == NULL ? 1 : 0); VSLb(vdc->vsl, SLT_Debug, "RANGE_FAIL %s", err); if (req->resp_len >= 0) http_PrintfHeader(req->resp, "Content-Range: bytes */%jd", (intmax_t)req->resp_len); http_PutResponse(req->resp, "HTTP/1.1", 416, NULL); /* * XXX: We ought to produce a body explaining things. * XXX: That really calls for us to hit vcl_synth{} */ req->resp_len = 0; return (1); } const struct vdp VDP_range = { .name = "range", .init = vrg_range_init, .bytes = vrg_range_bytes, .fini = vrg_range_fini, }; /*--------------------------------------------------------------------*/ int VRG_CheckBo(struct busyobj *bo) { ssize_t rlo, rhi, crlo, crhi, crlen, clen; const char *err; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); if (!cache_param->http_range_support) return (0); err = http_GetRange(bo->bereq0, &rlo, &rhi, -1); clen = http_GetContentLength(bo->beresp); crlen = http_GetContentRange(bo->beresp, &crlo, &crhi); if (err != NULL) { VSLb(bo->vsl, SLT_Error, "Invalid range header (%s)", err); return (-1); } if (crlen < -1) { VSLb(bo->vsl, SLT_Error, "Invalid content-range header"); return (-1); } if (clen < -1) { VSLb(bo->vsl, SLT_Error, "Invalid content-length header"); return (-1); } if (crlo < 0 && crhi < 0 && crlen < 0) { AZ(http_GetHdr(bo->beresp, H_Content_Range, NULL)); return (0); } if (rlo < 0 && rhi < 0) { VSLb(bo->vsl, SLT_Error, "Unexpected content-range header"); return (-1); } if (crlo < 0) { // Content-Range: bytes */123 assert(crhi < 0); assert(crlen > 0); if (http_GetStatus(bo->beresp) == 416) return (0); crlo = 0; crhi = crlen - 1; } #define RANGE_CHECK(val, op, crval, what) \ do { \ if (val >= 0 && !(val op crval)) { \ VSLb(bo->vsl, SLT_Error, \ "Expected " what " %zd, got %zd", \ crval, val); \ return (-1); \ } \ } while (0) crlen = (crhi - crlo) + 1; RANGE_CHECK(clen, ==, crlen, "content length"); /* NB: if the client didn't specify a low range the high range * was adjusted based on the resource length, and a high range * is allowed to be out of bounds so at this point there is * nothing left to check. */ if (rlo < 0) return (0); RANGE_CHECK(rlo, ==, crlo, "low range"); RANGE_CHECK(rhi, >=, crhi, "minimum high range"); #undef RANGE_CHECK return (0); } varnish-7.5.0/bin/varnishd/cache/cache_req.c000066400000000000000000000177601457605730600207340ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. 
* * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Request management * */ #include "config.h" #include #include #include "cache_varnishd.h" #include "cache_filter.h" #include "cache_pool.h" #include "cache_transport.h" #include "common/heritage.h" #include "vtim.h" void Req_AcctLogCharge(struct VSC_main_wrk *ds, struct req *req) { struct acct_req *a; AN(ds); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); a = &req->acct; if (!IS_NO_VXID(req->vsl->wid) && !(req->res_mode & RES_PIPE)) { VSLb(req->vsl, SLT_ReqAcct, "%ju %ju %ju %ju %ju %ju", (uintmax_t)a->req_hdrbytes, (uintmax_t)a->req_bodybytes, (uintmax_t)(a->req_hdrbytes + a->req_bodybytes), (uintmax_t)a->resp_hdrbytes, (uintmax_t)a->resp_bodybytes, (uintmax_t)(a->resp_hdrbytes + a->resp_bodybytes)); } if (IS_TOPREQ(req)) { #define ACCT(foo) ds->s_##foo += a->foo; #include "tbl/acct_fields_req.h" } memset(a, 0, sizeof *a); } void Req_LogHit(struct worker *wrk, struct req *req, struct objcore *oc, intmax_t fetch_progress) { const char *clen, *sep; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); if (fetch_progress >= 0) { clen = HTTP_GetHdrPack(wrk, oc, H_Content_Length); if (clen == NULL) clen = sep = ""; else sep = " "; VSLb(req->vsl, SLT_Hit, "%ju %.6f %.6f %.6f %jd%s%s", VXID(ObjGetXID(wrk, oc)), EXP_Dttl(req, oc), oc->grace, oc->keep, fetch_progress, sep, clen); } else { VSLb(req->vsl, SLT_Hit, "%ju %.6f %.6f %.6f", VXID(ObjGetXID(wrk, oc)), EXP_Dttl(req, oc), oc->grace, oc->keep); } } const char * Req_LogStart(const struct worker *wrk, struct req *req) { const char *ci, *cp, *endpname; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->sp, SESS_MAGIC); ci = SES_Get_String_Attr(req->sp, SA_CLIENT_IP); cp = SES_Get_String_Attr(req->sp, SA_CLIENT_PORT); CHECK_OBJ_NOTNULL(req->sp->listen_sock, LISTEN_SOCK_MAGIC); endpname = req->sp->listen_sock->name; AN(endpname); VSLb(req->vsl, SLT_ReqStart, "%s %s %s", ci, cp, endpname); return (ci); } /*-------------------------------------------------------------------- * Alloc/Free a request */ struct req * Req_New(struct sess *sp) { struct pool *pp; struct req *req; uint16_t nhttp; unsigned sz, hl; char *p, *e; CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); pp = 
sp->pool; CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); req = MPL_Get(pp->mpl_req, &sz); AN(req); req->magic = REQ_MAGIC; req->sp = sp; e = (char*)req + sz; p = (char*)(req + 1); p = (void*)PRNDUP(p); assert(p < e); nhttp = (uint16_t)cache_param->http_max_hdr; hl = HTTP_estimate(nhttp); req->http = HTTP_create(p, nhttp, hl); p += hl; p = (void*)PRNDUP(p); assert(p < e); req->http0 = HTTP_create(p, nhttp, hl); p += hl; p = (void*)PRNDUP(p); assert(p < e); req->resp = HTTP_create(p, nhttp, hl); p += hl; p = (void*)PRNDUP(p); assert(p < e); sz = cache_param->vsl_buffer; VSL_Setup(req->vsl, p, sz); p += sz; p = (void*)PRNDUP(p); req->vfc = (void*)p; INIT_OBJ(req->vfc, VFP_CTX_MAGIC); p = (void*)PRNDUP(p + sizeof(*req->vfc)); req->vdc = (void*)p; memset(req->vdc, 0, sizeof *req->vdc); p = (void*)PRNDUP(p + sizeof(*req->vdc)); req->htc = (void*)p; INIT_OBJ(req->htc, HTTP_CONN_MAGIC); req->htc->doclose = SC_NULL; p = (void*)PRNDUP(p + sizeof(*req->htc)); req->top = (void*)p; INIT_OBJ(req->top, REQTOP_MAGIC); req->top->topreq = req; p = (void*)PRNDUP(p + sizeof(*req->top)); assert(p < e); WS_Init(req->ws, "req", p, e - p); req->t_first = NAN; req->t_prev = NAN; req->t_req = NAN; req->req_step = R_STP_TRANSPORT; req->doclose = SC_NULL; return (req); } void Req_Release(struct req *req) { struct sess *sp; struct pool *pp; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); /* Make sure the request counters have all been zeroed */ #define ACCT(foo) \ AZ(req->acct.foo); #include "tbl/acct_fields_req.h" AZ(req->vcl); if (!IS_NO_VXID(req->vsl->wid)) VSL_End(req->vsl); #ifdef ENABLE_WORKSPACE_EMULATOR WS_Rollback(req->ws, 0); #endif TAKE_OBJ_NOTNULL(sp, &req->sp, SESS_MAGIC); pp = sp->pool; CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); MPL_AssertSane(req); VSL_Flush(req->vsl, 0); MPL_Free(pp->mpl_req, req); } /*---------------------------------------------------------------------- * TODO: * - check for code duplication with cnt_recv_prep * - re-check if complete * - XXX should PRIV_TOP use vcl0? * - XXX PRIV_TOP does not get rolled back, should it for !IS_TOPREQ ? */ void Req_Rollback(VRT_CTX) { struct req *req; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); req = ctx->req; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); if (IS_TOPREQ(req)) VCL_TaskLeave(ctx, req->top->privs); VCL_TaskLeave(ctx, req->privs); VCL_TaskEnter(req->privs); if (IS_TOPREQ(req)) VCL_TaskEnter(req->top->privs); HTTP_Clone(req->http, req->http0); req->vdp_filter_list = NULL; req->vcf = NULL; if (WS_Overflowed(req->ws)) req->wrk->stats->ws_client_overflow++; AN(req->ws_req); WS_Rollback(req->ws, req->ws_req); } /*---------------------------------------------------------------------- * TODO: remove code duplication with cnt_recv_prep */ void Req_Cleanup(struct sess *sp, struct worker *wrk, struct req *req) { CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); assert(sp == req->sp); if (IS_TOPREQ(req)) AZ(req->top->vcl0); AZ(req->director_hint); req->restarts = 0; if (req->vcl != NULL) VCL_Recache(wrk, &req->vcl); /* Charge and log byte counters */ if (!IS_NO_VXID(req->vsl->wid)) { Req_AcctLogCharge(wrk->stats, req); if (!IS_SAME_VXID(req->vsl->wid, sp->vxid)) VSL_End(req->vsl); else req->vsl->wid = NO_VXID; /* ending an h2 stream 0 */ } if (!isnan(req->t_prev) && req->t_prev > 0. 
&& req->t_prev > sp->t_idle) sp->t_idle = req->t_prev; else sp->t_idle = W_TIM_real(wrk); req->t_first = NAN; req->t_prev = NAN; req->t_req = NAN; req->req_body_status = NULL; req->hash_always_miss = 0; req->hash_ignore_busy = 0; req->hash_ignore_vary = 0; req->esi_level = 0; req->is_hit = 0; req->req_step = R_STP_TRANSPORT; req->vcf = NULL; req->doclose = SC_NULL; req->htc->doclose = SC_NULL; if (WS_Overflowed(req->ws)) wrk->stats->ws_client_overflow++; wrk->seen_methods = 0; VDP_Fini(req->vdc); } /*---------------------------------------------------------------------- */ void v_matchproto_(vtr_req_fail_f) Req_Fail(struct req *req, stream_close_t reason) { CHECK_OBJ_NOTNULL(req, REQ_MAGIC); AN(req->transport->req_fail); req->transport->req_fail(req, reason); } varnish-7.5.0/bin/varnishd/cache/cache_req_body.c000066400000000000000000000210421457605730600217350ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* */ #include "config.h" #include #include "cache_varnishd.h" #include "cache_filter.h" #include "cache_objhead.h" #include "cache_transport.h" #include "vtim.h" #include "storage/storage.h" /*---------------------------------------------------------------------- * Pull the req.body in via/into a objcore * * This can be called only once per request * */ static ssize_t vrb_pull(struct req *req, ssize_t maxsize, objiterate_f *func, void *priv) { ssize_t l, r = 0, yet; struct vrt_ctx ctx[1]; struct vfp_ctx *vfc; uint8_t *ptr; enum vfp_status vfps = VFP_ERROR; const struct stevedore *stv; ssize_t req_bodybytes = 0; unsigned flush = OBJ_ITER_FLUSH; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->htc, HTTP_CONN_MAGIC); CHECK_OBJ_NOTNULL(req->vfc, VFP_CTX_MAGIC); vfc = req->vfc; req->body_oc = HSH_Private(req->wrk); AN(req->body_oc); if (req->storage != NULL) stv = req->storage; else stv = stv_transient; req->storage = NULL; if (STV_NewObject(req->wrk, req->body_oc, stv, 0) == 0) { req->req_body_status = BS_ERROR; HSH_DerefBoc(req->wrk, req->body_oc); AZ(HSH_DerefObjCore(req->wrk, &req->body_oc, 0)); (void)VFP_Error(vfc, "Object allocation failed:" " Ran out of space in %s", stv->vclname); return (-1); } vfc->oc = req->body_oc; INIT_OBJ(ctx, VRT_CTX_MAGIC); VCL_Req2Ctx(ctx, req); if (VFP_Open(ctx, vfc) < 0) { req->req_body_status = BS_ERROR; HSH_DerefBoc(req->wrk, req->body_oc); AZ(HSH_DerefObjCore(req->wrk, &req->body_oc, 0)); return (-1); } AN(req->htc); yet = req->htc->content_length; if (yet != 0 && req->want100cont) { req->want100cont = 0; (void)req->transport->minimal_response(req, 100); } yet = vmax_t(ssize_t, yet, 0); do { AZ(vfc->failed); if (maxsize >= 0 && req_bodybytes > maxsize) { (void)VFP_Error(vfc, "Request body too big to cache"); break; } /* NB: only attempt a full allocation when caching. */ l = maxsize > 0 ? yet : 0; if (VFP_GetStorage(vfc, &l, &ptr) != VFP_OK) break; AZ(vfc->failed); AN(ptr); AN(l); vfps = VFP_Suck(vfc, ptr, &l); if (l > 0 && vfps != VFP_ERROR) { req_bodybytes += l; if (yet >= l) yet -= l; else if (yet > 0) yet = 0; if (func != NULL) { if (vfps == VFP_END) flush |= OBJ_ITER_END; r = func(priv, flush, ptr, l); if (r) break; } else { ObjExtend(req->wrk, req->body_oc, l, vfps == VFP_END ? 1 : 0); } } } while (vfps == VFP_OK); req->acct.req_bodybytes += VFP_Close(vfc); VSLb_ts_req(req, "ReqBody", VTIM_real()); if (func != NULL) { HSH_DerefBoc(req->wrk, req->body_oc); AZ(HSH_DerefObjCore(req->wrk, &req->body_oc, 0)); if (vfps == VFP_END && r == 0 && (flush & OBJ_ITER_END) == 0) r = func(priv, flush | OBJ_ITER_END, NULL, 0); if (vfps != VFP_END) { req->req_body_status = BS_ERROR; if (r == 0) r = -1; } return (r); } AZ(ObjSetU64(req->wrk, req->body_oc, OA_LEN, req_bodybytes)); HSH_DerefBoc(req->wrk, req->body_oc); if (vfps != VFP_END) { req->req_body_status = BS_ERROR; AZ(HSH_DerefObjCore(req->wrk, &req->body_oc, 0)); return (-1); } assert(req_bodybytes >= 0); if (req_bodybytes != req->htc->content_length) { /* We must update also the "pristine" req.* copy */ http_Unset(req->http0, H_Content_Length); http_Unset(req->http0, H_Transfer_Encoding); http_PrintfHeader(req->http0, "Content-Length: %ju", (uintmax_t)req_bodybytes); http_Unset(req->http, H_Content_Length); http_Unset(req->http, H_Transfer_Encoding); http_PrintfHeader(req->http, "Content-Length: %ju", (uintmax_t)req_bodybytes); } req->req_body_status = BS_CACHED; return (req_bodybytes); } /*---------------------------------------------------------------------- * Iterate over the req.body. 
* * This can be done exactly once if uncached, and multiple times if the * req.body is cached. * * return length or -1 on error */ ssize_t VRB_Iterate(struct worker *wrk, struct vsl_log *vsl, struct req *req, objiterate_f *func, void *priv) { int i; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); AN(func); if (req->req_body_status == BS_CACHED) { AN(req->body_oc); if (ObjIterate(wrk, req->body_oc, priv, func, 0)) return (-1); return (0); } if (req->req_body_status == BS_NONE) return (0); if (req->req_body_status == BS_TAKEN) { VSLb(vsl, SLT_VCL_Error, "Uncached req.body can only be consumed once."); return (-1); } if (req->req_body_status == BS_ERROR) { VSLb(vsl, SLT_FetchError, "Had failed reading req.body before."); return (-1); } Lck_Lock(&req->sp->mtx); if (req->req_body_status->avail > 0) { req->req_body_status = BS_TAKEN; i = 0; } else i = -1; Lck_Unlock(&req->sp->mtx); if (i) { VSLb(vsl, SLT_VCL_Error, "Multiple attempts to access non-cached req.body"); return (i); } return (vrb_pull(req, -1, func, priv)); } /*---------------------------------------------------------------------- * VRB_Ignore() is a dedicated function, because we might * be able to disuade or terminate its transmission in some protocols. * * For HTTP1, we do nothing if we are going to close the connection anyway or * just iterate it into oblivion. */ static int v_matchproto_(objiterate_f) httpq_req_body_discard(void *priv, unsigned flush, const void *ptr, ssize_t len) { (void)priv; (void)flush; (void)ptr; (void)len; return (0); } int VRB_Ignore(struct req *req) { CHECK_OBJ_NOTNULL(req, REQ_MAGIC); if (req->doclose != SC_NULL) return (0); if (req->req_body_status->avail > 0) (void)VRB_Iterate(req->wrk, req->vsl, req, httpq_req_body_discard, NULL); if (req->req_body_status == BS_ERROR) req->doclose = SC_RX_BODY; return (0); } /*---------------------------------------------------------------------- */ void VRB_Free(struct req *req) { int r; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); if (req->body_oc == NULL) return; r = HSH_DerefObjCore(req->wrk, &req->body_oc, 0); // each busyobj may have gained a reference assert (r >= 0); assert ((unsigned)r <= req->restarts + 1); } /*---------------------------------------------------------------------- * Cache the req.body if it is smaller than the given size * * This function must be called before any backend fetches are kicked * off to prevent parallelism. */ ssize_t VRB_Cache(struct req *req, ssize_t maxsize) { uint64_t u; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); assert (req->req_step == R_STP_RECV); assert(maxsize >= 0); /* * We only allow caching to happen the first time through vcl_recv{} * where we know we will have no competition or conflicts for the * updates to req.http.* etc. */ if (req->restarts > 0 && req->req_body_status != BS_CACHED) { VSLb(req->vsl, SLT_VCL_Error, "req.body must be cached before restarts"); return (-1); } if (req->req_body_status == BS_CACHED) { AZ(ObjGetU64(req->wrk, req->body_oc, OA_LEN, &u)); return (u); } if (req->req_body_status->avail <= 0) return (req->req_body_status->avail); if (req->htc->content_length > maxsize) { req->req_body_status = BS_ERROR; (void)VFP_Error(req->vfc, "Request body too big to cache"); return (-1); } return (vrb_pull(req, maxsize, NULL, NULL)); } varnish-7.5.0/bin/varnishd/cache/cache_req_fsm.c000066400000000000000000000761501457605730600215770ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2017 Varnish Software AS * All rights reserved. 
* * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * This file contains the request-handling state engine, which is intended to * (over time) be(come) protocol agnostic. * We already use this now with ESI:includes, which are for all relevant * purposes a different "protocol" * * A special complication is the fact that we can suspend processing of * a request when hash-lookup finds a busy objhdr. * */ #include "config.h" #include "cache_varnishd.h" #include "cache_filter.h" #include "cache_objhead.h" #include "cache_transport.h" #include "vcc_interface.h" #include "http1/cache_http1.h" #include "storage/storage.h" #include "vcl.h" #include "vct.h" #include "vsha256.h" #include "vtim.h" #define REQ_STEPS \ REQ_STEP(transport, TRANSPORT, ) \ REQ_STEP(restart, RESTART, static) \ REQ_STEP(recv, RECV, ) \ REQ_STEP(pipe, PIPE, static) \ REQ_STEP(pass, PASS, static) \ REQ_STEP(lookup, LOOKUP, static) \ REQ_STEP(purge, PURGE, static) \ REQ_STEP(miss, MISS, static) \ REQ_STEP(fetch, FETCH, static) \ REQ_STEP(deliver, DELIVER, static) \ REQ_STEP(vclfail, VCLFAIL, static) \ REQ_STEP(synth, SYNTH, static) \ REQ_STEP(transmit, TRANSMIT, static) #define REQ_STEP(l, U, priv) \ static req_state_f cnt_##l; \ priv const struct req_step R_STP_##U[1] = {{ \ .name = "Req Step " #l, \ .func = cnt_##l, \ }}; REQ_STEPS #undef REQ_STEP /*-------------------------------------------------------------------- * Handle "Expect:" and "Connection:" on incoming request */ static enum req_fsm_nxt v_matchproto_(req_state_f) cnt_transport(struct worker *wrk, struct req *req) { const char *p; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->http, HTTP_MAGIC); CHECK_OBJ_NOTNULL(req->transport, TRANSPORT_MAGIC); AN(req->req_body_status); if (http_GetHdr(req->http, H_Expect, &p)) { if (!http_expect_eq(p, 100-continue)) { req->doclose = SC_RX_JUNK; (void)req->transport->minimal_response(req, 417); wrk->stats->client_req_417++; return (REQ_FSM_DONE); } if (req->http->protover >= 11 && req->htc->pipeline_b == NULL) // XXX: HTTP1 vs 2 ? 
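			/*
			 * The 100-continue is not sent from here; we only note
			 * that the client expects one.  cnt_recv() sends it
			 * after vcl_recv{} unless late100cont was requested,
			 * and vrb_pull() sends it at the latest, right before
			 * the request body is read.
			 */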
req->want100cont = 1; http_Unset(req->http, H_Expect); } AZ(req->err_code); req->doclose = http_DoConnection(req->http, SC_REQ_CLOSE); if (req->doclose == SC_RX_BAD) { wrk->stats->client_req_400++; (void)req->transport->minimal_response(req, 400); return (REQ_FSM_DONE); } if (req->req_body_status->avail == 1) { AN(req->transport->req_body != NULL); VFP_Setup(req->vfc, wrk); req->vfc->resp = req->http; // XXX req->transport->req_body(req); } req->ws_req = WS_Snapshot(req->ws); HTTP_Clone(req->http0, req->http); // For ESI & restart req->req_step = R_STP_RECV; return (REQ_FSM_MORE); } /*-------------------------------------------------------------------- * Deliver an object to client */ int Resp_Setup_Deliver(struct req *req) { struct http *h; struct objcore *oc; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); oc = req->objcore; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); h = req->resp; HTTP_Setup(h, req->ws, req->vsl, SLT_RespMethod); if (HTTP_Decode(h, ObjGetAttr(req->wrk, oc, OA_HEADERS, NULL))) return (-1); http_ForceField(h, HTTP_HDR_PROTO, "HTTP/1.1"); if (req->is_hit) http_PrintfHeader(h, "X-Varnish: %ju %ju", VXID(req->vsl->wid), VXID(ObjGetXID(req->wrk, oc))); else http_PrintfHeader(h, "X-Varnish: %ju", VXID(req->vsl->wid)); /* * We base Age calculation upon the last timestamp taken during client * request processing. This gives some inaccuracy, but since Age is only * full second resolution that shouldn't matter. (Last request timestamp * could be a Start timestamp taken before the object entered into cache * leading to negative age. Truncate to zero in that case). */ http_PrintfHeader(h, "Age: %.0f", floor(fmax(0., req->t_prev - oc->t_origin))); http_AppendHeader(h, H_Via, http_ViaHeader()); if (cache_param->http_gzip_support && ObjCheckFlag(req->wrk, oc, OF_GZIPED) && !RFC2616_Req_Gzip(req->http)) RFC2616_Weaken_Etag(h); return (0); } void Resp_Setup_Synth(struct req *req) { struct http *h; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); h = req->resp; HTTP_Setup(h, req->ws, req->vsl, SLT_RespMethod); AZ(req->objcore); http_PutResponse(h, "HTTP/1.1", req->err_code, req->err_reason); http_TimeHeader(h, "Date: ", W_TIM_real(req->wrk)); http_SetHeader(h, "Server: Varnish"); http_PrintfHeader(h, "X-Varnish: %ju", VXID(req->vsl->wid)); /* * For late 100-continue, we suggest to VCL to close the connection to * neither send a 100-continue nor drain-read the request. 
But VCL has * the option to veto by removing Connection: close */ if (req->want100cont) http_SetHeader(h, "Connection: close"); } static enum req_fsm_nxt v_matchproto_(req_state_f) cnt_deliver(struct worker *wrk, struct req *req) { unsigned status; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->objcore, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(req->objcore->objhead, OBJHEAD_MAGIC); AZ(req->stale_oc); AN(req->vcl); assert(req->objcore->refcnt > 0); ObjTouch(req->wrk, req->objcore, req->t_prev); if (Resp_Setup_Deliver(req)) { (void)HSH_DerefObjCore(wrk, &req->objcore, HSH_RUSH_POLICY); req->err_code = 500; req->req_step = R_STP_SYNTH; return (REQ_FSM_MORE); } status = http_GetStatus(req->resp); if (cache_param->http_range_support && status == 200 && !(req->objcore->flags & OC_F_PRIVATE)) http_ForceHeader(req->resp, H_Accept_Ranges, "bytes"); req->t_resp = W_TIM_real(wrk); VCL_deliver_method(req->vcl, wrk, req, NULL, NULL); assert(req->restarts <= cache_param->max_restarts); if (wrk->vpi->handling != VCL_RET_DELIVER) { HSH_Cancel(wrk, req->objcore, NULL); (void)HSH_DerefObjCore(wrk, &req->objcore, HSH_RUSH_POLICY); http_Teardown(req->resp); switch (wrk->vpi->handling) { case VCL_RET_RESTART: req->req_step = R_STP_RESTART; break; case VCL_RET_FAIL: req->req_step = R_STP_VCLFAIL; break; case VCL_RET_SYNTH: req->req_step = R_STP_SYNTH; break; default: WRONG("Illegal return from vcl_deliver{}"); } return (REQ_FSM_MORE); } VSLb_ts_req(req, "Process", W_TIM_real(wrk)); assert(wrk->vpi->handling == VCL_RET_DELIVER); if (IS_TOPREQ(req) && RFC2616_Do_Cond(req)) http_PutResponse(req->resp, "HTTP/1.1", 304, NULL); req->req_step = R_STP_TRANSMIT; return (REQ_FSM_MORE); } /*-------------------------------------------------------------------- * VCL failed, die horribly */ static enum req_fsm_nxt v_matchproto_(req_state_f) cnt_vclfail(struct worker *wrk, struct req *req) { struct vrt_ctx ctx[1]; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); AZ(req->objcore); AZ(req->stale_oc); INIT_OBJ(ctx, VRT_CTX_MAGIC); VCL_Req2Ctx(ctx, req); Req_Rollback(ctx); if (req->req_reset) { req->err_code = 408; req->err_reason = "Client disconnected"; } else { req->err_code = 503; req->err_reason = "VCL failed"; } req->req_step = R_STP_SYNTH; req->doclose = SC_VCL_FAILURE; req->vdp_filter_list = NULL; return (REQ_FSM_MORE); } /*-------------------------------------------------------------------- * Emit a synthetic response */ static enum req_fsm_nxt v_matchproto_(req_state_f) cnt_synth(struct worker *wrk, struct req *req) { struct vsb *synth_body; ssize_t sz, szl; uint16_t status; uint8_t *ptr; const char *body; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); AZ(req->objcore); AZ(req->stale_oc); wrk->stats->s_synth++; if (req->err_code < 100) req->err_code = 501; Resp_Setup_Synth(req); req->vdp_filter_list = NULL; synth_body = VSB_new_auto(); AN(synth_body); req->t_resp = W_TIM_real(wrk); VCL_synth_method(req->vcl, wrk, req, NULL, synth_body); AZ(VSB_finish(synth_body)); VSLb_ts_req(req, "Process", W_TIM_real(wrk)); while (wrk->vpi->handling == VCL_RET_FAIL) { if (req->esi_level > 0) { wrk->vpi->handling = VCL_RET_DELIVER; break; } VSB_destroy(&synth_body); (void)VRB_Ignore(req); status = req->req_reset ? 408 : 500; (void)req->transport->minimal_response(req, status); req->doclose = SC_VCL_FAILURE; // XXX: Not necessary any more ? 
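		/* vcl_synth{} itself failed: a minimal 500 (or 408 if the
		 * client already reset the request) has been sent above, so
		 * just log the Resp timestamp, tear down the response and
		 * end the task. */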
VSLb_ts_req(req, "Resp", W_TIM_real(wrk)); http_Teardown(req->resp); return (REQ_FSM_DONE); } if (wrk->vpi->handling == VCL_RET_RESTART && req->restarts > cache_param->max_restarts) wrk->vpi->handling = VCL_RET_DELIVER; if (wrk->vpi->handling == VCL_RET_RESTART) { /* * XXX: Should we reset req->doclose = SC_VCL_FAILURE * XXX: If so, to what ? */ HTTP_Setup(req->resp, req->ws, req->vsl, SLT_RespMethod); VSB_destroy(&synth_body); req->req_step = R_STP_RESTART; return (REQ_FSM_MORE); } assert(wrk->vpi->handling == VCL_RET_DELIVER); http_Unset(req->resp, H_Content_Length); http_PrintfHeader(req->resp, "Content-Length: %zd", VSB_len(synth_body)); if (req->doclose == SC_NULL && http_HdrIs(req->resp, H_Connection, "close")) req->doclose = SC_RESP_CLOSE; /* Discard any lingering request body before delivery */ (void)VRB_Ignore(req); req->objcore = HSH_Private(wrk); CHECK_OBJ_NOTNULL(req->objcore, OBJCORE_MAGIC); szl = -1; if (STV_NewObject(wrk, req->objcore, stv_transient, 0)) { body = VSB_data(synth_body); szl = VSB_len(synth_body); assert(szl >= 0); while (szl > 0) { sz = szl; if (! ObjGetSpace(wrk, req->objcore, &sz, &ptr)) { szl = -1; break; } if (sz > szl) sz = szl; szl -= sz; memcpy(ptr, body, sz); ObjExtend(wrk, req->objcore, sz, szl == 0 ? 1 : 0); body += sz; } } if (szl >= 0) AZ(ObjSetU64(wrk, req->objcore, OA_LEN, VSB_len(synth_body))); HSH_DerefBoc(wrk, req->objcore); VSB_destroy(&synth_body); if (szl < 0) { VSLb(req->vsl, SLT_Error, "Could not get storage"); req->doclose = SC_OVERLOAD; VSLb_ts_req(req, "Resp", W_TIM_real(wrk)); (void)HSH_DerefObjCore(wrk, &req->objcore, 1); http_Teardown(req->resp); return (REQ_FSM_DONE); } req->req_step = R_STP_TRANSMIT; return (REQ_FSM_MORE); } /*-------------------------------------------------------------------- * The mechanics of sending a response (from deliver or synth) */ static enum req_fsm_nxt v_matchproto_(req_state_f) cnt_transmit(struct worker *wrk, struct req *req) { struct boc *boc; uint16_t status; int sendbody, head; intmax_t clval; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->transport, TRANSPORT_MAGIC); CHECK_OBJ_NOTNULL(req->objcore, OBJCORE_MAGIC); AZ(req->stale_oc); AZ(req->res_mode); /* Grab a ref to the bo if there is one (=streaming) */ boc = HSH_RefBoc(req->objcore); if (boc && boc->state < BOS_STREAM) ObjWaitState(req->objcore, BOS_STREAM); clval = http_GetContentLength(req->resp); /* RFC 7230, 3.3.3 */ status = http_GetStatus(req->resp); head = http_method_eq(req->http0->hd[HTTP_HDR_METHOD].b, HEAD); if (boc != NULL || (req->objcore->flags & (OC_F_FAILED))) req->resp_len = clval; else req->resp_len = ObjGetLen(req->wrk, req->objcore); if (head || status < 200 || status == 204 || status == 304) { // rfc7230,l,1748,1752 sendbody = 0; } else { sendbody = 1; } VDP_Init(req->vdc, req->wrk, req->vsl, req); if (req->vdp_filter_list == NULL) req->vdp_filter_list = resp_Get_Filter_List(req); if (req->vdp_filter_list == NULL || VCL_StackVDP(req, req->vcl, req->vdp_filter_list)) { VSLb(req->vsl, SLT_Error, "Failure to push processors"); req->doclose = SC_OVERLOAD; req->acct.resp_bodybytes += VDP_Close(req->vdc, req->objcore, boc); } else { if (status < 200 || status == 204) { // rfc7230,l,1691,1695 http_Unset(req->resp, H_Content_Length); } else if (status == 304) { // rfc7230,l,1675,1677 http_Unset(req->resp, H_Content_Length); } else if (clval >= 0 && clval == req->resp_len) { /* Reuse C-L header */ } else if (head && req->objcore->flags & OC_F_HFM) { /* * Don't touch C-L header 
(debatable) * * The only way to do it correctly would be to GET * to the backend, and discard the body once the * filters have had a chance to chew on it, but that * would negate the "pass for huge objects" use case. */ } else { http_Unset(req->resp, H_Content_Length); if (req->resp_len >= 0) http_PrintfHeader(req->resp, "Content-Length: %jd", req->resp_len); } if (req->resp_len == 0) sendbody = 0; req->transport->deliver(req, boc, sendbody); } VSLb_ts_req(req, "Resp", W_TIM_real(wrk)); if (req->doclose == SC_NULL && (req->objcore->flags & OC_F_FAILED)) { /* The object we delivered failed due to a streaming error. * Fail the request. */ req->doclose = SC_TX_ERROR; } if (boc != NULL) HSH_DerefBoc(wrk, req->objcore); (void)HSH_DerefObjCore(wrk, &req->objcore, HSH_RUSH_POLICY); http_Teardown(req->resp); req->vdp_filter_list = NULL; req->res_mode = 0; return (REQ_FSM_DONE); } /*-------------------------------------------------------------------- * Initiated a fetch (pass/miss) which we intend to deliver */ static enum req_fsm_nxt v_matchproto_(req_state_f) cnt_fetch(struct worker *wrk, struct req *req) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->objcore, OBJCORE_MAGIC); AZ(req->stale_oc); wrk->stats->s_fetch++; (void)VRB_Ignore(req); if (req->objcore->flags & OC_F_FAILED) { req->err_code = 503; req->req_step = R_STP_SYNTH; (void)HSH_DerefObjCore(wrk, &req->objcore, 1); AZ(req->objcore); return (REQ_FSM_MORE); } req->req_step = R_STP_DELIVER; return (REQ_FSM_MORE); } /*-------------------------------------------------------------------- * Attempt to lookup objhdr from hash. We disembark and reenter * this state if we get suspended on a busy objhdr. */ static enum req_fsm_nxt v_matchproto_(req_state_f) cnt_lookup(struct worker *wrk, struct req *req) { struct objcore *oc, *busy; enum lookup_e lr; int had_objhead = 0; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); AZ(req->objcore); AZ(req->stale_oc); AN(req->vcl); VRY_Prep(req); AZ(req->objcore); if (req->hash_objhead) had_objhead = 1; wrk->strangelove = 0; lr = HSH_Lookup(req, &oc, &busy); if (lr == HSH_BUSY) { /* * We lost the session to a busy object, disembark the * worker thread. We return to STP_LOOKUP when the busy * object has been unbusied, and still have the objhead * around to restart the lookup with. 
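		 * (While parked, the objhead reference is presumably what
		 * req->hash_objhead carries, which is also why a Waitinglist
		 * timestamp gets logged once the lookup resumes.)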
*/ return (REQ_FSM_DISEMBARK); } assert(wrk->strangelove >= 0); if ((unsigned)wrk->strangelove >= cache_param->vary_notice) VSLb(req->vsl, SLT_Notice, "vsl: High number of variants (%d)", wrk->strangelove); if (had_objhead) VSLb_ts_req(req, "Waitinglist", W_TIM_real(wrk)); if (req->vcf != NULL) { (void)req->vcf->func(req, NULL, NULL, 2); req->vcf = NULL; } if (busy == NULL) { VRY_Finish(req, DISCARD); } else { AN(busy->flags & OC_F_BUSY); VRY_Finish(req, KEEP); } AZ(req->objcore); if (lr == HSH_MISS || lr == HSH_HITMISS) { AN(busy); AN(busy->flags & OC_F_BUSY); req->objcore = busy; req->stale_oc = oc; req->req_step = R_STP_MISS; if (lr == HSH_HITMISS) req->is_hitmiss = 1; return (REQ_FSM_MORE); } if (lr == HSH_HITPASS) { AZ(busy); AZ(oc); req->req_step = R_STP_PASS; req->is_hitpass = 1; return (REQ_FSM_MORE); } assert(lr == HSH_HIT || lr == HSH_GRACE); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AZ(oc->flags & OC_F_BUSY); req->objcore = oc; AZ(oc->flags & OC_F_HFM); VCL_hit_method(req->vcl, wrk, req, NULL, NULL); switch (wrk->vpi->handling) { case VCL_RET_DELIVER: if (busy != NULL) { AZ(oc->flags & OC_F_HFM); CHECK_OBJ_NOTNULL(busy->boc, BOC_MAGIC); // XXX: shouldn't we go to miss? VBF_Fetch(wrk, req, busy, oc, VBF_BACKGROUND); wrk->stats->s_fetch++; wrk->stats->s_bgfetch++; } else { (void)VRB_Ignore(req);// XXX: handle err } wrk->stats->cache_hit++; req->is_hit = 1; if (lr == HSH_GRACE) wrk->stats->cache_hit_grace++; req->req_step = R_STP_DELIVER; return (REQ_FSM_MORE); case VCL_RET_RESTART: req->req_step = R_STP_RESTART; break; case VCL_RET_FAIL: req->req_step = R_STP_VCLFAIL; break; case VCL_RET_SYNTH: req->req_step = R_STP_SYNTH; break; case VCL_RET_PASS: wrk->stats->cache_hit++; req->is_hit = 1; req->req_step = R_STP_PASS; break; default: WRONG("Illegal return from vcl_hit{}"); } /* Drop our object, we won't need it */ (void)HSH_DerefObjCore(wrk, &req->objcore, HSH_RUSH_POLICY); if (busy != NULL) { (void)HSH_DerefObjCore(wrk, &busy, 0); VRY_Clear(req); } return (REQ_FSM_MORE); } /*-------------------------------------------------------------------- * Cache miss. 
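 * vcl_miss{} picks the next step: return(fetch) kicks off VBF_Fetch()
 * with the busy objcore and any stale object found during lookup (for
 * possible revalidation), while pass, restart, synth and fail divert to
 * their respective states.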
*/ static enum req_fsm_nxt v_matchproto_(req_state_f) cnt_miss(struct worker *wrk, struct req *req) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); AN(req->vcl); CHECK_OBJ_NOTNULL(req->objcore, OBJCORE_MAGIC); CHECK_OBJ_ORNULL(req->stale_oc, OBJCORE_MAGIC); VCL_miss_method(req->vcl, wrk, req, NULL, NULL); switch (wrk->vpi->handling) { case VCL_RET_FETCH: wrk->stats->cache_miss++; VBF_Fetch(wrk, req, req->objcore, req->stale_oc, VBF_NORMAL); if (req->stale_oc != NULL) (void)HSH_DerefObjCore(wrk, &req->stale_oc, 0); req->req_step = R_STP_FETCH; return (REQ_FSM_MORE); case VCL_RET_FAIL: req->req_step = R_STP_VCLFAIL; break; case VCL_RET_SYNTH: req->req_step = R_STP_SYNTH; break; case VCL_RET_RESTART: req->req_step = R_STP_RESTART; break; case VCL_RET_PASS: req->req_step = R_STP_PASS; break; default: WRONG("Illegal return from vcl_miss{}"); } VRY_Clear(req); if (req->stale_oc != NULL) (void)HSH_DerefObjCore(wrk, &req->stale_oc, 0); AZ(HSH_DerefObjCore(wrk, &req->objcore, 1)); return (REQ_FSM_MORE); } /*-------------------------------------------------------------------- * Pass processing */ static enum req_fsm_nxt v_matchproto_(req_state_f) cnt_pass(struct worker *wrk, struct req *req) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); AN(req->vcl); AZ(req->objcore); AZ(req->stale_oc); VCL_pass_method(req->vcl, wrk, req, NULL, NULL); switch (wrk->vpi->handling) { case VCL_RET_FAIL: req->req_step = R_STP_VCLFAIL; break; case VCL_RET_SYNTH: req->req_step = R_STP_SYNTH; break; case VCL_RET_RESTART: req->req_step = R_STP_RESTART; break; case VCL_RET_FETCH: wrk->stats->s_pass++; req->objcore = HSH_Private(wrk); CHECK_OBJ_NOTNULL(req->objcore, OBJCORE_MAGIC); VBF_Fetch(wrk, req, req->objcore, NULL, VBF_PASS); req->req_step = R_STP_FETCH; break; default: WRONG("Illegal return from cnt_pass{}"); } return (REQ_FSM_MORE); } /*-------------------------------------------------------------------- * Pipe mode */ static enum req_fsm_nxt v_matchproto_(req_state_f) cnt_pipe(struct worker *wrk, struct req *req) { struct busyobj *bo; enum req_fsm_nxt nxt; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); AZ(req->objcore); AZ(req->stale_oc); AN(req->vcl); wrk->stats->s_pipe++; bo = VBO_GetBusyObj(wrk, req); CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); VSLb(bo->vsl, SLT_Begin, "bereq %ju pipe", VXID(req->vsl->wid)); VSLb(req->vsl, SLT_Link, "bereq %ju pipe", VXID(bo->vsl->wid)); VSLb_ts_busyobj(bo, "Start", W_TIM_real(wrk)); THR_SetBusyobj(bo); bo->sp = req->sp; SES_Ref(bo->sp); HTTP_Setup(bo->bereq, req->ws, bo->vsl, SLT_BereqMethod); http_FilterReq(bo->bereq, req->http, 0); // XXX: 0 ? 
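	/* The pipe bereq is close to a verbatim copy of the client request:
	 * we add an X-Varnish header, force "Connection: close" since the
	 * backend connection is handed over and cannot be reused, and forward
	 * a pending Expect: 100-continue instead of answering it ourselves. */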
http_PrintfHeader(bo->bereq, "X-Varnish: %ju", VXID(req->vsl->wid)); http_ForceHeader(bo->bereq, H_Connection, "close"); if (req->want100cont) { http_SetHeader(bo->bereq, "Expect: 100-continue"); req->want100cont = 0; } bo->wrk = wrk; bo->task_deadline = NAN; /* XXX: copy req->task_deadline */ if (WS_Overflowed(req->ws)) wrk->vpi->handling = VCL_RET_FAIL; else VCL_pipe_method(req->vcl, wrk, req, bo, NULL); switch (wrk->vpi->handling) { case VCL_RET_SYNTH: req->req_step = R_STP_SYNTH; nxt = REQ_FSM_MORE; break; case VCL_RET_PIPE: VSLb_ts_req(req, "Process", W_TIM_real(wrk)); VSLb_ts_busyobj(bo, "Process", wrk->lastused); if (V1P_Enter() == 0) { AZ(bo->req); bo->req = req; bo->wrk = wrk; /* Unless cached, reqbody is not our job */ if (req->req_body_status != BS_CACHED) req->req_body_status = BS_NONE; SES_Close(req->sp, VDI_Http1Pipe(req, bo)); nxt = REQ_FSM_DONE; V1P_Leave(); break; } wrk->stats->pipe_limited++; /* fall through */ case VCL_RET_FAIL: req->req_step = R_STP_VCLFAIL; nxt = REQ_FSM_MORE; break; default: WRONG("Illegal return from vcl_pipe{}"); } http_Teardown(bo->bereq); SES_Rel(bo->sp); VBO_ReleaseBusyObj(wrk, &bo); THR_SetBusyobj(NULL); return (nxt); } /*-------------------------------------------------------------------- * Handle restart events */ static enum req_fsm_nxt v_matchproto_(req_state_f) cnt_restart(struct worker *wrk, struct req *req) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); AZ(req->objcore); AZ(req->stale_oc); if (++req->restarts > cache_param->max_restarts) { VSLb(req->vsl, SLT_VCL_Error, "Too many restarts"); req->err_code = 503; req->req_step = R_STP_SYNTH; } else { // XXX: ReqEnd + ReqAcct ? VSLb_ts_req(req, "Restart", W_TIM_real(wrk)); VSL_ChgId(req->vsl, "req", "restart", VXID_Get(wrk, VSL_CLIENTMARKER)); VSLb_ts_req(req, "Start", req->t_prev); req->err_code = 0; req->req_step = R_STP_RECV; } return (REQ_FSM_MORE); } /* * prepare the request for vcl_recv, either initially or after a reset * e.g. due to vcl switching * * TODO * - make restarts == 0 bit re-usable for rollback * - remove duplication with Req_Cleanup() */ static void v_matchproto_(req_state_f) cnt_recv_prep(struct req *req, const char *ci) { if (req->restarts == 0) { /* * This really should be done earlier, but we want to capture * it in the VSL log. */ http_AppendHeader(req->http, H_X_Forwarded_For, ci); http_AppendHeader(req->http, H_Via, http_ViaHeader()); http_CollectHdr(req->http, H_Cache_Control); /* By default we use the first backend */ VRT_Assign_Backend(&req->director_hint, VCL_DefaultDirector(req->vcl)); req->d_ttl = -1; req->d_grace = -1; req->disable_esi = 0; req->hash_always_miss = 0; req->hash_ignore_busy = 0; req->hash_ignore_vary = 0; req->client_identity = NULL; req->storage = NULL; req->trace = FEATURE(FEATURE_TRACE); } req->is_hit = 0; req->is_hitmiss = 0; req->is_hitpass = 0; req->err_code = 0; req->err_reason = NULL; } /*-------------------------------------------------------------------- * We have a complete request, set everything up and start it. * We can come here both with a request from the client and with * a interior request during ESI delivery. 
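 * (On the first pass vcl_recv{} may run a second time after a return(vcl)
 * switch; the Accept-Encoding header is then normalized for the benefit of
 * Vary handling and vcl_hash{} computes the lookup digest before we
 * dispatch to the next state.)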
*/ static enum req_fsm_nxt v_matchproto_(req_state_f) cnt_recv(struct worker *wrk, struct req *req) { unsigned recv_handling; struct VSHA256Context sha256ctx; const char *ci; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); AN(req->vcl); AZ(req->objcore); AZ(req->stale_oc); AZ(req->err_code); AZ(isnan(req->t_first)); AZ(isnan(req->t_prev)); AZ(isnan(req->t_req)); ci = Req_LogStart(wrk, req); http_VSL_log(req->http); if (http_CountHdr(req->http0, H_Host) > 1) { VSLb(req->vsl, SLT_BogoHeader, "Multiple Host: headers"); wrk->stats->client_req_400++; (void)req->transport->minimal_response(req, 400); return (REQ_FSM_DONE); } if (http_CountHdr(req->http0, H_Content_Length) > 1) { VSLb(req->vsl, SLT_BogoHeader, "Multiple Content-Length: headers"); wrk->stats->client_req_400++; (void)req->transport->minimal_response(req, 400); return (REQ_FSM_DONE); } cnt_recv_prep(req, ci); if (req->req_body_status == BS_ERROR) { req->doclose = SC_OVERLOAD; return (REQ_FSM_DONE); } VCL_recv_method(req->vcl, wrk, req, NULL, NULL); if (wrk->vpi->handling == VCL_RET_FAIL) { req->req_step = R_STP_VCLFAIL; return (REQ_FSM_MORE); } if (wrk->vpi->handling == VCL_RET_VCL && req->restarts == 0) { // Req_Rollback has happened in VPI_vcl_select assert(WS_Snapshot(req->ws) == req->ws_req); cnt_recv_prep(req, ci); VCL_recv_method(req->vcl, wrk, req, NULL, NULL); } if (req->want100cont && !req->late100cont) { req->want100cont = 0; if (req->transport->minimal_response(req, 100)) { req->doclose = SC_REM_CLOSE; return (REQ_FSM_DONE); } } /* Attempts to cache req.body may fail */ if (req->req_body_status == BS_ERROR) { req->doclose = SC_RX_BODY; return (REQ_FSM_DONE); } recv_handling = wrk->vpi->handling; /* We wash the A-E header here for the sake of VRY */ if (cache_param->http_gzip_support && (recv_handling != VCL_RET_PIPE) && (recv_handling != VCL_RET_PASS)) { if (RFC2616_Req_Gzip(req->http)) { http_ForceHeader(req->http, H_Accept_Encoding, "gzip"); } else { http_Unset(req->http, H_Accept_Encoding); } } VSHA256_Init(&sha256ctx); VCL_hash_method(req->vcl, wrk, req, NULL, &sha256ctx); if (wrk->vpi->handling == VCL_RET_FAIL) recv_handling = wrk->vpi->handling; else assert(wrk->vpi->handling == VCL_RET_LOOKUP); VSHA256_Final(req->digest, &sha256ctx); switch (recv_handling) { case VCL_RET_VCL: VSLb(req->vsl, SLT_VCL_Error, "Illegal return(vcl): %s", req->restarts ? "Not after restarts" : "Only from active VCL"); req->err_code = 503; req->req_step = R_STP_SYNTH; break; case VCL_RET_PURGE: req->req_step = R_STP_PURGE; break; case VCL_RET_HASH: req->req_step = R_STP_LOOKUP; break; case VCL_RET_PIPE: if (!IS_TOPREQ(req)) { VSLb(req->vsl, SLT_VCL_Error, "vcl_recv{} returns pipe for ESI included object." " Doing pass."); req->req_step = R_STP_PASS; } else if (req->http0->protover > 11) { VSLb(req->vsl, SLT_VCL_Error, "vcl_recv{} returns pipe for HTTP/2 request." " Doing pass."); req->req_step = R_STP_PASS; } else { req->req_step = R_STP_PIPE; } break; case VCL_RET_PASS: req->req_step = R_STP_PASS; break; case VCL_RET_SYNTH: req->req_step = R_STP_SYNTH; break; case VCL_RET_RESTART: req->req_step = R_STP_RESTART; break; case VCL_RET_FAIL: req->req_step = R_STP_VCLFAIL; break; default: WRONG("Illegal return from vcl_recv{}"); } return (REQ_FSM_MORE); } /*-------------------------------------------------------------------- * Find the objhead, purge it. 
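 * (HSH_Purge() is called with ttl, grace and keep all set to zero, which
 * effectively expires every object under the objhead at once.)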
* * In VCL, a restart is necessary to get a new object */ static enum req_fsm_nxt v_matchproto_(req_state_f) cnt_purge(struct worker *wrk, struct req *req) { struct objcore *oc, *boc; enum lookup_e lr; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); AZ(req->objcore); AZ(req->stale_oc); AN(req->vcl); VRY_Prep(req); AZ(req->objcore); req->hash_always_miss = 1; lr = HSH_Lookup(req, &oc, &boc); assert (lr == HSH_MISS); AZ(oc); CHECK_OBJ_NOTNULL(boc, OBJCORE_MAGIC); VRY_Finish(req, DISCARD); (void)HSH_Purge(wrk, boc->objhead, req->t_req, 0, 0, 0); AZ(HSH_DerefObjCore(wrk, &boc, 1)); VCL_purge_method(req->vcl, wrk, req, NULL, NULL); switch (wrk->vpi->handling) { case VCL_RET_RESTART: req->req_step = R_STP_RESTART; break; case VCL_RET_FAIL: req->req_step = R_STP_VCLFAIL; break; case VCL_RET_SYNTH: req->req_step = R_STP_SYNTH; break; default: WRONG("Illegal return from vcl_purge{}"); } return (REQ_FSM_MORE); } /*-------------------------------------------------------------------- * Central state engine dispatcher. * * Kick the session around until it has had enough. * */ static void v_matchproto_(req_state_f) cnt_diag(struct req *req, const char *state) { CHECK_OBJ_NOTNULL(req, REQ_MAGIC); VSLb(req->vsl, SLT_Debug, "vxid %ju STP_%s sp %p vcl %p", VXID(req->vsl->wid), state, req->sp, req->vcl); VSL_Flush(req->vsl, 0); } void CNT_Embark(struct worker *wrk, struct req *req) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); /* wrk can have changed for restarts */ req->vfc->wrk = req->wrk = wrk; wrk->vsl = req->vsl; if (req->req_step == R_STP_TRANSPORT && req->vcl == NULL) { VCL_Refresh(&wrk->wpriv->vcl); req->vcl = wrk->wpriv->vcl; wrk->wpriv->vcl = NULL; VSLbs(req->vsl, SLT_VCL_use, TOSTRAND(VCL_Name(req->vcl))); } AN(req->vcl); } enum req_fsm_nxt CNT_Request(struct req *req) { struct vrt_ctx ctx[1]; struct worker *wrk; enum req_fsm_nxt nxt; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); wrk = req->wrk; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req->transport, TRANSPORT_MAGIC); AN(req->transport->deliver); AN(req->transport->minimal_response); /* * Possible entrance states */ assert( req->req_step == R_STP_LOOKUP || req->req_step == R_STP_TRANSPORT); AN(VXID_TAG(req->vsl->wid) & VSL_CLIENTMARKER); AN(req->vcl); for (nxt = REQ_FSM_MORE; nxt == REQ_FSM_MORE; ) { /* * This is a good place to be paranoid about the various * pointers still pointing to the things we expect. 
*/ CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(wrk->wpriv, WORKER_PRIV_MAGIC); CHECK_OBJ_ORNULL(wrk->wpriv->nobjhead, OBJHEAD_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->doclose, STREAM_CLOSE_MAGIC); AN(req->req_step); AN(req->req_step->name); AN(req->req_step->func); if (DO_DEBUG(DBG_REQ_STATE)) cnt_diag(req, req->req_step->name); nxt = req->req_step->func(wrk, req); CHECK_OBJ_ORNULL(wrk->wpriv->nobjhead, OBJHEAD_MAGIC); } wrk->vsl = NULL; if (nxt == REQ_FSM_DONE) { INIT_OBJ(ctx, VRT_CTX_MAGIC); VCL_Req2Ctx(ctx, req); if (IS_TOPREQ(req)) { VCL_TaskLeave(ctx, req->top->privs); if (req->top->vcl0 != NULL) VCL_Recache(wrk, &req->top->vcl0); } VCL_TaskLeave(ctx, req->privs); assert(!IS_NO_VXID(req->vsl->wid)); VRB_Free(req); VRT_Assign_Backend(&req->director_hint, NULL); req->wrk = NULL; } assert(nxt == REQ_FSM_DISEMBARK || !WS_IsReserved(req->ws)); return (nxt); } varnish-7.5.0/bin/varnishd/cache/cache_rfc2616.c000066400000000000000000000246521457605730600212340ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include #include "cache_varnishd.h" #include "vtim.h" #include "vct.h" /*-------------------------------------------------------------------- * TTL and Age calculation in Varnish * * RFC2616 has a lot to say about how caches should calculate the TTL * and expiry times of objects, but it sort of misses the case that * applies to Varnish: the server-side cache. * * A normal cache, shared or single-client, has no symbiotic relationship * with the server, and therefore must take a very defensive attitude * if the Data/Expiry/Age/max-age data does not make sense. Overall * the policy described in section 13 of RFC 2616 results in no caching * happening on the first little sign of trouble. * * Varnish on the other hand tries to offload as many transactions from * the backend as possible, and therefore just passing through everything * if there is a clock-skew between backend and Varnish is not a workable * choice. * * Varnish implements a policy which is RFC2616 compliant when there * is no clockskew, and falls as gracefully as possible otherwise. 
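 * (In practice: relative information such as max-age, or Expires
 * interpreted against the backend's own Date header, is preferred over
 * comparing Expires directly against our clock whenever the two clocks
 * disagree by more than the clock_skew parameter.)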
* Our "clockless cache" model is synthesized from the bits of RFC2616 * that talks about how a cache should react to a clockless origin server, * and more or less uses the inverse logic for the opposite relationship. * */ static inline unsigned rfc2616_time(const char *p) { char *ep; unsigned long val; if (*p == '-') return (0); val = strtoul(p, &ep, 10); if (val > UINT_MAX) return (UINT_MAX); while (vct_issp(*ep)) ep++; /* We accept ',' as an end character because we may be parsing a * multi-element Cache-Control part. We accept '.' to be future * compatble with fractional seconds. */ if (*ep == '\0' || *ep == ',' || *ep == '.') return (val); return (0); } void RFC2616_Ttl(struct busyobj *bo, vtim_real now, vtim_real *t_origin, float *ttl, float *grace, float *keep) { unsigned max_age, age; vtim_real h_date, h_expires; const char *p; const struct http *hp; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); CHECK_OBJ_NOTNULL(bo->fetch_objcore, OBJCORE_MAGIC); assert(now != 0.0 && !isnan(now)); AN(t_origin); AN(ttl); AN(grace); AN(keep); *t_origin = now; *ttl = cache_param->default_ttl; *grace = cache_param->default_grace; *keep = cache_param->default_keep; hp = bo->beresp; max_age = age = 0; h_expires = 0; h_date = 0; /* * Initial cacheability determination per [RFC2616, 13.4] * We do not support ranges to the backend yet, so 206 is out. */ if (http_GetHdr(hp, H_Age, &p)) { age = rfc2616_time(p); *t_origin -= age; } if (bo->fetch_objcore->flags & OC_F_PRIVATE) { /* Pass object. Halt the processing here, keeping only the * parsed value of t_origin, as that will be needed to * synthesize a correct Age header in delivery. The * SLT_TTL log tag at the end of this function is * deliberetaly skipped to avoid confusion when reading * the log.*/ *ttl = -1; *grace = 0; *keep = 0; return; } if (http_GetHdr(hp, H_Expires, &p)) h_expires = VTIM_parse(p); if (http_GetHdr(hp, H_Date, &p)) h_date = VTIM_parse(p); switch (http_GetStatus(hp)) { case 302: /* Moved Temporarily */ case 307: /* Temporary Redirect */ /* * https://tools.ietf.org/html/rfc7231#section-6.1 * * Do not apply the default ttl, only set a ttl if Cache-Control * or Expires are present. Uncacheable otherwise. */ *ttl = -1.; /* FALL-THROUGH */ case 200: /* OK */ case 203: /* Non-Authoritative Information */ case 204: /* No Content */ case 300: /* Multiple Choices */ case 301: /* Moved Permanently */ case 304: /* Not Modified - handled like 200 */ case 404: /* Not Found */ case 410: /* Gone */ case 414: /* Request-URI Too Large */ /* * First find any relative specification from the backend * These take precedence according to RFC2616, 13.2.4 */ if ((http_GetHdrField(hp, H_Cache_Control, "s-maxage", &p) || http_GetHdrField(hp, H_Cache_Control, "max-age", &p)) && p != NULL) { max_age = rfc2616_time(p); *ttl = max_age; break; } /* No expire header, fall back to default */ if (h_expires == 0) break; /* If backend told us it is expired already, don't cache. */ if (h_expires < h_date) { *ttl = 0; break; } if (h_date == 0 || fabs(h_date - now) < cache_param->clock_skew) { /* * If we have no Date: header or if it is * sufficiently close to our clock we will * trust Expires: relative to our own clock. */ if (h_expires < now) *ttl = 0; else *ttl = h_expires - now; break; } else { /* * But even if the clocks are out of whack we can still * derive a relative time from the two headers. 
* (the negative ttl case is caught above) */ *ttl = (int)(h_expires - h_date); } break; default: *ttl = -1.; break; } /* * RFC5861 outlines a way to control the use of stale responses. * We use this to initialize the grace period. */ if (*ttl >= 0 && http_GetHdrField(hp, H_Cache_Control, "stale-while-revalidate", &p) && p != NULL) { *grace = rfc2616_time(p); } VSLb(bo->vsl, SLT_TTL, "RFC %.0f %.0f %.0f %.0f %.0f %.0f %.0f %u %s", *ttl, *grace, *keep, now, *t_origin, h_date, h_expires, max_age, bo->uncacheable ? "uncacheable" : "cacheable"); } /*-------------------------------------------------------------------- * Find out if the request can receive a gzip'ed response */ unsigned RFC2616_Req_Gzip(const struct http *hp) { /* * "x-gzip" is for http/1.0 backwards compat, final note in 14.3 * p104 says to not do q values for x-gzip, so we just test * for its existence. */ if (http_GetHdrToken(hp, H_Accept_Encoding, "x-gzip", NULL, NULL)) return (1); /* * "gzip" is the real thing, but the 'q' value must be nonzero. * We do not care a hoot if the client prefers some other * compression more than gzip: Varnish only does gzip. */ if (http_GetHdrQ(hp, H_Accept_Encoding, "gzip") > 0.) return (1); /* Bad client, no gzip. */ return (0); } /*--------------------------------------------------------------------*/ // rfc7232,l,547,548 static inline int rfc2616_strong_compare(const char *p, const char *e) { if ((p[0] == 'W' && p[1] == '/') || (e[0] == 'W' && e[1] == '/')) return (0); /* XXX: should we also have http_etag_cmp() ? */ return (strcmp(p, e) == 0); } // rfc7232,l,550,552 static inline int rfc2616_weak_compare(const char *p, const char *e) { if (p[0] == 'W' && p[1] == '/') p += 2; if (e[0] == 'W' && e[1] == '/') e += 2; /* XXX: should we also have http_etag_cmp() ? 
*/ return (strcmp(p, e) == 0); } int RFC2616_Do_Cond(const struct req *req) { const char *p, *e; vtim_real ims, lm; if (!http_IsStatus(req->resp, 200)) return (0); /* * rfc7232,l,861,866 * We MUST ignore If-Modified-Since if we have an If-None-Match header */ if (http_GetHdr(req->http, H_If_None_Match, &p)) { if (!http_GetHdr(req->resp, H_ETag, &e)) return (0); if (http_GetHdr(req->http, H_Range, NULL)) return (rfc2616_strong_compare(p, e)); else return (rfc2616_weak_compare(p, e)); } if (http_GetHdr(req->http, H_If_Modified_Since, &p)) { ims = VTIM_parse(p); if (!ims || ims > req->t_req) // rfc7232,l,868,869 return (0); if (http_GetHdr(req->resp, H_Last_Modified, &p)) { lm = VTIM_parse(p); if (!lm || lm > ims) return (0); return (1); } AZ(ObjGetDouble(req->wrk, req->objcore, OA_LASTMODIFIED, &lm)); if (lm > ims) return (0); return (1); } return (0); } /*--------------------------------------------------------------------*/ void RFC2616_Weaken_Etag(struct http *hp) { const char *p; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); if (!http_GetHdr(hp, H_ETag, &p)) return; AN(p); if (p[0] == 'W' && p[1] == '/') return; http_Unset(hp, H_ETag); http_PrintfHeader(hp, "ETag: W/%s", p); } /*--------------------------------------------------------------------*/ void RFC2616_Vary_AE(struct http *hp) { const char *vary; if (http_GetHdrToken(hp, H_Vary, "Accept-Encoding", NULL, NULL)) return; if (http_GetHdr(hp, H_Vary, &vary)) { http_Unset(hp, H_Vary); http_PrintfHeader(hp, "Vary: %s, Accept-Encoding", vary); } else { http_SetHeader(hp, "Vary: Accept-Encoding"); } } /*--------------------------------------------------------------------*/ const char * RFC2616_Strong_LM(const struct http *hp, struct worker *wrk, struct objcore *oc) { const char *p = NULL, *e = NULL; vtim_real lm, d; CHECK_OBJ_ORNULL(wrk, WORKER_MAGIC); CHECK_OBJ_ORNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_ORNULL(hp, HTTP_MAGIC); if (hp != NULL) { (void)http_GetHdr(hp, H_Last_Modified, &p); (void)http_GetHdr(hp, H_Date, &e); } else if (wrk != NULL && oc != NULL) { p = HTTP_GetHdrPack(wrk, oc, H_Last_Modified); e = HTTP_GetHdrPack(wrk, oc, H_Date); } if (p == NULL || e == NULL) return (NULL); lm = VTIM_parse(p); d = VTIM_parse(e); /* The cache entry includes a Date value which is at least one second * after the Last-Modified value. * [RFC9110 8.8.2.2-6.2] */ return ((lm && d && lm + 1 <= d) ? p : NULL); } varnish-7.5.0/bin/varnishd/cache/cache_session.c000066400000000000000000000404251457605730600216220ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Session management * * The overall goal here is to hold as little state as possible for an * idle session. This leads to various nasty-ish overloads of struct * sess fields, for instance ->fd being negative ->reason. * */ //lint --e{766} #include "config.h" #include "cache_varnishd.h" #include #include #include "cache_pool.h" #include "cache_transport.h" #include "vsa.h" #include "vtcp.h" #include "vtim.h" #include "waiter/waiter.h" static const struct { const char *type; } sess_attr[SA_LAST] = { #define SESS_ATTR(UC, lc, typ, len) [SA_##UC] = { #typ }, #include "tbl/sess_attr.h" }; enum sess_close { SCE_NULL = 0, #define SESS_CLOSE(nm, stat, err, desc) SCE_##nm, #include "tbl/sess_close.h" SCE_MAX, }; const struct stream_close SC_NULL[1] = {{ .magic = STREAM_CLOSE_MAGIC, .idx = SCE_NULL, .is_err = 0, .name = "null", .desc = "Not Closing", }}; #define SESS_CLOSE(nm, stat, err, text) \ const struct stream_close SC_##nm[1] = {{ \ .magic = STREAM_CLOSE_MAGIC, \ .idx = SCE_##nm, \ .is_err = err, \ .name = #nm, \ .desc = text, \ }}; #include "tbl/sess_close.h" static const stream_close_t sc_lookup[SCE_MAX] = { [SCE_NULL] = SC_NULL, #define SESS_CLOSE(nm, stat, err, desc) \ [SCE_##nm] = SC_##nm, #include "tbl/sess_close.h" }; /*--------------------------------------------------------------------*/ void SES_SetTransport(struct worker *wrk, struct sess *sp, struct req *req, const struct transport *xp) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(xp, TRANSPORT_MAGIC); assert(xp->number > 0); sp->sattr[SA_TRANSPORT] = xp->number; req->transport = xp; wrk->task->func = xp->new_session; wrk->task->priv = req; } /*--------------------------------------------------------------------*/ #define SES_NOATTR_OFFSET 0xffff static int ses_get_attr(const struct sess *sp, enum sess_attr a, void **dst) { CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); assert(a < SA_LAST); AN(dst); if (sp->sattr[a] == SES_NOATTR_OFFSET) { *dst = NULL; return (-1); } *dst = WS_AtOffset(sp->ws, sp->sattr[a], 0); return (0); } static int ses_set_attr(const struct sess *sp, enum sess_attr a, const void *src, int sz) { void *dst; CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); assert(a < SA_LAST); AN(src); assert(sz > 0); if (sp->sattr[a] == SES_NOATTR_OFFSET) return (-1); dst = WS_AtOffset(sp->ws, sp->sattr[a], sz); AN(dst); memcpy(dst, src, sz); return (0); } static int ses_res_attr(struct sess *sp, enum sess_attr a, void **dst, ssize_t *szp) { unsigned o; ssize_t sz; CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); assert(a < SA_LAST); AN(dst); sz = *szp; *szp = 0; assert(sz >= 0); if (WS_ReserveSize(sp->ws, sz) == 0) return (0); o = WS_ReservationOffset(sp->ws); if (o >= SES_NOATTR_OFFSET) { WS_Release(sp->ws, 0); return (0); } *dst = WS_Reservation(sp->ws); *szp = sz; sp->sattr[a] = (uint16_t)o; WS_Release(sp->ws, sz); return (1); } #define SESS_ATTR(UP, low, typ, len) \ int \ SES_Set_##low(const struct sess *sp, const typ *src) \ { \ assert(len > 0); \ return (ses_set_attr(sp, 
SA_##UP, src, len)); \ } \ \ int \ SES_Get_##low(const struct sess *sp, typ **dst) \ { \ assert(len > 0); \ return (ses_get_attr(sp, SA_##UP, (void**)dst)); \ } \ \ int \ SES_Reserve_##low(struct sess *sp, typ **dst, ssize_t *sz) \ { \ assert(len > 0); \ AN(sz); \ *sz = len; \ return (ses_res_attr(sp, SA_##UP, (void**)dst, sz)); \ } #include "tbl/sess_attr.h" int SES_Set_String_Attr(struct sess *sp, enum sess_attr a, const char *src) { void *q; ssize_t l, sz; CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); AN(src); assert(a < SA_LAST); if (strcmp(sess_attr[a].type, "char")) WRONG("wrong sess_attr: not char"); l = sz = strlen(src) + 1; if (! ses_res_attr(sp, a, &q, &sz)) return (0); assert(l == sz); strcpy(q, src); return (1); } const char * SES_Get_String_Attr(const struct sess *sp, enum sess_attr a) { void *q; CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); assert(a < SA_LAST); if (strcmp(sess_attr[a].type, "char")) WRONG("wrong sess_attr: not char"); if (ses_get_attr(sp, a, &q) < 0) return (NULL); return (q); } /*--------------------------------------------------------------------*/ void HTC_Status(enum htc_status_e e, const char **name, const char **desc) { switch (e) { #define HTC_STATUS(e, n, s, l) \ case HTC_S_ ## e: \ *name = s; \ *desc = l; \ return; #include "tbl/htc.h" default: WRONG("HTC_Status"); } } /*--------------------------------------------------------------------*/ void HTC_RxInit(struct http_conn *htc, struct ws *ws) { unsigned l; CHECK_OBJ_NOTNULL(htc, HTTP_CONN_MAGIC); htc->ws = ws; l = WS_ReqPipeline(htc->ws, htc->pipeline_b, htc->pipeline_e); htc->rxbuf_b = WS_Reservation(ws); htc->rxbuf_e = htc->rxbuf_b + l; htc->pipeline_b = NULL; htc->pipeline_e = NULL; } void HTC_RxPipeline(struct http_conn *htc, char *p) { CHECK_OBJ_NOTNULL(htc, HTTP_CONN_MAGIC); assert(p >= htc->rxbuf_b); assert(p <= htc->rxbuf_e); if (p == htc->rxbuf_e) { htc->pipeline_b = NULL; htc->pipeline_e = NULL; } else { htc->pipeline_b = p; htc->pipeline_e = htc->rxbuf_e; } } /*---------------------------------------------------------------------- * Receive a request/packet/whatever, with timeouts * * maxbytes is the maximum number of bytes the caller expects to need to * reach a complete work unit. Note that due to pipelining the actual * number of bytes passed to func in htc->rxbuf_b through htc->rxbuf_e may * be larger. 
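 * (For instance, an HTTP/1 client may pipeline a second request into the
 * same read.)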
* * t0 is when we start * *t1 becomes time of first non-idle rx * *t2 becomes time of complete rx * ti is when we return IDLE if nothing has arrived * tn is when we timeout on non-complete (total timeout) * td is max timeout between reads */ enum htc_status_e HTC_RxStuff(struct http_conn *htc, htc_complete_f *func, vtim_real *t1, vtim_real *t2, vtim_real ti, vtim_real tn, vtim_dur td, int maxbytes) { vtim_dur tmo; vtim_real now; enum htc_status_e hs; unsigned l, r; ssize_t z; CHECK_OBJ_NOTNULL(htc, HTTP_CONN_MAGIC); AN(htc->rfd); assert(*htc->rfd > 0); AN(htc->rxbuf_b); AN(WS_Reservation(htc->ws)); l = pdiff(htc->rxbuf_b, htc->rxbuf_e); r = WS_ReservationSize(htc->ws); assert(l <= r); AZ(isnan(tn) && isnan(td)); if (t1 != NULL) assert(isnan(*t1)); if (l == r) { /* Can't work with a zero size buffer */ WS_ReleaseP(htc->ws, htc->rxbuf_b); return (HTC_S_OVERFLOW); } z = r; if (z < maxbytes) maxbytes = z; /* Cap maxbytes at available WS */ while (1) { now = VTIM_real(); AZ(htc->pipeline_b); AZ(htc->pipeline_e); l = pdiff(htc->rxbuf_b, htc->rxbuf_e); assert(l <= r); hs = func(htc); if (hs == HTC_S_OVERFLOW || hs == HTC_S_JUNK) { WS_ReleaseP(htc->ws, htc->rxbuf_b); return (hs); } if (hs == HTC_S_COMPLETE) { WS_ReleaseP(htc->ws, htc->rxbuf_e); /* Got it, run with it */ if (t1 != NULL && isnan(*t1)) *t1 = now; if (t2 != NULL) *t2 = now; return (HTC_S_COMPLETE); } if (hs == HTC_S_MORE) { /* Working on it */ if (t1 != NULL && isnan(*t1)) *t1 = now; } else if (hs == HTC_S_EMPTY) htc->rxbuf_e = htc->rxbuf_b; else WRONG("htc_status_e"); if (hs == HTC_S_EMPTY && !isnan(ti) && (isnan(tn) || ti < tn)) tmo = ti - now; else if (isnan(tn)) tmo = td; else if (isnan(td)) tmo = tn - now; else if (td < tn - now) tmo = td; else tmo = tn - now; AZ(isnan(tmo)); z = maxbytes - (htc->rxbuf_e - htc->rxbuf_b); if (z <= 0) { /* maxbytes reached but not HTC_S_COMPLETE. Return * overflow. 
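			 * The same status is used above when the rx
			 * buffer itself has no room left for the work
			 * unit.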
*/ WS_ReleaseP(htc->ws, htc->rxbuf_b); return (HTC_S_OVERFLOW); } if (tmo <= 0.0) tmo = 1e-3; z = VTCP_read(*htc->rfd, htc->rxbuf_e, z, tmo); if (z == 0 || z == -1) { WS_ReleaseP(htc->ws, htc->rxbuf_b); return (HTC_S_EOF); } else if (z > 0) htc->rxbuf_e += z; else if (z == -2) { WS_ReleaseP(htc->ws, htc->rxbuf_b); if (hs == HTC_S_EMPTY) return (HTC_S_IDLE); else return (HTC_S_TIMEOUT); } } } /*-------------------------------------------------------------------- * Get a new session, preferably by recycling an already ready one * * Layout is: * struct sess * workspace */ struct sess * SES_New(struct pool *pp) { struct sess *sp; unsigned sz; char *p, *e; CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); sp = MPL_Get(pp->mpl_sess, &sz); AN(sp); INIT_OBJ(sp, SESS_MAGIC); sp->pool = pp; sp->refcnt = 1; memset(sp->sattr, 0xff, sizeof sp->sattr); e = (char*)sp + sz; p = (char*)(sp + 1); p = (void*)PRNDUP(p); assert(p < e); WS_Init(sp->ws, "ses", p, e - p); sp->t_open = NAN; sp->t_idle = NAN; sp->timeout_idle = NAN; sp->timeout_linger = NAN; sp->send_timeout = NAN; sp->idle_send_timeout = NAN; Lck_New(&sp->mtx, lck_sess); CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); return (sp); } /*-------------------------------------------------------------------- * Handle a session (from waiter) */ static void v_matchproto_(waiter_handle_f) ses_handle(struct waited *wp, enum wait_event ev, vtim_real now) { struct sess *sp; struct pool *pp; struct pool_task *tp; const struct transport *xp; CHECK_OBJ_NOTNULL(wp, WAITED_MAGIC); CAST_OBJ_NOTNULL(sp, wp->priv1, SESS_MAGIC); CAST_OBJ_NOTNULL(xp, wp->priv2, TRANSPORT_MAGIC); assert(WS_Reservation(sp->ws) == wp); FINI_OBJ(wp); /* The WS was reserved in SES_Wait() */ WS_Release(sp->ws, 0); switch (ev) { case WAITER_TIMEOUT: SES_Delete(sp, SC_RX_CLOSE_IDLE, now); break; case WAITER_REMCLOSE: SES_Delete(sp, SC_REM_CLOSE, now); break; case WAITER_ACTION: pp = sp->pool; CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); /* SES_Wait() guarantees the next will not assert. */ assert(sizeof *tp <= WS_ReserveSize(sp->ws, sizeof *tp)); tp = WS_Reservation(sp->ws); tp->func = xp->unwait; tp->priv = sp; if (Pool_Task(pp, tp, TASK_QUEUE_REQ)) SES_Delete(sp, SC_OVERLOAD, now); break; case WAITER_CLOSE: WRONG("Should not see WAITER_CLOSE on client side"); break; default: WRONG("Wrong event in ses_handle"); } } /*-------------------------------------------------------------------- */ void SES_Wait(struct sess *sp, const struct transport *xp) { struct pool *pp; struct waited *wp; unsigned u; CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); CHECK_OBJ_NOTNULL(xp, TRANSPORT_MAGIC); pp = sp->pool; CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); assert(sp->fd > 0); /* * XXX: waiter_epoll prevents us from zeroing the struct because * XXX: it keeps state across calls. */ VTCP_nonblocking(sp->fd); /* * Put struct waited on the workspace. Make sure that the * workspace can hold enough space for both struct waited * and pool_task, as pool_task will be needed when coming * off the waiter again. 
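	 * The two uses never overlap: ses_handle() finishes the waited
	 * object and releases this reservation before it reserves space
	 * for the pool_task.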
*/ u = WS_ReserveAll(sp->ws); if (u < sizeof (struct waited) || u < sizeof(struct pool_task)) { WS_MarkOverflow(sp->ws); SES_Delete(sp, SC_OVERLOAD, NAN); return; } wp = WS_Reservation(sp->ws); INIT_OBJ(wp, WAITED_MAGIC); wp->fd = sp->fd; wp->priv1 = sp; wp->priv2 = xp; wp->idle = sp->t_idle; wp->func = ses_handle; wp->tmo = SESS_TMO(sp, timeout_idle); if (Wait_Enter(pp->waiter, wp)) SES_Delete(sp, SC_PIPE_OVERFLOW, NAN); } /*-------------------------------------------------------------------- * Update sc_ counters by reason * * assuming that the approximation of non-atomic global counters is sufficient. * if not: update to per-wrk */ static void ses_close_acct(stream_close_t reason) { CHECK_OBJ_NOTNULL(reason, STREAM_CLOSE_MAGIC); switch (reason->idx) { #define SESS_CLOSE(reason, stat, err, desc) \ case SCE_ ## reason: \ VSC_C_main->sc_ ## stat++; \ break; #include "tbl/sess_close.h" default: WRONG("Wrong event in ses_close_acct"); } if (reason->is_err) VSC_C_main->sess_closed_err++; } /*-------------------------------------------------------------------- * Close a session's connection. * XXX: Technically speaking we should catch a t_end timestamp here * XXX: for SES_Delete() to use. */ void SES_Close(struct sess *sp, stream_close_t reason) { int i; CHECK_OBJ_NOTNULL(reason, STREAM_CLOSE_MAGIC); assert(reason->idx > 0); assert(sp->fd > 0); i = close(sp->fd); assert(i == 0 || errno != EBADF); /* XXX EINVAL seen */ sp->fd = -reason->idx; ses_close_acct(reason); } /*-------------------------------------------------------------------- * Report and dismantle a session. */ void SES_Delete(struct sess *sp, stream_close_t reason, vtim_real now) { CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); CHECK_OBJ_NOTNULL(reason, STREAM_CLOSE_MAGIC); if (reason != SC_NULL) SES_Close(sp, reason); assert(sp->fd < 0); if (isnan(now)) now = VTIM_real(); AZ(isnan(sp->t_open)); if (now < sp->t_open) { VSL(SLT_Debug, sp->vxid, "Clock step (now=%f < t_open=%f)", now, sp->t_open); if (now + cache_param->clock_step < sp->t_open) WRONG("Clock step detected"); now = sp->t_open; /* Do not log negatives */ } if (reason == SC_NULL) { assert(sp->fd < 0 && -sp->fd < SCE_MAX); reason = sc_lookup[-sp->fd]; } CHECK_OBJ_NOTNULL(reason, STREAM_CLOSE_MAGIC); VSL(SLT_SessClose, sp->vxid, "%s %.3f", reason->name, now - sp->t_open); VSL(SLT_End, sp->vxid, "%s", ""); if (WS_Overflowed(sp->ws)) VSC_C_main->ws_session_overflow++; SES_Rel(sp); } void SES_DeleteHS(struct sess *sp, enum htc_status_e hs, vtim_real now) { stream_close_t reason; switch (hs) { case HTC_S_JUNK: reason = SC_RX_JUNK; break; case HTC_S_CLOSE: reason = SC_REM_CLOSE; break; case HTC_S_TIMEOUT: reason = SC_RX_TIMEOUT; break; case HTC_S_OVERFLOW: reason = SC_RX_OVERFLOW; break; case HTC_S_EOF: reason = SC_REM_CLOSE; break; default: WRONG("htc_status (bad)"); } SES_Delete(sp, reason, now); } /*-------------------------------------------------------------------- */ void SES_Ref(struct sess *sp) { CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); Lck_Lock(&sp->mtx); assert(sp->refcnt > 0); sp->refcnt++; Lck_Unlock(&sp->mtx); } void SES_Rel(struct sess *sp) { int i; struct pool *pp; CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); pp = sp->pool; CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); Lck_Lock(&sp->mtx); assert(sp->refcnt > 0); i = --sp->refcnt; Lck_Unlock(&sp->mtx); if (i) return; Lck_Delete(&sp->mtx); #ifdef ENABLE_WORKSPACE_EMULATOR WS_Rollback(sp->ws, 0); #endif MPL_Free(sp->pool->mpl_sess, sp); } /*-------------------------------------------------------------------- * Create and delete pools */ void 
SES_NewPool(struct pool *pp, unsigned pool_no) { char nb[8]; CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); bprintf(nb, "req%u", pool_no); pp->mpl_req = MPL_New(nb, &cache_param->pool_req, &cache_param->workspace_client); bprintf(nb, "sess%u", pool_no); pp->mpl_sess = MPL_New(nb, &cache_param->pool_sess, &cache_param->workspace_session); pp->waiter = Waiter_New(); } void SES_DestroyPool(struct pool *pp) { MPL_Destroy(&pp->mpl_req); MPL_Destroy(&pp->mpl_sess); Waiter_Destroy(&pp->waiter); } varnish-7.5.0/bin/varnishd/cache/cache_shmlog.c000066400000000000000000000361541457605730600214340ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include "cache_varnishd.h" #include #include #include "vgz.h" #include "vsl_priv.h" #include "vmb.h" #include "common/heritage.h" #include "common/vsmw.h" /* ------------------------------------------------------------ * strands helpers - move elsewhere? 
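 * strands_len() sums the length of all non-empty members; strands_cat()
 * concatenates them into a fixed-size buffer, truncating rather than
 * failing when the buffer is too small.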
*/ static unsigned strands_len(const struct strands *s) { unsigned r = 0; int i; for (i = 0; i < s->n; i++) { if (s->p[i] == NULL || *s->p[i] == '\0') continue; r += strlen(s->p[i]); } return (r); } /* * like VRT_Strands(), but truncating instead of failing for end of buffer * * returns number of bytes including NUL */ static unsigned strands_cat(char *buf, unsigned bufl, const struct strands *s) { unsigned l = 0, ll; int i; /* NUL-terminated */ assert(bufl > 0); bufl--; for (i = 0; i < s->n && bufl > 0; i++) { if (s->p[i] == NULL || *s->p[i] == '\0') continue; ll = vmin_t(unsigned, strlen(s->p[i]), bufl); memcpy(buf, s->p[i], ll); l += ll; buf += ll; bufl -= ll; } *buf = '\0'; /* NUL-terminated */ return (l + 1); } /* These cannot be struct lock, which depends on vsm/vsl working */ static pthread_mutex_t vsl_mtx; static pthread_mutex_t vsc_mtx; static pthread_mutex_t vsm_mtx; static struct VSL_head *vsl_head; static const uint32_t *vsl_end; static uint32_t *vsl_ptr; static unsigned vsl_segment_n; static ssize_t vsl_segsize; struct VSC_main *VSC_C_main; static void vsl_sanity(const struct vsl_log *vsl) { AN(vsl); AN(vsl->wlp); AN(vsl->wlb); AN(vsl->wle); assert(vsl->wlb <= vsl->wlp); assert(vsl->wlp <= vsl->wle); } /*-------------------------------------------------------------------- * Check if the VSL_tag is masked by parameter bitmap */ static inline int vsl_tag_is_masked(enum VSL_tag_e tag) { volatile uint8_t *bm = &cache_param->vsl_mask[0]; uint8_t b; assert(tag > SLT__Bogus); assert(tag < SLT__Reserved); bm += ((unsigned)tag >> 3); b = (0x80 >> ((unsigned)tag & 7)); return (*bm & b); } int VSL_tag_is_masked(enum VSL_tag_e tag) { return (vsl_tag_is_masked(tag)); } /*-------------------------------------------------------------------- * Lay down a header fields, and return pointer to the next record */ static inline uint32_t * vsl_hdr(enum VSL_tag_e tag, uint32_t *p, unsigned len, vxid_t vxid) { AZ((uintptr_t)p & 0x3); assert(tag > SLT__Bogus); assert(tag < SLT__Reserved); AZ(len & ~VSL_LENMASK); p[2] = vxid.vxid >> 32; p[1] = vxid.vxid; p[0] = (((unsigned)tag & VSL_IDMASK) << VSL_IDSHIFT) | (VSL_VERSION_3 << VSL_VERSHIFT) | len; return (VSL_END(p, len)); } /*-------------------------------------------------------------------- * Space available in a VSL buffer when accounting for overhead */ static unsigned vsl_space(const struct vsl_log *vsl) { ptrdiff_t mlen; mlen = vsl->wle - vsl->wlp; assert(mlen >= 0); if (mlen < VSL_OVERHEAD + 1) return (0); mlen -= VSL_OVERHEAD; mlen *= sizeof *vsl->wlp; if (mlen > cache_param->vsl_reclen) mlen = cache_param->vsl_reclen; return(mlen); } /*-------------------------------------------------------------------- * Wrap the VSL buffer */ static void vsl_wrap(void) { assert(vsl_ptr >= vsl_head->log); assert(vsl_ptr < vsl_end); vsl_segment_n += VSL_SEGMENTS - (vsl_segment_n % VSL_SEGMENTS); assert(vsl_segment_n % VSL_SEGMENTS == 0); vsl_head->offset[0] = 0; vsl_head->log[0] = VSL_ENDMARKER; VWMB(); if (vsl_ptr != vsl_head->log) { *vsl_ptr = VSL_WRAPMARKER; vsl_ptr = vsl_head->log; } vsl_head->segment_n = vsl_segment_n; VSC_C_main->shm_cycles++; } /*-------------------------------------------------------------------- * Reserve bytes for a record, wrap if necessary */ static uint32_t * vsl_get(unsigned len, unsigned records, unsigned flushes) { uint32_t *p; int err; err = pthread_mutex_trylock(&vsl_mtx); if (err == EBUSY) { PTOK(pthread_mutex_lock(&vsl_mtx)); VSC_C_main->shm_cont++; } else { AZ(err); } assert(vsl_ptr < vsl_end); AZ((uintptr_t)vsl_ptr & 
0x3); VSC_C_main->shm_writes++; VSC_C_main->shm_flushes += flushes; VSC_C_main->shm_records += records; VSC_C_main->shm_bytes += VSL_BYTES(VSL_OVERHEAD + VSL_WORDS((uint64_t)len)); /* Wrap if necessary */ if (VSL_END(vsl_ptr, len) >= vsl_end) vsl_wrap(); p = vsl_ptr; vsl_ptr = VSL_END(vsl_ptr, len); assert(vsl_ptr < vsl_end); AZ((uintptr_t)vsl_ptr & 0x3); *vsl_ptr = VSL_ENDMARKER; while ((vsl_ptr - vsl_head->log) / vsl_segsize > vsl_segment_n % VSL_SEGMENTS) { vsl_segment_n++; vsl_head->offset[vsl_segment_n % VSL_SEGMENTS] = vsl_ptr - vsl_head->log; } PTOK(pthread_mutex_unlock(&vsl_mtx)); /* Implicit VWMB() in mutex op ensures ENDMARKER and new table values are seen before new segment number */ vsl_head->segment_n = vsl_segment_n; return (p); } /*-------------------------------------------------------------------- * Stick a finished record into VSL. */ static void vslr(enum VSL_tag_e tag, vxid_t vxid, const char *b, unsigned len) { uint32_t *p; unsigned mlen; mlen = cache_param->vsl_reclen; /* Truncate */ if (len > mlen) len = mlen; p = vsl_get(len, 1, 0); memcpy(p + VSL_OVERHEAD, b, len); /* * the vxid needs to be written before the barrier to * ensure it is valid when vsl_hdr() marks the record * ready by writing p[0] */ p[2] = vxid.vxid >> 32; p[1] = vxid.vxid; VWMB(); (void)vsl_hdr(tag, p, len, vxid); } /*-------------------------------------------------------------------- * Add a unbuffered record to VSL * * NB: This variant should be used sparingly and only for low volume * NB: since it significantly adds to the mutex load on the VSL. */ void VSLv(enum VSL_tag_e tag, vxid_t vxid, const char *fmt, va_list ap) { unsigned n, mlen = cache_param->vsl_reclen; char buf[mlen]; AN(fmt); if (vsl_tag_is_masked(tag)) return; if (strchr(fmt, '%') == NULL) { vslr(tag, vxid, fmt, strlen(fmt) + 1); } else { n = vsnprintf(buf, mlen, fmt, ap); n = vmin(n, mlen - 1); buf[n++] = '\0'; /* NUL-terminated */ vslr(tag, vxid, buf, n); } } void VSLs(enum VSL_tag_e tag, vxid_t vxid, const struct strands *s) { unsigned n, mlen = cache_param->vsl_reclen; char buf[mlen]; if (vsl_tag_is_masked(tag)) return; n = strands_cat(buf, mlen, s); vslr(tag, vxid, buf, n); } void VSL(enum VSL_tag_e tag, vxid_t vxid, const char *fmt, ...) 
{ va_list ap; va_start(ap, fmt); VSLv(tag, vxid, fmt, ap); va_end(ap); } /*--------------------------------------------------------------------*/ void VSL_Flush(struct vsl_log *vsl, int overflow) { uint32_t *p; unsigned l; vsl_sanity(vsl); l = pdiff(vsl->wlb, vsl->wlp); if (l == 0) return; assert(l >= 8); p = vsl_get(l, vsl->wlr, overflow); memcpy(p + VSL_OVERHEAD, vsl->wlb, l); p[1] = l; VWMB(); p[0] = ((((unsigned)SLT__Batch & 0xff) << VSL_IDSHIFT)); vsl->wlp = vsl->wlb; vsl->wlr = 0; } /*-------------------------------------------------------------------- * Buffered VSLs */ static char * vslb_get(struct vsl_log *vsl, enum VSL_tag_e tag, unsigned *length) { unsigned mlen = cache_param->vsl_reclen; char *retval; vsl_sanity(vsl); if (*length < mlen) mlen = *length; if (VSL_END(vsl->wlp, mlen) > vsl->wle) VSL_Flush(vsl, 1); retval = VSL_DATA(vsl->wlp); /* If it still doesn't fit, truncate */ if (VSL_END(vsl->wlp, mlen) > vsl->wle) mlen = vsl_space(vsl); vsl->wlp = vsl_hdr(tag, vsl->wlp, mlen, vsl->wid); vsl->wlr++; *length = mlen; return (retval); } static void vslb_simple(struct vsl_log *vsl, enum VSL_tag_e tag, unsigned length, const char *str) { char *p; if (length == 0) length = strlen(str); length += 1; // NUL p = vslb_get(vsl, tag, &length); memcpy(p, str, length - 1); p[length - 1] = '\0'; if (DO_DEBUG(DBG_SYNCVSL)) VSL_Flush(vsl, 0); } /*-------------------------------------------------------------------- * VSL-buffered-txt */ void VSLbt(struct vsl_log *vsl, enum VSL_tag_e tag, txt t) { Tcheck(t); if (vsl_tag_is_masked(tag)) return; vslb_simple(vsl, tag, Tlen(t), t.b); } /*-------------------------------------------------------------------- * VSL-buffered-strands */ void VSLbs(struct vsl_log *vsl, enum VSL_tag_e tag, const struct strands *s) { unsigned l; char *p; if (vsl_tag_is_masked(tag)) return; l = strands_len(s) + 1; p = vslb_get(vsl, tag, &l); (void)strands_cat(p, l, s); if (DO_DEBUG(DBG_SYNCVSL)) VSL_Flush(vsl, 0); } /*-------------------------------------------------------------------- * VSL-buffered */ void VSLbv(struct vsl_log *vsl, enum VSL_tag_e tag, const char *fmt, va_list ap) { char *p, *p1; unsigned n = 0, mlen; va_list ap2; AN(fmt); if (vsl_tag_is_masked(tag)) return; /* * If there are no printf-expansions, don't waste time expanding them */ if (strchr(fmt, '%') == NULL) { vslb_simple(vsl, tag, 0, fmt); return; } /* * If the format is trivial, deal with it directly */ if (!strcmp(fmt, "%s")) { p1 = va_arg(ap, char *); vslb_simple(vsl, tag, 0, p1); return; } vsl_sanity(vsl); mlen = vsl_space(vsl); // First attempt, only if any space at all if (mlen > 0) { p = VSL_DATA(vsl->wlp); va_copy(ap2, ap); n = vsnprintf(p, mlen, fmt, ap2); va_end(ap2); } // Second attempt, if a flush might help if (mlen == 0 || (n + 1 > mlen && n + 1 <= cache_param->vsl_reclen)) { VSL_Flush(vsl, 1); mlen = vsl_space(vsl); p = VSL_DATA(vsl->wlp); n = vsnprintf(p, mlen, fmt, ap); } if (n + 1 < mlen) mlen = n + 1; (void)vslb_get(vsl, tag, &mlen); if (DO_DEBUG(DBG_SYNCVSL)) VSL_Flush(vsl, 0); } void VSLb(struct vsl_log *vsl, enum VSL_tag_e tag, const char *fmt, ...) { va_list ap; vsl_sanity(vsl); va_start(ap, fmt); VSLbv(vsl, tag, fmt, ap); va_end(ap); } #define Tf6 "%ju.%06ju" #define Ta6(t) (uintmax_t)floor((t)), (uintmax_t)floor((t) * 1e6) % 1000000U void VSLb_ts(struct vsl_log *vsl, const char *event, vtim_real first, vtim_real *pprev, vtim_real now) { /* * XXX: Make an option to turn off some unnecessary timestamp * logging. This must be done carefully because some functions * (e.g. 
V1L_Open) takes the last timestamp as its initial * value for timeout calculation. */ vsl_sanity(vsl); AN(event); AN(pprev); assert(!isnan(now) && now != 0.); VSLb(vsl, SLT_Timestamp, "%s: " Tf6 " " Tf6 " " Tf6, event, Ta6(now), Ta6(now - first), Ta6(now - *pprev)); *pprev = now; } void VSLb_bin(struct vsl_log *vsl, enum VSL_tag_e tag, ssize_t len, const void *ptr) { unsigned mlen; char *p; vsl_sanity(vsl); AN(ptr); if (vsl_tag_is_masked(tag)) return; mlen = cache_param->vsl_reclen; /* Truncate */ len = vmin_t(ssize_t, len, mlen); assert(vsl->wlp <= vsl->wle); /* Flush if necessary */ if (VSL_END(vsl->wlp, len) > vsl->wle) VSL_Flush(vsl, 1); assert(VSL_END(vsl->wlp, len) <= vsl->wle); p = VSL_DATA(vsl->wlp); memcpy(p, ptr, len); vsl->wlp = vsl_hdr(tag, vsl->wlp, len, vsl->wid); assert(vsl->wlp <= vsl->wle); vsl->wlr++; if (DO_DEBUG(DBG_SYNCVSL)) VSL_Flush(vsl, 0); } /*-------------------------------------------------------------------- * Setup a VSL buffer, allocate space if none provided. */ void VSL_Setup(struct vsl_log *vsl, void *ptr, size_t len) { if (ptr == NULL) { len = cache_param->vsl_buffer; ptr = malloc(len); AN(ptr); } vsl->wlp = ptr; vsl->wlb = ptr; vsl->wle = ptr; vsl->wle += len / sizeof(*vsl->wle); vsl->wlr = 0; vsl->wid = NO_VXID; vsl_sanity(vsl); } /*--------------------------------------------------------------------*/ void VSL_ChgId(struct vsl_log *vsl, const char *typ, const char *why, vxid_t vxid) { vxid_t ovxid; vsl_sanity(vsl); ovxid = vsl->wid; VSLb(vsl, SLT_Link, "%s %ju %s", typ, VXID(vxid), why); VSL_End(vsl); vsl->wid = vxid; VSLb(vsl, SLT_Begin, "%s %ju %s", typ, VXID(ovxid), why); } /*--------------------------------------------------------------------*/ void VSL_End(struct vsl_log *vsl) { txt t; char p[] = ""; vsl_sanity(vsl); assert(!IS_NO_VXID(vsl->wid)); t.b = p; t.e = p; VSLbt(vsl, SLT_End, t); VSL_Flush(vsl, 0); vsl->wid = NO_VXID; } static void v_matchproto_(vsm_lock_f) vsm_vsc_lock(void) { PTOK(pthread_mutex_lock(&vsc_mtx)); } static void v_matchproto_(vsm_lock_f) vsm_vsc_unlock(void) { PTOK(pthread_mutex_unlock(&vsc_mtx)); } static void v_matchproto_(vsm_lock_f) vsm_vsmw_lock(void) { PTOK(pthread_mutex_lock(&vsm_mtx)); } static void v_matchproto_(vsm_lock_f) vsm_vsmw_unlock(void) { PTOK(pthread_mutex_unlock(&vsm_mtx)); } /*--------------------------------------------------------------------*/ void VSM_Init(void) { unsigned u; assert(UINT_MAX % VSL_SEGMENTS == VSL_SEGMENTS - 1); PTOK(pthread_mutex_init(&vsl_mtx, &mtxattr_errorcheck)); PTOK(pthread_mutex_init(&vsc_mtx, &mtxattr_errorcheck)); PTOK(pthread_mutex_init(&vsm_mtx, &mtxattr_errorcheck)); vsc_lock = vsm_vsc_lock; vsc_unlock = vsm_vsc_unlock; vsmw_lock = vsm_vsmw_lock; vsmw_unlock = vsm_vsmw_unlock; heritage.proc_vsmw = VSMW_New(heritage.vsm_fd, 0640, "_.index"); AN(heritage.proc_vsmw); VSC_C_main = VSC_main_New(NULL, NULL, ""); AN(VSC_C_main); AN(heritage.proc_vsmw); vsl_head = VSMW_Allocf(heritage.proc_vsmw, NULL, VSL_CLASS, cache_param->vsl_space, VSL_CLASS); AN(vsl_head); vsl_segsize = ((cache_param->vsl_space - sizeof *vsl_head) / sizeof *vsl_end) / VSL_SEGMENTS; vsl_end = vsl_head->log + vsl_segsize * VSL_SEGMENTS; /* Make segment_n always overflow on first log wrap to make any problems with regard to readers on that event visible */ vsl_segment_n = UINT_MAX - (VSL_SEGMENTS - 1); AZ(vsl_segment_n % VSL_SEGMENTS); vsl_ptr = vsl_head->log; *vsl_ptr = VSL_ENDMARKER; memset(vsl_head, 0, sizeof *vsl_head); vsl_head->segsize = vsl_segsize; vsl_head->offset[0] = 0; vsl_head->segment_n = 
vsl_segment_n; for (u = 1; u < VSL_SEGMENTS; u++) vsl_head->offset[u] = -1; VWMB(); memcpy(vsl_head->marker, VSL_HEAD_MARKER, sizeof vsl_head->marker); } varnish-7.5.0/bin/varnishd/cache/cache_transport.h000066400000000000000000000067321457605730600222030ustar00rootroot00000000000000/*- * Copyright (c) 2016 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * A transport is how we talk HTTP for a given request. * * This is different from a protocol because ESI child requests have * their own "protocol" to talk to the parent ESI request, which may * or may not, be talking a "real" HTTP protocol itself. 
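 * (An ESI sub-request, for instance, is always delivered through the
 * parent request's connection, whatever transport the parent speaks.)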
* */ struct req; struct boc; typedef void vtr_deliver_f (struct req *, struct boc *, int sendbody); typedef void vtr_req_body_f (struct req *); typedef void vtr_sess_panic_f (struct vsb *, const struct sess *); typedef void vtr_req_panic_f (struct vsb *, const struct req *); typedef void vtr_req_fail_f (struct req *, stream_close_t); typedef void vtr_reembark_f (struct worker *, struct req *); typedef int vtr_poll_f (struct req *); typedef int vtr_minimal_response_f (struct req *, uint16_t status); struct transport { unsigned magic; #define TRANSPORT_MAGIC 0xf157f32f uint16_t number; const char *proto_ident; // for -a args const char *name; task_func_t *new_session; task_func_t *unwait; vtr_req_fail_f *req_fail; vtr_req_body_f *req_body; vtr_deliver_f *deliver; vtr_sess_panic_f *sess_panic; vtr_req_panic_f *req_panic; vtr_reembark_f *reembark; vtr_poll_f *poll; vtr_minimal_response_f *minimal_response; VTAILQ_ENTRY(transport) list; }; #define TRANSPORTS \ TRANSPORT_MACRO(PROXY) \ TRANSPORT_MACRO(HTTP1) \ TRANSPORT_MACRO(HTTP2) #define TRANSPORT_MACRO(name) extern struct transport name##_transport; TRANSPORTS #undef TRANSPORT_MACRO htc_complete_f H2_prism_complete; void H2_PU_Sess(struct worker *, struct sess *, struct req *); void H2_OU_Sess(struct worker *, struct sess *, struct req *); const struct transport *XPORT_ByNumber(uint16_t no); int VPX_Send_Proxy(int fd, int version, const struct sess *); /* cache_session.c */ struct sess *SES_New(struct pool *); void SES_Delete(struct sess *, stream_close_t reason, vtim_real now); void SES_DeleteHS(struct sess *, enum htc_status_e hs, vtim_real now); void SES_Close(struct sess *, stream_close_t reason); void SES_SetTransport(struct worker *, struct sess *, struct req *, const struct transport *); varnish-7.5.0/bin/varnishd/cache/cache_varnishd.h000066400000000000000000000437121457605730600217640ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * Stuff that should *never* be exposed to a VMOD */ #include "cache.h" #include "vsb.h" #include #include #include #include #include "common/common_param.h" #ifdef NOT_IN_A_VMOD # include "VSC_main.h" #endif /*--------------------------------------------------------------------*/ struct vfp; struct vdp; struct cli_proto; struct poolparam; /*--------------------------------------------------------------------*/ typedef enum req_fsm_nxt req_state_f(struct worker *, struct req *); struct req_step { const char *name; req_state_f *func; }; extern const struct req_step R_STP_TRANSPORT[1]; extern const struct req_step R_STP_RECV[1]; struct vxid_pool { uint64_t next; uint32_t count; }; /*-------------------------------------------------------------------- * Private part of worker threads */ struct worker_priv { unsigned magic; #define WORKER_PRIV_MAGIC 0x3047db99 struct objhead *nobjhead; struct objcore *nobjcore; void *nhashpriv; struct vxid_pool vxid_pool[1]; struct vcl *vcl; }; /*-------------------------------------------------------------------- * HTTP Protocol connection structure * * This is the protocol independent object for a HTTP connection, used * both for backend and client sides. * */ struct http_conn { unsigned magic; #define HTTP_CONN_MAGIC 0x3e19edd1 int *rfd; stream_close_t doclose; body_status_t body_status; struct ws *ws; char *rxbuf_b; char *rxbuf_e; char *pipeline_b; char *pipeline_e; ssize_t content_length; void *priv; /* Timeouts */ vtim_dur first_byte_timeout; vtim_dur between_bytes_timeout; }; enum htc_status_e { #define HTC_STATUS(e, n, s, l) HTC_S_ ## e = n, #include "tbl/htc.h" }; typedef enum htc_status_e htc_complete_f(struct http_conn *); /* -------------------------------------------------------------------*/ extern volatile struct params * cache_param; /* ------------------------------------------------------------------- * The VCF facility is deliberately undocumented, use at your peril. 
*/ struct vcf_return { const char *name; }; #define VCF_RETURNS() \ VCF_RETURN(CONTINUE) \ VCF_RETURN(DEFAULT) \ VCF_RETURN(MISS) \ VCF_RETURN(HIT) #define VCF_RETURN(x) extern const struct vcf_return VCF_##x[1]; VCF_RETURNS() #undef VCF_RETURN typedef const struct vcf_return *vcf_func_f( struct req *req, struct objcore **oc, struct objcore **oc_exp, int state); struct vcf { unsigned magic; #define VCF_MAGIC 0x183285d1 vcf_func_f *func; void *priv; }; /* Prototypes etc ----------------------------------------------------*/ /* cache_acceptor.c */ void VCA_Init(void); void VCA_Shutdown(void); /* cache_backend_cfg.c */ void VBE_InitCfg(void); /* cache_ban.c */ /* for stevedoes resurrecting bans */ void BAN_Hold(void); void BAN_Release(void); void BAN_Reload(const uint8_t *ban, unsigned len); struct ban *BAN_FindBan(vtim_real t0); void BAN_RefBan(struct objcore *oc, struct ban *); vtim_real BAN_Time(const struct ban *ban); /* cache_busyobj.c */ struct busyobj *VBO_GetBusyObj(const struct worker *, const struct req *); void VBO_ReleaseBusyObj(struct worker *wrk, struct busyobj **busyobj); /* cache_director.c */ int VDI_GetHdr(struct busyobj *); VCL_IP VDI_GetIP(struct busyobj *); void VDI_Finish(struct busyobj *bo); stream_close_t VDI_Http1Pipe(struct req *, struct busyobj *); void VDI_Panic(const struct director *, struct vsb *, const char *nm); void VDI_Event(const struct director *d, enum vcl_event_e ev); void VDI_Init(void); /* cache_deliver_proc.c */ void VDP_Fini(const struct vdp_ctx *vdc); void VDP_Init(struct vdp_ctx *vdc, struct worker *wrk, struct vsl_log *vsl, struct req *req); uint64_t VDP_Close(struct vdp_ctx *, struct objcore *, struct boc *); void VDP_Panic(struct vsb *vsb, const struct vdp_ctx *vdc); int VDP_Push(VRT_CTX, struct vdp_ctx *, struct ws *, const struct vdp *, void *priv); int VDP_DeliverObj(struct vdp_ctx *vdc, struct objcore *oc); extern const struct vdp VDP_gunzip; extern const struct vdp VDP_esi; extern const struct vdp VDP_range; /* cache_exp.c */ vtim_real EXP_Ttl(const struct req *, const struct objcore *); vtim_real EXP_Ttl_grace(const struct req *, const struct objcore *oc); void EXP_RefNewObjcore(struct objcore *); void EXP_Insert(struct worker *wrk, struct objcore *oc); void EXP_Remove(struct objcore *, const struct objcore *); #define EXP_Dttl(req, oc) (oc->ttl - (req->t_req - oc->t_origin)) /* cache_expire.c */ /* * The set of variables which control object expiry are inconveniently * 24 bytes long (double+3*float) and this causes alignment waste if * we put then in a struct. * These three macros operate on the struct we don't use. 
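 * EXP_ZERO() clears the four fields, EXP_COPY() copies them between two
 * such holders and EXP_WHEN() computes the absolute point in time at which
 * the object finally leaves the cache (t_origin + ttl + grace + keep).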
*/ #define EXP_ZERO(xx) \ do { \ (xx)->t_origin = 0.0; \ (xx)->ttl = 0.0; \ (xx)->grace = 0.0; \ (xx)->keep = 0.0; \ } while (0) #define EXP_COPY(to,fm) \ do { \ (to)->t_origin = (fm)->t_origin; \ (to)->ttl = (fm)->ttl; \ (to)->grace = (fm)->grace; \ (to)->keep = (fm)->keep; \ } while (0) #define EXP_WHEN(to) \ ((to)->t_origin + (to)->ttl + (to)->grace + (to)->keep) /* cache_exp.c */ void EXP_Rearm(struct objcore *oc, vtim_real now, vtim_dur ttl, vtim_dur grace, vtim_dur keep); /* From cache_main.c */ void BAN_Init(void); void BAN_Compile(void); void BAN_Shutdown(void); /* From cache_hash.c */ void BAN_NewObjCore(struct objcore *oc); void BAN_DestroyObj(struct objcore *oc); int BAN_CheckObject(struct worker *, struct objcore *, struct req *); /* cache_busyobj.c */ void VBO_Init(void); /* cache_cli.c [CLI] */ void CLI_Init(void); void CLI_Run(void); void CLI_AddFuncs(struct cli_proto *p); /* cache_expire.c */ void EXP_Init(void); void EXP_Shutdown(void); /* cache_fetch.c */ enum vbf_fetch_mode_e { VBF_NORMAL = 0, VBF_PASS = 1, VBF_BACKGROUND = 2, }; void VBF_Fetch(struct worker *wrk, struct req *req, struct objcore *oc, struct objcore *oldoc, enum vbf_fetch_mode_e); const char *VBF_Get_Filter_List(struct busyobj *); void Bereq_Rollback(VRT_CTX); /* cache_fetch_proc.c */ void VFP_Init(void); struct vfp_entry *VFP_Push(struct vfp_ctx *, const struct vfp *); enum vfp_status VFP_GetStorage(struct vfp_ctx *, ssize_t *sz, uint8_t **ptr); void VFP_Extend(const struct vfp_ctx *, ssize_t sz, enum vfp_status); void VFP_Setup(struct vfp_ctx *vc, struct worker *wrk); int VFP_Open(VRT_CTX, struct vfp_ctx *); uint64_t VFP_Close(struct vfp_ctx *); extern const struct vfp VFP_gunzip; extern const struct vfp VFP_gzip; extern const struct vfp VFP_testgunzip; extern const struct vfp VFP_esi; extern const struct vfp VFP_esi_gzip; /* cache_http.c */ void HTTP_Init(void); /* cache_http1_proto.c */ htc_complete_f HTTP1_Complete; uint16_t HTTP1_DissectRequest(struct http_conn *, struct http *); uint16_t HTTP1_DissectResponse(struct http_conn *, struct http *resp, const struct http *req); unsigned HTTP1_Write(const struct worker *w, const struct http *hp, const int*); /* cache_main.c */ vxid_t VXID_Get(const struct worker *, uint64_t marker); extern pthread_key_t panic_key; extern pthread_key_t witness_key; void THR_SetName(const char *name); const char* THR_GetName(void); void THR_SetBusyobj(const struct busyobj *); struct busyobj * THR_GetBusyobj(void); void THR_SetRequest(const struct req *); struct req * THR_GetRequest(void); void THR_SetWorker(const struct worker *); struct worker * THR_GetWorker(void); void THR_Init(void); /* cache_lck.c */ void LCK_Init(void); /* cache_mempool.c */ void MPL_AssertSane(const void *item); struct mempool * MPL_New(const char *name, volatile struct poolparam *pp, volatile unsigned *cur_size); void MPL_Destroy(struct mempool **mpp); void *MPL_Get(struct mempool *mpl, unsigned *size); void MPL_Free(struct mempool *mpl, void *item); /* cache_obj.c */ void ObjInit(void); struct objcore * ObjNew(const struct worker *); void ObjDestroy(const struct worker *, struct objcore **); int ObjGetSpace(struct worker *, struct objcore *, ssize_t *sz, uint8_t **ptr); void ObjExtend(struct worker *, struct objcore *, ssize_t l, int final); uint64_t ObjWaitExtend(const struct worker *, const struct objcore *, uint64_t l); void ObjSetState(struct worker *, const struct objcore *, enum boc_state_e next); void ObjWaitState(const struct objcore *, enum boc_state_e want); void ObjTouch(struct worker *, 
struct objcore *, vtim_real now); void ObjFreeObj(struct worker *, struct objcore *); void ObjSlim(struct worker *, struct objcore *); void *ObjSetAttr(struct worker *, struct objcore *, enum obj_attr, ssize_t len, const void *); int ObjCopyAttr(struct worker *, struct objcore *, struct objcore *, enum obj_attr attr); void ObjBocDone(struct worker *, struct objcore *, struct boc **); int ObjSetDouble(struct worker *, struct objcore *, enum obj_attr, double); int ObjSetU64(struct worker *, struct objcore *, enum obj_attr, uint64_t); int ObjSetXID(struct worker *, struct objcore *, vxid_t); void ObjSetFlag(struct worker *, struct objcore *, enum obj_flags of, int val); void ObjSendEvent(struct worker *, struct objcore *oc, unsigned event); #define OEV_INSERT (1U<<1) #define OEV_BANCHG (1U<<2) #define OEV_TTLCHG (1U<<3) #define OEV_EXPIRE (1U<<4) #define OEV_MASK (OEV_INSERT|OEV_BANCHG|OEV_TTLCHG|OEV_EXPIRE) typedef void obj_event_f(struct worker *, void *priv, struct objcore *, unsigned); uintptr_t ObjSubscribeEvents(obj_event_f *, void *, unsigned mask); void ObjUnsubscribeEvents(uintptr_t *); /* cache_panic.c */ void PAN_Init(void); int PAN__DumpStruct(struct vsb *vsb, int block, int track, const void *ptr, const char *smagic, unsigned magic, const char *fmt, ...) v_printflike_(7,8); #define PAN_dump_struct(vsb, ptr, magic, ...) \ PAN__DumpStruct(vsb, 1, 1, ptr, #magic, magic, __VA_ARGS__) #define PAN_dump_oneline(vsb, ptr, magic, ...) \ PAN__DumpStruct(vsb, 0, 1, ptr, #magic, magic, __VA_ARGS__) #define PAN_dump_once(vsb, ptr, magic, ...) \ PAN__DumpStruct(vsb, 1, 0, ptr, #magic, magic, __VA_ARGS__) #define PAN_dump_once_oneline(vsb, ptr, magic, ...) \ PAN__DumpStruct(vsb, 0, 0, ptr, #magic, magic, __VA_ARGS__) /* cache_pool.c */ void Pool_Init(void); int Pool_Task(struct pool *pp, struct pool_task *task, enum task_prio prio); int Pool_Task_Arg(struct worker *, enum task_prio, task_func_t *, const void *arg, size_t arg_len); void Pool_Sumstat(const struct worker *w); int Pool_TrySumstat(const struct worker *wrk); void Pool_PurgeStat(unsigned nobj); int Pool_Task_Any(struct pool_task *task, enum task_prio prio); void pan_pool(struct vsb *); /* cache_range.c */ int VRG_CheckBo(struct busyobj *); /* cache_req.c */ struct req *Req_New(struct sess *); void Req_Release(struct req *); void Req_Rollback(VRT_CTX); void Req_Cleanup(struct sess *sp, struct worker *wrk, struct req *req); void Req_Fail(struct req *req, stream_close_t reason); void Req_AcctLogCharge(struct VSC_main_wrk *, struct req *); void Req_LogHit(struct worker *, struct req *, struct objcore *, intmax_t); const char *Req_LogStart(const struct worker *, struct req *); /* cache_req_body.c */ int VRB_Ignore(struct req *); ssize_t VRB_Cache(struct req *, ssize_t maxsize); void VRB_Free(struct req *); /* cache_req_fsm.c [CNT] */ int Resp_Setup_Deliver(struct req *); void Resp_Setup_Synth(struct req *); enum req_fsm_nxt { REQ_FSM_MORE, REQ_FSM_DONE, REQ_FSM_DISEMBARK, }; void CNT_Embark(struct worker *, struct req *); enum req_fsm_nxt CNT_Request(struct req *); /* cache_session.c */ void SES_NewPool(struct pool *, unsigned pool_no); void SES_DestroyPool(struct pool *); void SES_Wait(struct sess *, const struct transport *); void SES_Ref(struct sess *sp); void SES_Rel(struct sess *sp); void HTC_Status(enum htc_status_e, const char **, const char **); void HTC_RxInit(struct http_conn *htc, struct ws *ws); void HTC_RxPipeline(struct http_conn *htc, char *); enum htc_status_e HTC_RxStuff(struct http_conn *, htc_complete_f *, vtim_real *t1, 
vtim_real *t2, vtim_real ti, vtim_real tn, vtim_dur td, int maxbytes); #define SESS_ATTR(UP, low, typ, len) \ int SES_Set_##low(const struct sess *sp, const typ *src); \ int SES_Reserve_##low(struct sess *sp, typ **dst, ssize_t *sz); #include "tbl/sess_attr.h" int SES_Set_String_Attr(struct sess *sp, enum sess_attr a, const char *src); /* cache_shmlog.c */ extern struct VSC_main *VSC_C_main; void VSM_Init(void); void VSL_Setup(struct vsl_log *vsl, void *ptr, size_t len); void VSL_ChgId(struct vsl_log *vsl, const char *typ, const char *why, vxid_t vxid); void VSL_End(struct vsl_log *vsl); void VSL_Flush(struct vsl_log *, int overflow); /* cache_conn_pool.c */ struct conn_pool; void VCP_Init(void); void VCP_Panic(struct vsb *, struct conn_pool *); /* cache_backend_probe.c */ void VBP_Init(void); /* cache_vary.c */ int VRY_Create(struct busyobj *bo, struct vsb **psb); int VRY_Match(const struct req *, const uint8_t *vary); void VRY_Prep(struct req *); void VRY_Clear(struct req *); enum vry_finish_flag { KEEP, DISCARD }; void VRY_Finish(struct req *req, enum vry_finish_flag); /* cache_vcl.c */ void VCL_Bo2Ctx(struct vrt_ctx *, struct busyobj *); void VCL_Req2Ctx(struct vrt_ctx *, struct req *); struct vrt_ctx *VCL_Get_CliCtx(int); struct vsb *VCL_Rel_CliCtx(struct vrt_ctx **); void VCL_Panic(struct vsb *, const char *nm, const struct vcl *); void VCL_Poll(void); void VCL_Init(void); #define VCL_MET_MAC(l,u,t,b) \ void VCL_##l##_method(struct vcl *, struct worker *, struct req *, \ struct busyobj *bo, void *specific); #include "tbl/vcl_returns.h" typedef int vcl_be_func(struct cli *, struct director *, void *); int VCL_IterDirector(struct cli *, const char *, vcl_be_func *, void *); /* cache_vrt.c */ void pan_privs(struct vsb *, const struct vrt_privs *); /* cache_vrt_filter.c */ int VCL_StackVFP(struct vfp_ctx *, const struct vcl *, const char *); int VCL_StackVDP(struct req *, const struct vcl *, const char *); const char *resp_Get_Filter_List(struct req *req); void VCL_VRT_Init(void); /* cache_vrt_vcl.c */ const char *VCL_Return_Name(unsigned); const char *VCL_Method_Name(unsigned); void VCL_Refresh(struct vcl **); void VCL_Recache(const struct worker *, struct vcl **); void VCL_Ref(struct vcl *); void VCL_Rel(struct vcl **); VCL_BACKEND VCL_DefaultDirector(const struct vcl *); const struct vrt_backend_probe *VCL_DefaultProbe(const struct vcl *); /* cache_vrt_priv.c */ extern struct vrt_privs cli_task_privs[1]; void VCL_TaskEnter(struct vrt_privs *); void VCL_TaskLeave(VRT_CTX, struct vrt_privs *); /* cache_vrt_vmod.c */ void VMOD_Init(void); void VMOD_Panic(struct vsb *); #if defined(ENABLE_COVERAGE) || defined(ENABLE_SANITIZER) # define DONT_DLCLOSE_VMODS #endif /* cache_wrk.c */ void WRK_Init(void); void WRK_AddStat(const struct worker *); void WRK_Log(enum VSL_tag_e, const char *, ...); /* cache_vpi.c */ extern const size_t vpi_wrk_len; void VPI_wrk_init(struct worker *, void *, size_t); void VPI_Panic(struct vsb *, const struct wrk_vpi *, const struct vcl *); /* cache_ws.c */ void WS_Panic(struct vsb *, const struct ws *); static inline int WS_IsReserved(const struct ws *ws) { return (ws->r != NULL); } void *WS_AtOffset(const struct ws *ws, unsigned off, unsigned len); unsigned WS_ReservationOffset(const struct ws *ws); unsigned WS_ReqPipeline(struct ws *, const void *b, const void *e); /* cache_ws_common.c */ void WS_Id(const struct ws *ws, char *id); void WS_Rollback(struct ws *, uintptr_t); /* http1/cache_http1_pipe.c */ void V1P_Init(void); /* cache_http2_deliver.c */ void 
V2D_Init(void); /* stevedore.c */ void STV_open(void); void STV_close(void); const struct stevedore *STV_next(void); int STV_BanInfoDrop(const uint8_t *ban, unsigned len); int STV_BanInfoNew(const uint8_t *ban, unsigned len); void STV_BanExport(const uint8_t *banlist, unsigned len); // STV_NewObject() len is space for OBJ_VARATTR int STV_NewObject(struct worker *, struct objcore *, const struct stevedore *, unsigned len); struct stv_buffer; struct stv_buffer *STV_AllocBuf(struct worker *wrk, const struct stevedore *stv, size_t size); void STV_FreeBuf(struct worker *wrk, struct stv_buffer **pstvbuf); void *STV_GetBufPtr(struct stv_buffer *stvbuf, size_t *psize); #if WITH_PERSISTENT_STORAGE /* storage_persistent.c */ void SMP_Ready(void); #endif #define FEATURE(x) COM_FEATURE(cache_param->feature_bits, x) #define EXPERIMENT(x) COM_EXPERIMENT(cache_param->experimental_bits, x) #define DO_DEBUG(x) COM_DO_DEBUG(cache_param->debug_bits, x) #define DSL(debug_bit, id, ...) \ do { \ if (DO_DEBUG(debug_bit)) \ VSL(SLT_Debug, (id), __VA_ARGS__); \ } while (0) #define DSLb(debug_bit, ...) \ do { \ if (DO_DEBUG(debug_bit)) \ WRK_Log(SLT_Debug, __VA_ARGS__); \ } while (0) varnish-7.5.0/bin/varnishd/cache/cache_vary.c000066400000000000000000000214331457605730600211160ustar00rootroot00000000000000/*- * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Do Vary processing. * * When we insert an object into the cache which has a Vary: header, * we encode a vary matching string containing the headers mentioned * and their value. * * When we match an object in the cache, we check the present request * against the vary matching string. * * The only kind of header-munging we do is leading & trailing space * removal. All the potential "q=foo" gymnastics is not worth the * effort. * * The vary matching string has the following format: * * Sequence of: { * \ Length of header contents. * / * \ *
\ Same format as argument to http_GetHdr() * ':' / * '\0' / *
> Only present if length != 0xffff * } * 0xff, \ Length field * 0xff, / * '\0' > Terminator */ #include "config.h" #include #include "cache_varnishd.h" #include "vct.h" #include "vend.h" static unsigned VRY_Validate(const uint8_t *vary); /********************************************************************** * Create a Vary matching string from a Vary header * * Return value: * <0: Parse error * 0: No Vary header on object * >0: Length of Vary matching string in *psb */ int VRY_Create(struct busyobj *bo, struct vsb **psb) { const char *v, *p, *q, *h, *e; struct vsb *sb, *sbh; unsigned l; int error = 0; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); CHECK_OBJ_NOTNULL(bo->bereq, HTTP_MAGIC); CHECK_OBJ_NOTNULL(bo->beresp, HTTP_MAGIC); AN(psb); AZ(*psb); /* No Vary: header, no worries */ if (!http_GetHdr(bo->beresp, H_Vary, &v)) return (0); /* For vary matching string */ sb = VSB_new_auto(); AN(sb); /* For header matching strings */ sbh = VSB_new_auto(); AN(sbh); for (p = v; *p; p++) { /* Find next header-name */ if (vct_issp(*p)) continue; for (q = p; *q && !vct_issp(*q) && *q != ','; q++) continue; if (q - p > INT8_MAX) { VSLb(bo->vsl, SLT_Error, "Vary header name length exceeded"); error = 1; break; } /* Build a header-matching string out of it */ VSB_clear(sbh); AZ(VSB_printf(sbh, "%c%.*s:%c", (char)(1 + (q - p)), (int)(q - p), p, 0)); AZ(VSB_finish(sbh)); if (http_GetHdr(bo->bereq, VSB_data(sbh), &h)) { AZ(vct_issp(*h)); /* Trim trailing space */ e = strchr(h, '\0'); while (e > h && vct_issp(e[-1])) e--; /* Encode two byte length and contents */ l = e - h; if (l > 0xffff - 1) { VSLb(bo->vsl, SLT_Error, "Vary header maximum length exceeded"); error = 1; break; } } else { e = h; l = 0xffff; } AZ(VSB_printf(sb, "%c%c", (int)(l >> 8), (int)(l & 0xff))); /* Append to vary matching string */ AZ(VSB_bcat(sb, VSB_data(sbh), VSB_len(sbh))); if (e != h) AZ(VSB_bcat(sb, h, e - h)); while (vct_issp(*q)) q++; if (*q == '\0') break; if (*q != ',') { VSLb(bo->vsl, SLT_Error, "Malformed Vary header"); error = 1; break; } p = q; } if (error) { VSB_destroy(&sbh); VSB_destroy(&sb); return (-1); } /* Terminate vary matching string */ VSB_printf(sb, "%c%c%c", 0xff, 0xff, 0); VSB_destroy(&sbh); AZ(VSB_finish(sb)); *psb = sb; return (VSB_len(sb)); } /* * Find length of a vary entry */ static unsigned VRY_Len(const uint8_t *p) { unsigned l = vbe16dec(p); return (2 + p[2] + 2 + (l == 0xffff ? 0 : l)); } /* * Compare two vary entries */ static int vry_cmp(const uint8_t *v1, const uint8_t *v2) { unsigned retval = 0; if (!memcmp(v1, v2, VRY_Len(v1))) { /* Same same */ retval = 0; } else if (memcmp(v1 + 2, v2 + 2, v1[2] + 2)) { /* Different header */ retval = 1; } else if (cache_param->http_gzip_support && http_hdr_eq(H_Accept_Encoding, (const char*) v1 + 2)) { /* * If we do gzip processing, we do not vary on Accept-Encoding, * because we want everybody to get the gzip'ed object, and * varnish will gunzip as necessary. We implement the skip at * check time, rather than create time, so that object in * persistent storage can be used with either setting of * http_gzip_support. 
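		 * In other words, two vary entries which differ only in
		 * their stored Accept-Encoding contents are considered
		 * equal here when gzip support is enabled.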
*/ retval = 0; } else { /* Same header, different content */ retval = 2; } return (retval); } /********************************************************************** * Prepare predictive vary string */ void VRY_Prep(struct req *req) { if (req->hash_objhead == NULL) { /* Not a waiting list return */ AZ(req->vary_b); AZ(req->vary_e); (void)WS_ReserveAll(req->ws); } else { AN(WS_Reservation(req->ws)); } req->vary_b = WS_Reservation(req->ws); req->vary_e = req->vary_b + WS_ReservationSize(req->ws); if (req->vary_b + 2 < req->vary_e) req->vary_b[2] = '\0'; } void VRY_Clear(struct req *req) { CHECK_OBJ_NOTNULL(req, REQ_MAGIC); if (req->vary_b != NULL) free(req->vary_b); req->vary_b = NULL; AZ(req->vary_e); } /********************************************************************** * Finish predictive vary processing */ void VRY_Finish(struct req *req, enum vry_finish_flag flg) { uint8_t *p = NULL; size_t l; if (req->vary_b + 2 >= req->vary_e) { req->vary_b = NULL; req->vary_e = NULL; WS_Release(req->ws, 0); WS_MarkOverflow(req->ws); return; } l = VRY_Validate(req->vary_b); if (flg == KEEP && l > 3) { p = malloc(l); if (p != NULL) memcpy(p, req->vary_b, l); } WS_Release(req->ws, 0); req->vary_e = NULL; req->vary_b = p; } /********************************************************************** * Match vary strings, and build a new cached string if possible. * * Return zero if there is certainly no match. * Return non-zero if there could be a match or if we couldn't tell. */ int VRY_Match(const struct req *req, const uint8_t *vary) { uint8_t *vsp = req->vary_b; const char *h, *e; unsigned lh, ln; int i, oflo = 0; AN(vsp); AN(vary); while (vary[2]) { if (vsp + 2 >= req->vary_e) { /* * Too little workspace to find out */ oflo = 1; break; } i = vry_cmp(vary, vsp); if (i == 1) { /* * Different header, build a new entry, * then compare again with that new entry. */ ln = 2 + vary[2] + 2; i = http_GetHdr(req->http, (const char*)(vary+2), &h); if (i) { /* Trim trailing space */ e = strchr(h, '\0'); while (e > h && vct_issp(e[-1])) e--; lh = e - h; assert(lh < 0xffff); ln += lh; } else { e = h = NULL; lh = 0xffff; } if (vsp + ln + 3 >= req->vary_e) { /* * Not enough space to build new entry * and put terminator behind it. */ oflo = 1; break; } vbe16enc(vsp, (uint16_t)lh); memcpy(vsp + 2, vary + 2, vary[2] + 2); if (h != NULL) memcpy(vsp + 2 + vsp[2] + 2, h, lh); vsp[ln++] = 0xff; vsp[ln++] = 0xff; vsp[ln++] = 0; assert(VRY_Validate(vsp) == ln); i = vry_cmp(vary, vsp); assert(i == 0 || i == 2); } if (i == 0) { /* Same header, same contents */ vsp += VRY_Len(vsp); vary += VRY_Len(vary); } else if (i == 2) { /* Same header, different contents, cannot match */ return (0); } } if (oflo) { vsp = req->vary_b; if (vsp + 2 < req->vary_e) { vsp[0] = 0xff; vsp[1] = 0xff; vsp[2] = 0; } return (0); } else { return (1); } } /* * Check the validity of a Vary string and return its total length */ static unsigned VRY_Validate(const uint8_t *vary) { unsigned l, retval = 0; while (vary[2] != 0) { assert(strlen((const char*)vary + 3) == vary[2]); l = VRY_Len(vary); retval += l; vary += l; } return (retval + 3); } varnish-7.5.0/bin/varnishd/cache/cache_vcl.c000066400000000000000000000571541457605730600207320ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2016 Varnish Software AS * All rights reserved. 
* * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include #include #include #include #include "cache_varnishd.h" #include "common/heritage.h" #include "vcl.h" #include "cache_director.h" #include "cache_vcl.h" #include "vcli_serve.h" #include "vte.h" #include "vtim.h" #include "vcc_interface.h" const struct vcltemp VCL_TEMP_INIT[1] = {{ .name = "init", .is_cold = 1 }}; const struct vcltemp VCL_TEMP_COLD[1] = {{ .name = "cold", .is_cold = 1 }}; const struct vcltemp VCL_TEMP_WARM[1] = {{ .name = "warm", .is_warm = 1 }}; const struct vcltemp VCL_TEMP_BUSY[1] = {{ .name = "busy", .is_warm = 1 }}; const struct vcltemp VCL_TEMP_COOLING[1] = {{ .name = "cooling" }}; // not really a temperature static const struct vcltemp VCL_TEMP_LABEL[1] = {{ .name = "label" }}; /* * XXX: Presently all modifications to this list happen from the * CLI event-engine, so no locking is necessary */ static VTAILQ_HEAD(, vcl) vcl_head = VTAILQ_HEAD_INITIALIZER(vcl_head); struct lock vcl_mtx; struct vcl *vcl_active; /* protected by vcl_mtx */ static struct vrt_ctx ctx_cli; static struct wrk_vpi wrk_vpi_cli; static struct ws ws_cli; static uintptr_t ws_snapshot_cli; static struct vsl_log vsl_cli; /*--------------------------------------------------------------------*/ void VCL_Bo2Ctx(struct vrt_ctx *ctx, struct busyobj *bo) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); CHECK_OBJ_NOTNULL(bo->wrk, WORKER_MAGIC); ctx->vcl = bo->vcl; ctx->syntax = ctx->vcl->conf->syntax; ctx->vsl = bo->vsl; ctx->http_bereq = bo->bereq; ctx->http_beresp = bo->beresp; ctx->bo = bo; ctx->sp = bo->sp; ctx->now = bo->t_prev; ctx->ws = bo->ws; ctx->vpi = bo->wrk->vpi; ctx->vpi->handling = 0; ctx->vpi->trace = bo->trace; } void VCL_Req2Ctx(struct vrt_ctx *ctx, struct req *req) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->doclose, STREAM_CLOSE_MAGIC); ctx->vcl = req->vcl; ctx->syntax = ctx->vcl->conf->syntax; ctx->vsl = req->vsl; ctx->http_req = req->http; CHECK_OBJ_NOTNULL(req->top, REQTOP_MAGIC); ctx->http_req_top = req->top->topreq->http; ctx->http_resp = req->resp; ctx->req = req; ctx->sp = req->sp; ctx->now = req->t_prev; ctx->ws = req->ws; ctx->vpi = req->wrk->vpi; ctx->vpi->handling 
= 0; ctx->vpi->trace = req->trace; } /*--------------------------------------------------------------------*/ struct vrt_ctx * VCL_Get_CliCtx(int msg) { ASSERT_CLI(); INIT_OBJ(&ctx_cli, VRT_CTX_MAGIC); INIT_OBJ(&wrk_vpi_cli, WRK_VPI_MAGIC); ctx_cli.vpi = &wrk_vpi_cli; wrk_vpi_cli.trace = FEATURE(FEATURE_TRACE); ctx_cli.now = VTIM_real(); if (msg) { ctx_cli.msg = VSB_new_auto(); AN(ctx_cli.msg); } else { ctx_cli.vsl = &vsl_cli; } ctx_cli.ws = &ws_cli; WS_Assert(ctx_cli.ws); return (&ctx_cli); } /* * releases CLI ctx * * returns finished error msg vsb if VCL_Get_CliCtx(1) was called * * caller needs to VSB_destroy a non-NULL return value * */ struct vsb * VCL_Rel_CliCtx(struct vrt_ctx **ctx) { struct vsb *r = NULL; ASSERT_CLI(); assert(*ctx == &ctx_cli); AN((*ctx)->vpi); if (ctx_cli.msg) { TAKE_OBJ_NOTNULL(r, &ctx_cli.msg, VSB_MAGIC); AZ(VSB_finish(r)); } if (ctx_cli.vsl) VSL_Flush(ctx_cli.vsl, 0); WS_Assert(ctx_cli.ws); WS_Rollback(&ws_cli, ws_snapshot_cli); INIT_OBJ(*ctx, VRT_CTX_MAGIC); *ctx = NULL; return (r); } /*--------------------------------------------------------------------*/ /* VRT_fail() can be called * - from the vcl sub via a vmod * - via a PRIV_TASK .fini callback * * if this happens during init, we fail it * if during fini, we ignore, because otherwise VMOD authors would be forced to * handle VCL_MET_FINI specifically everywhere. */ static int vcl_event_handling(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (ctx->vpi->handling == 0) return (0); assert(ctx->vpi->handling == VCL_RET_FAIL); if (ctx->method == VCL_MET_INIT) return (1); /* * EVENT_WARM / EVENT_COLD: method == 0 * must not set handling */ assert(ctx->method == VCL_MET_FINI); ctx->vpi->handling = 0; VRT_fail(ctx, "VRT_fail() from vcl_fini{} has no effect"); return (0); } static int vcl_send_event(struct vcl *vcl, enum vcl_event_e ev, struct vsb **msg) { int r, havemsg; unsigned method = 0; struct vrt_ctx *ctx; ASSERT_CLI(); ASSERT_VCL_ACTIVE(); CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); CHECK_OBJ_NOTNULL(vcl->conf, VCL_CONF_MAGIC); AN(msg); AZ(*msg); switch (ev) { case VCL_EVENT_LOAD: method = VCL_MET_INIT; /* FALLTHROUGH */ case VCL_EVENT_WARM: havemsg = 1; break; case VCL_EVENT_DISCARD: method = VCL_MET_FINI; /* FALLTHROUGH */ case VCL_EVENT_COLD: havemsg = 0; break; default: WRONG("vcl_event"); } ctx = VCL_Get_CliCtx(havemsg); AN(ctx->vpi); AZ(ctx->vpi->handling); AN(ctx->ws); ctx->vcl = vcl; ctx->syntax = ctx->vcl->conf->syntax; ctx->method = method; VCL_TaskEnter(cli_task_privs); r = ctx->vcl->conf->event_vcl(ctx, ev); VCL_TaskLeave(ctx, cli_task_privs); r |= vcl_event_handling(ctx); *msg = VCL_Rel_CliCtx(&ctx); if (r && (ev == VCL_EVENT_COLD || ev == VCL_EVENT_DISCARD)) WRONG("A VMOD cannot fail COLD or DISCARD events"); return (r); } /*--------------------------------------------------------------------*/ struct vcl * vcl_find(const char *name) { struct vcl *vcl; ASSERT_CLI(); VTAILQ_FOREACH(vcl, &vcl_head, list) { if (vcl->discard) continue; if (!strcmp(vcl->loaded_name, name)) return (vcl); } return (NULL); } /*--------------------------------------------------------------------*/ static void vcl_panic_conf(struct vsb *vsb, const struct VCL_conf *conf) { unsigned u; const struct vpi_ii *ii; if (PAN_dump_struct(vsb, conf, VCL_CONF_MAGIC, "conf")) return; VSB_printf(vsb, "syntax = \"%u\",\n", conf->syntax); VSB_cat(vsb, "srcname = {\n"); VSB_indent(vsb, 2); for (u = 0; u < conf->nsrc; ++u) VSB_printf(vsb, "[%u] = \"%s\",\n", u, conf->srcname[u]); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); VSB_cat(vsb, 
"instances = {\n"); VSB_indent(vsb, 2); ii = conf->instance_info; while (ii != NULL && ii->p != NULL) { VSB_printf(vsb, "\"%s\" = %p,\n", ii->name, (const void *)*(const uintptr_t *)ii->p); ii++; } VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } void VCL_Panic(struct vsb *vsb, const char *nm, const struct vcl *vcl) { AN(vsb); if (PAN_dump_struct(vsb, vcl, VCL_MAGIC, "vcl[%s]", nm)) return; VSB_printf(vsb, "name = \"%s\",\n", vcl->loaded_name); VSB_printf(vsb, "busy = %u,\n", vcl->busy); VSB_printf(vsb, "discard = %u,\n", vcl->discard); VSB_printf(vsb, "state = %s,\n", vcl->state); VSB_printf(vsb, "temp = %s,\n", vcl->temp ? vcl->temp->name : "(null)"); vcl_panic_conf(vsb, vcl->conf); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /*--------------------------------------------------------------------*/ void VCL_Update(struct vcl **vcc, struct vcl *vcl) { struct vcl *old; AN(vcc); old = *vcc; *vcc = NULL; CHECK_OBJ_ORNULL(old, VCL_MAGIC); ASSERT_VCL_ACTIVE(); Lck_Lock(&vcl_mtx); if (old != NULL) { assert(old->busy > 0); old->busy--; } if (vcl == NULL) vcl = vcl_active; /* Sample vcl_active under lock to avoid * race */ CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); if (vcl->label == NULL) { AN(strcmp(vcl->state, VCL_TEMP_LABEL->name)); *vcc = vcl; } else { AZ(strcmp(vcl->state, VCL_TEMP_LABEL->name)); *vcc = vcl->label; } CHECK_OBJ_NOTNULL(*vcc, VCL_MAGIC); AZ((*vcc)->discard); (*vcc)->busy++; Lck_Unlock(&vcl_mtx); assert((*vcc)->temp->is_warm); } /*--------------------------------------------------------------------*/ static int vcl_iterdir(struct cli *cli, const char *pat, const struct vcl *vcl, vcl_be_func *func, void *priv) { int i, found = 0; struct vcldir *vdir; Lck_AssertHeld(&vcl_mtx); VTAILQ_FOREACH(vdir, &vcl->director_list, list) { if (fnmatch(pat, vdir->cli_name, 0)) continue; found++; i = func(cli, vdir->dir, priv); if (i < 0) return (i); found += i; } return (found); } int VCL_IterDirector(struct cli *cli, const char *pat, vcl_be_func *func, void *priv) { int i, found = 0; struct vsb *vsb; struct vcl *vcl; ASSERT_CLI(); ASSERT_VCL_ACTIVE(); vsb = VSB_new_auto(); AN(vsb); if (pat == NULL || *pat == '\0' || !strcmp(pat, "*")) { // all backends in active VCL VSB_printf(vsb, "%s.*", VCL_Name(vcl_active)); vcl = vcl_active; } else if (strchr(pat, '.') == NULL) { // pattern applies to active vcl VSB_printf(vsb, "%s.%s", VCL_Name(vcl_active), pat); vcl = vcl_active; } else { // use pattern as is VSB_cat(vsb, pat); vcl = NULL; } AZ(VSB_finish(vsb)); Lck_Lock(&vcl_mtx); if (vcl != NULL) { found = vcl_iterdir(cli, VSB_data(vsb), vcl, func, priv); } else { VTAILQ_FOREACH(vcl, &vcl_head, list) { i = vcl_iterdir(cli, VSB_data(vsb), vcl, func, priv); if (i < 0) { found = i; break; } else { found += i; } } } Lck_Unlock(&vcl_mtx); VSB_destroy(&vsb); return (found); } static void vcl_BackendEvent(const struct vcl *vcl, enum vcl_event_e e) { struct vcldir *vdir; ASSERT_CLI(); CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); AZ(vcl->busy); Lck_Lock(&vcl_mtx); VTAILQ_FOREACH(vdir, &vcl->director_list, list) VDI_Event(vdir->dir, e); Lck_Unlock(&vcl_mtx); } static void vcl_KillBackends(const struct vcl *vcl) { struct vcldir *vdir, *vdir2; CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); assert(vcl->temp == VCL_TEMP_COLD || vcl->temp == VCL_TEMP_INIT); /* * Unlocked because no further directors can be added, and the * remaining ones need to be able to remove themselves. 
*/ VTAILQ_FOREACH_SAFE(vdir, &vcl->director_list, list, vdir2) VDI_Event(vdir->dir, VCL_EVENT_DISCARD); assert(VTAILQ_EMPTY(&vcl->director_list)); } /*--------------------------------------------------------------------*/ static struct vcl * VCL_Open(const char *fn, struct vsb *msg) { struct vcl *vcl; void *dlh; struct VCL_conf const *cnf; AN(fn); AN(msg); #ifdef RTLD_NOLOAD /* Detect bogus caching by dlopen(3) */ dlh = dlopen(fn, RTLD_NOW | RTLD_NOLOAD); AZ(dlh); #endif dlh = dlopen(fn, RTLD_NOW | RTLD_LOCAL); if (dlh == NULL) { VSB_cat(msg, "Could not load compiled VCL.\n"); VSB_printf(msg, "\tdlopen() = %s\n", dlerror()); VSB_cat(msg, "\thint: check for \"noexec\" mount\n"); return (NULL); } cnf = dlsym(dlh, "VCL_conf"); if (cnf == NULL) { VSB_cat(msg, "Compiled VCL lacks metadata.\n"); (void)dlclose(dlh); return (NULL); } if (cnf->magic != VCL_CONF_MAGIC) { VSB_cat(msg, "Compiled VCL has mangled metadata.\n"); (void)dlclose(dlh); return (NULL); } if (cnf->syntax < heritage.min_vcl_version || cnf->syntax > heritage.max_vcl_version) { VSB_printf(msg, "Compiled VCL version (%.1f) not supported.\n", .1 * cnf->syntax); (void)dlclose(dlh); return (NULL); } ALLOC_OBJ(vcl, VCL_MAGIC); AN(vcl); vcl->dlh = dlh; vcl->conf = cnf; return (vcl); } static void VCL_Close(struct vcl **vclp) { struct vcl *vcl; TAKE_OBJ_NOTNULL(vcl, vclp, VCL_MAGIC); assert(VTAILQ_EMPTY(&vcl->filters)); AZ(dlclose(vcl->dlh)); FREE_OBJ(vcl); } /*-------------------------------------------------------------------- * NB: This function is called in/from the test-load subprocess. */ int VCL_TestLoad(const char *fn) { struct vsb *vsb; struct vcl *vcl; int retval = 0; AN(fn); vsb = VSB_new_auto(); AN(vsb); vcl = VCL_Open(fn, vsb); if (vcl == NULL) { AZ(VSB_finish(vsb)); fprintf(stderr, "%s", VSB_data(vsb)); retval = -1; } else VCL_Close(&vcl); VSB_destroy(&vsb); return (retval); } /*--------------------------------------------------------------------*/ static struct vsb * vcl_print_refs(const struct vcl *vcl) { struct vsb *msg; struct vclref *ref; CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); msg = VSB_new_auto(); AN(msg); VSB_printf(msg, "VCL %s is waiting for:", vcl->loaded_name); Lck_Lock(&vcl_mtx); VTAILQ_FOREACH(ref, &vcl->ref_list, list) VSB_printf(msg, "\n\t- %s", ref->desc); Lck_Unlock(&vcl_mtx); AZ(VSB_finish(msg)); return (msg); } static int vcl_set_state(struct vcl *vcl, const char *state, struct vsb **msg) { struct vsb *nomsg = NULL; int i = 0; ASSERT_CLI(); CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); AN(state); AN(msg); AZ(*msg); AN(vcl->temp); switch (state[0]) { case '0': if (vcl->temp == VCL_TEMP_COLD) break; if (vcl->busy == 0 && vcl->temp->is_warm) { vcl->temp = VTAILQ_EMPTY(&vcl->ref_list) ? 
VCL_TEMP_COLD : VCL_TEMP_COOLING; vcl_BackendEvent(vcl, VCL_EVENT_COLD); AZ(vcl_send_event(vcl, VCL_EVENT_COLD, msg)); AZ(*msg); } else if (vcl->busy) vcl->temp = VCL_TEMP_BUSY; else if (VTAILQ_EMPTY(&vcl->ref_list)) vcl->temp = VCL_TEMP_COLD; else vcl->temp = VCL_TEMP_COOLING; break; case '1': if (vcl->temp == VCL_TEMP_WARM) break; /* The warm VCL hasn't seen a cold event yet */ if (vcl->temp == VCL_TEMP_BUSY) vcl->temp = VCL_TEMP_WARM; /* The VCL must first reach a stable cold state */ else if (vcl->temp == VCL_TEMP_COOLING) { *msg = vcl_print_refs(vcl); i = -1; } else { vcl->temp = VCL_TEMP_WARM; i = vcl_send_event(vcl, VCL_EVENT_WARM, msg); if (i == 0) { vcl_BackendEvent(vcl, VCL_EVENT_WARM); break; } AZ(vcl_send_event(vcl, VCL_EVENT_COLD, &nomsg)); AZ(nomsg); vcl->temp = VCL_TEMP_COLD; } break; default: WRONG("Wrong enum state"); } if (i == 0 && state[1]) bprintf(vcl->state, "%s", state + 1); return (i); } static void vcl_cancel_load(struct vcl *vcl, struct cli *cli, struct vsb *msg, const char *name, const char *step) { CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); CHECK_OBJ_NOTNULL(vcl->conf, VCL_CONF_MAGIC); VCLI_SetResult(cli, CLIS_CANT); VCLI_Out(cli, "VCL \"%s\" Failed %s", name, step); if (VSB_len(msg)) VCLI_Out(cli, "\nMessage:\n\t%s", VSB_data(msg)); VSB_destroy(&msg); AZ(vcl_send_event(vcl, VCL_EVENT_DISCARD, &msg)); AZ(msg); vcl_KillBackends(vcl); free(vcl->loaded_name); VCL_Close(&vcl); } static void vcl_load(struct cli *cli, const char *name, const char *fn, const char *state) { struct vcl *vcl; struct vsb *msg; ASSERT_CLI(); ASSERT_VCL_ACTIVE(); vcl = vcl_find(name); AZ(vcl); msg = VSB_new_auto(); AN(msg); vcl = VCL_Open(fn, msg); AZ(VSB_finish(msg)); if (vcl == NULL) { VCLI_SetResult(cli, CLIS_PARAM); VCLI_Out(cli, "%s", VSB_data(msg)); VSB_destroy(&msg); return; } VSB_destroy(&msg); vcl->loaded_name = strdup(name); XXXAN(vcl->loaded_name); VTAILQ_INIT(&vcl->director_list); VTAILQ_INIT(&vcl->ref_list); VTAILQ_INIT(&vcl->filters); vcl->temp = VCL_TEMP_INIT; if (vcl_send_event(vcl, VCL_EVENT_LOAD, &msg)) { vcl_cancel_load(vcl, cli, msg, name, "initialization"); return; } VSB_destroy(&msg); if (vcl_set_state(vcl, state, &msg)) { assert(*state == '1'); vcl_cancel_load(vcl, cli, msg, name, "warmup"); return; } if (msg) VSB_destroy(&msg); VCLI_Out(cli, "Loaded \"%s\" as \"%s\"", fn , name); VTAILQ_INSERT_TAIL(&vcl_head, vcl, list); VSC_C_main->n_vcl++; VSC_C_main->n_vcl_avail++; } /*--------------------------------------------------------------------*/ void VCL_Poll(void) { struct vsb *nomsg = NULL; struct vcl *vcl, *vcl2; ASSERT_CLI(); ASSERT_VCL_ACTIVE(); VTAILQ_FOREACH_SAFE(vcl, &vcl_head, list, vcl2) { if (vcl->temp == VCL_TEMP_BUSY || vcl->temp == VCL_TEMP_COOLING) AZ(vcl_set_state(vcl, "0", &nomsg)); AZ(nomsg); if (vcl->discard && vcl->temp == VCL_TEMP_COLD) { AZ(vcl->busy); assert(vcl != vcl_active); assert(VTAILQ_EMPTY(&vcl->ref_list)); VTAILQ_REMOVE(&vcl_head, vcl, list); AZ(vcl_send_event(vcl, VCL_EVENT_DISCARD, &nomsg)); AZ(nomsg); vcl_KillBackends(vcl); free(vcl->loaded_name); VCL_Close(&vcl); VSC_C_main->n_vcl--; VSC_C_main->n_vcl_discard--; } } } /*--------------------------------------------------------------------*/ static void v_matchproto_(cli_func_t) vcl_cli_list(struct cli *cli, const char * const *av, void *priv) { struct vcl *vcl; const char *flg; struct vte *vte; /* NB: Shall generate same output as mcf_vcl_list() */ (void)av; (void)priv; ASSERT_CLI(); ASSERT_VCL_ACTIVE(); vte = VTE_new(7, 80); AN(vte); VTAILQ_FOREACH(vcl, &vcl_head, list) { if (vcl == vcl_active) 
{ flg = "active"; } else if (vcl->discard) { flg = "discarded"; } else flg = "available"; VTE_printf(vte, "%s\t%s\t%s\t\v%u\t%s", flg, vcl->state, vcl->temp->name, vcl->busy, vcl->loaded_name); if (vcl->label != NULL) { VTE_printf(vte, "\t->\t%s", vcl->label->loaded_name); if (vcl->nrefs) VTE_printf(vte, " (%d return(vcl)%s)", vcl->nrefs, vcl->nrefs > 1 ? "'s" : ""); } else if (vcl->nlabels > 0) { VTE_printf(vte, "\t<-\t(%d label%s)", vcl->nlabels, vcl->nlabels > 1 ? "s" : ""); } VTE_cat(vte, "\n"); } AZ(VTE_finish(vte)); AZ(VTE_format(vte, VCLI_VTE_format, cli)); VTE_destroy(&vte); } static void v_matchproto_(cli_func_t) vcl_cli_list_json(struct cli *cli, const char * const *av, void *priv) { struct vcl *vcl; (void)priv; ASSERT_CLI(); ASSERT_VCL_ACTIVE(); VCLI_JSON_begin(cli, 2, av); VCLI_Out(cli, ",\n"); VTAILQ_FOREACH(vcl, &vcl_head, list) { VCLI_Out(cli, "{\n"); VSB_indent(cli->sb, 2); VCLI_Out(cli, "\"status\": "); if (vcl == vcl_active) { VCLI_Out(cli, "\"active\",\n"); } else if (vcl->discard) { VCLI_Out(cli, "\"discarded\",\n"); } else VCLI_Out(cli, "\"available\",\n"); VCLI_Out(cli, "\"state\": \"%s\",\n", vcl->state); VCLI_Out(cli, "\"temperature\": \"%s\",\n", vcl->temp->name); VCLI_Out(cli, "\"busy\": %u,\n", vcl->busy); VCLI_Out(cli, "\"name\": \"%s\"", vcl->loaded_name); if (vcl->label != NULL) { VCLI_Out(cli, ",\n"); VCLI_Out(cli, "\"label\": {\n"); VSB_indent(cli->sb, 2); VCLI_Out(cli, "\"name\": \"%s\"", vcl->label->loaded_name); if (vcl->nrefs) VCLI_Out(cli, ",\n\"refs\": %d", vcl->nrefs); VCLI_Out(cli, "\n"); VCLI_Out(cli, "}"); VSB_indent(cli->sb, -2); } else if (vcl->nlabels > 0) { VCLI_Out(cli, ",\n"); VCLI_Out(cli, "\"labels\": %d", vcl->nlabels); } VSB_indent(cli->sb, -2); VCLI_Out(cli, "\n}"); if (VTAILQ_NEXT(vcl, list) != NULL) VCLI_Out(cli, ",\n"); } VCLI_JSON_end(cli); } static void v_matchproto_(cli_func_t) vcl_cli_load(struct cli *cli, const char * const *av, void *priv) { AZ(priv); ASSERT_CLI(); // XXX move back code from vcl_load? 
vcl_load(cli, av[2], av[3], av[4]); } static void v_matchproto_(cli_func_t) vcl_cli_state(struct cli *cli, const char * const *av, void *priv) { struct vcl *vcl; struct vsb *msg = NULL; AZ(priv); ASSERT_CLI(); ASSERT_VCL_ACTIVE(); AN(av[2]); AN(av[3]); vcl = vcl_find(av[2]); AN(vcl); if (vcl_set_state(vcl, av[3], &msg)) { CHECK_OBJ_NOTNULL(msg, VSB_MAGIC); VCLI_SetResult(cli, CLIS_CANT); VCLI_Out(cli, "Failed ", vcl->loaded_name, av[3] + 1); if (VSB_len(msg)) VCLI_Out(cli, "\nMessage:\n\t%s", VSB_data(msg)); } if (msg) VSB_destroy(&msg); } static void v_matchproto_(cli_func_t) vcl_cli_discard(struct cli *cli, const char * const *av, void *priv) { struct vcl *vcl; ASSERT_CLI(); ASSERT_VCL_ACTIVE(); (void)cli; AZ(priv); vcl = vcl_find(av[2]); CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); // MGT ensures this Lck_Lock(&vcl_mtx); assert (vcl != vcl_active); // MGT ensures this AZ(vcl->nlabels); // MGT ensures this VSC_C_main->n_vcl_discard++; VSC_C_main->n_vcl_avail--; vcl->discard = 1; if (vcl->label != NULL) { AZ(strcmp(vcl->state, VCL_TEMP_LABEL->name)); vcl->label->nlabels--; vcl->label= NULL; } Lck_Unlock(&vcl_mtx); if (!strcmp(vcl->state, VCL_TEMP_LABEL->name)) { VTAILQ_REMOVE(&vcl_head, vcl, list); free(vcl->loaded_name); FREE_OBJ(vcl); } else if (vcl->temp == VCL_TEMP_COLD) { VCL_Poll(); } } static void v_matchproto_(cli_func_t) vcl_cli_label(struct cli *cli, const char * const *av, void *priv) { struct vcl *lbl; struct vcl *vcl; ASSERT_CLI(); ASSERT_VCL_ACTIVE(); (void)cli; (void)priv; vcl = vcl_find(av[3]); AN(vcl); // MGT ensures this lbl = vcl_find(av[2]); if (lbl == NULL) { ALLOC_OBJ(lbl, VCL_MAGIC); AN(lbl); bprintf(lbl->state, "%s", VCL_TEMP_LABEL->name); lbl->temp = VCL_TEMP_WARM; REPLACE(lbl->loaded_name, av[2]); VTAILQ_INSERT_TAIL(&vcl_head, lbl, list); } if (lbl->label != NULL) lbl->label->nlabels--; lbl->label = vcl; vcl->nlabels++; } static void v_matchproto_(cli_func_t) vcl_cli_use(struct cli *cli, const char * const *av, void *priv) { struct vcl *vcl; ASSERT_CLI(); ASSERT_VCL_ACTIVE(); AN(cli); AZ(priv); vcl = vcl_find(av[2]); AN(vcl); // MGT ensures this assert(vcl->temp == VCL_TEMP_WARM); // MGT ensures this Lck_Lock(&vcl_mtx); vcl_active = vcl; Lck_Unlock(&vcl_mtx); } static void v_matchproto_(cli_func_t) vcl_cli_show(struct cli *cli, const char * const *av, void *priv) { struct vcl *vcl; int verbose = 0; int i = 2; unsigned u; ASSERT_CLI(); ASSERT_VCL_ACTIVE(); AZ(priv); if (av[i] != NULL && !strcmp(av[i], "-v")) { verbose = 1; i++; } if (av[i] == NULL) { vcl = vcl_active; AN(vcl); } else { vcl = vcl_find(av[i]); i++; } if (av[i] != NULL) { VCLI_Out(cli, "Too many parameters: '%s'", av[i]); VCLI_SetResult(cli, CLIS_PARAM); return; } if (vcl == NULL) { VCLI_Out(cli, "No VCL named '%s'", av[i - 1]); VCLI_SetResult(cli, CLIS_PARAM); return; } CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); if (vcl->label) { vcl = vcl->label; CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); AZ(vcl->label); } CHECK_OBJ_NOTNULL(vcl->conf, VCL_CONF_MAGIC); if (verbose) { for (u = 0; u < vcl->conf->nsrc; u++) VCLI_Out(cli, "// VCL.SHOW %u %zd %s\n%s\n", u, strlen(vcl->conf->srcbody[u]), vcl->conf->srcname[u], vcl->conf->srcbody[u]); } else { VCLI_Out(cli, "%s", vcl->conf->srcbody[0]); } } /*--------------------------------------------------------------------*/ static struct cli_proto vcl_cmds[] = { { CLICMD_VCL_LOAD, "", vcl_cli_load }, { CLICMD_VCL_LIST, "", vcl_cli_list, vcl_cli_list_json }, { CLICMD_VCL_STATE, "", vcl_cli_state }, { CLICMD_VCL_DISCARD, "", vcl_cli_discard }, { CLICMD_VCL_USE, "", vcl_cli_use }, { 
CLICMD_VCL_SHOW, "", vcl_cli_show }, { CLICMD_VCL_LABEL, "", vcl_cli_label }, { NULL } }; void VCL_Init(void) { assert(cache_param->workspace_client > 0); WS_Init(&ws_cli, "cli", malloc(cache_param->workspace_client), cache_param->workspace_client); ws_snapshot_cli = WS_Snapshot(&ws_cli); CLI_AddFuncs(vcl_cmds); Lck_New(&vcl_mtx, lck_vcl); VSL_Setup(&vsl_cli, NULL, 0); } varnish-7.5.0/bin/varnishd/cache/cache_vcl.h000066400000000000000000000056061457605730600207320ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2016 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * NB: This is a private .h file for cache_vcl*.c * NB: No other code should include this file. * */ struct vfilter; struct vcltemp; VTAILQ_HEAD(vfilter_head, vfilter); struct vcl { unsigned magic; #define VCL_MAGIC 0x214188f2 VTAILQ_ENTRY(vcl) list; void *dlh; const struct VCL_conf *conf; char state[8]; char *loaded_name; unsigned busy; unsigned discard; const struct vcltemp *temp; VTAILQ_HEAD(,vcldir) director_list; VTAILQ_HEAD(,vclref) ref_list; int nrefs; struct vcl *label; int nlabels; struct vfilter_head filters; }; struct vclref { unsigned magic; #define VCLREF_MAGIC 0x47fb6848 struct vcl *vcl; VTAILQ_ENTRY(vclref) list; char *desc; }; extern struct lock vcl_mtx; extern struct vcl *vcl_active; /* protected by vcl_mtx */ struct vcl *vcl_find(const char *); void VCL_Update(struct vcl **, struct vcl *); struct vcltemp { const char * const name; unsigned is_warm; unsigned is_cold; }; /* * NB: The COOLING temperature is neither COLD nor WARM. * And LABEL is not a temperature, it's a different kind of VCL. */ extern const struct vcltemp VCL_TEMP_INIT[1]; extern const struct vcltemp VCL_TEMP_COLD[1]; extern const struct vcltemp VCL_TEMP_WARM[1]; extern const struct vcltemp VCL_TEMP_BUSY[1]; extern const struct vcltemp VCL_TEMP_COOLING[1]; #define ASSERT_VCL_ACTIVE() \ do { \ assert(vcl_active == NULL || \ vcl_active->temp->is_warm); \ } while (0) varnish-7.5.0/bin/varnishd/cache/cache_vgz.h000066400000000000000000000046721457605730600207560ustar00rootroot00000000000000/*- * Copyright (c) 2011-2015 Varnish Software AS * All rights reserved. 
* * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #ifdef CACHE_VGZ_H_INCLUDED # error "Multiple includes of cache/cache_vgz.h" #endif #define CACHE_VGZ_H_INCLUDED struct vgz; enum vgzret_e { VGZ_ERROR = -1, VGZ_OK = 0, VGZ_END = 1, VGZ_STUCK = 2, }; enum vgz_flag { VGZ_NORMAL, VGZ_ALIGN, VGZ_RESET, VGZ_FINISH }; // struct vgz *VGZ_NewUngzip(struct vsl_log *vsl, const char *id); struct vgz *VGZ_NewGzip(struct vsl_log *vsl, const char *id); void VGZ_Ibuf(struct vgz *, const void *, ssize_t len); int VGZ_IbufEmpty(const struct vgz *vg); void VGZ_Obuf(struct vgz *, void *, ssize_t len); int VGZ_ObufFull(const struct vgz *vg); enum vgzret_e VGZ_Gzip(struct vgz *, const void **, ssize_t *len, enum vgz_flag); // enum vgzret_e VGZ_Gunzip(struct vgz *, const void **, ssize_t *len); enum vgzret_e VGZ_Destroy(struct vgz **); enum vgz_ua_e { VUA_UPDATE, // Update start/stop/last bits if changed VUA_END_GZIP, // Record uncompressed size VUA_END_GUNZIP, // Record uncompressed size }; void VGZ_UpdateObj(const struct vfp_ctx *, struct vgz*, enum vgz_ua_e); varnish-7.5.0/bin/varnishd/cache/cache_vpi.c000066400000000000000000000157551457605730600207450ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2019 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include #include "cache_varnishd.h" #include "vcl.h" #include "vbm.h" #include "vcc_interface.h" #include "cache_vcl.h" /*-------------------------------------------------------------------- * Private & exclusive interfaces between VCC and varnishd */ const size_t vpi_wrk_len = sizeof(struct wrk_vpi); void VPI_wrk_init(struct worker *wrk, void *p, size_t spc) { struct wrk_vpi *vpi = p; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AN(vpi); assert(spc >= sizeof *vpi); INIT_OBJ(vpi, WRK_VPI_MAGIC); wrk->vpi = vpi; } void VPI_trace(VRT_CTX, unsigned u) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->vcl, VCL_MAGIC); CHECK_OBJ_NOTNULL(ctx->vcl->conf, VCL_CONF_MAGIC); assert(u < ctx->vcl->conf->nref); if (ctx->vsl != NULL) VSLb(ctx->vsl, SLT_VCL_trace, "%s %u %u.%u.%u", ctx->vcl->loaded_name, u, ctx->vcl->conf->ref[u].source, ctx->vcl->conf->ref[u].line, ctx->vcl->conf->ref[u].pos); else VSL(SLT_VCL_trace, NO_VXID, "%s %u %u.%u.%u", ctx->vcl->loaded_name, u, ctx->vcl->conf->ref[u].source, ctx->vcl->conf->ref[u].line, ctx->vcl->conf->ref[u].pos); } static void vpi_ref_panic(struct vsb *vsb, unsigned n, const struct vcl *vcl) { const struct VCL_conf *conf = NULL; const struct vpi_ref *ref; const char *p, *src = NULL; const int lim = 40; const char *abbstr = "[...]"; char buf[lim + sizeof(abbstr)]; int w = 0; AN(vsb); if (vcl != NULL) conf = vcl->conf; if (conf != NULL && conf->magic != VCL_CONF_MAGIC) conf = NULL; if (conf == NULL) { VSB_printf(vsb, "ref = %u, nref = ?,\n", n); return; } if (n >= conf->nref) { VSB_printf(vsb, "ref = %u *invalid*, nref = %u\n", n, conf->nref); return; } VSB_printf(vsb, "ref = %u,\n", n); ref = &conf->ref[n]; if (PAN_dump_struct(vsb, ref, VPI_REF_MAGIC, "vpi_ref")) return; if (ref->source < conf->nsrc) { VSB_printf(vsb, "source = %u (\"%s\"),\n", ref->source, conf->srcname[ref->source]); src = conf->srcbody[ref->source]; } else { VSB_printf(vsb, "source = %u *invalid*,\n", ref->source); } if (src != NULL) { w = strlen(src); assert(w > 0); if (ref->offset >= (unsigned)w) src = NULL; } if (src != NULL) { src += ref->offset; p = strchr(src, '\n'); if (p != NULL) w = p - src; else w -= ref->offset; if (w > lim) { w = snprintf(buf, sizeof buf, "%.*s%s", lim, src, abbstr); src = buf; } } VSB_printf(vsb, "offset = %u,\n", ref->offset); VSB_printf(vsb, "line = %u,\n", ref->line); VSB_printf(vsb, "pos = %u,\n", ref->pos); if (src != NULL) { VSB_cat(vsb, "src = "); VSB_quote(vsb, src, w, VSB_QUOTE_CSTR); VSB_putc(vsb, '\n'); } else { VSB_printf(vsb, "token = \"%s\"\n", ref->token); } VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } void VPI_Panic(struct vsb *vsb, const struct wrk_vpi *vpi, const struct vcl *vcl) { const char *hand; AN(vsb); if (PAN_dump_struct(vsb, vpi, WRK_VPI_MAGIC, "vpi")) return; hand = VCL_Return_Name(vpi->handling); if (vpi->handling == 0) hand = "none"; else if (hand == NULL) hand = "*invalid*"; VSB_printf(vsb, "handling (VCL::return) = 0x%x (%s),\n", vpi->handling, hand); vpi_ref_panic(vsb, vpi->ref, vcl); 
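	/* Close the "vpi" struct block opened by PAN_dump_struct() at the
	 * top of this function. */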
VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /* * After vcl_fini {} == VGC_function_vcl_fini() is called from VGC_Discard(), * handling must either be OK from VCL "return (ok)" or FAIL from VRT_fail(). * * replace OK with 0 for _fini callbacks because that handling has meaning only * when returning from VCL subs */ void VPI_vcl_fini(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(ctx->vpi); if (ctx->vpi->handling == VCL_RET_FAIL) return; assert(ctx->vpi->handling == VCL_RET_OK); ctx->vpi->handling = 0; } VCL_VCL VPI_vcl_get(VRT_CTX, const char *name) { VCL_VCL vcl; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); vcl = vcl_find(name); AN(vcl); Lck_Lock(&vcl_mtx); vcl->nrefs++; Lck_Unlock(&vcl_mtx); return (vcl); } void VPI_vcl_rel(VRT_CTX, VCL_VCL vcl) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(vcl); Lck_Lock(&vcl_mtx); vcl->nrefs--; Lck_Unlock(&vcl_mtx); } void VPI_vcl_select(VRT_CTX, VCL_VCL vcl) { struct req *req = ctx->req; CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->top, REQTOP_MAGIC); if ((IS_TOPREQ(req) && req->top->vcl0 != NULL) || req->restarts > 0) return; // Illegal, req-FSM will fail this later. if (! IS_TOPREQ(req)) assert(req->vcl == req->top->vcl0); Req_Rollback(ctx); if (IS_TOPREQ(req)) { AN(req->top); AZ(req->top->vcl0); req->top->vcl0 = req->vcl; req->vcl = NULL; } VCL_Update(&req->vcl, vcl); VSLb(ctx->req->vsl, SLT_VCL_use, "%s via %s", req->vcl->loaded_name, vcl->loaded_name); } void v_noreturn_ VPI_Fail(const char *func, const char *file, int line, const char *cond) { VAS_Fail(func, file, line, cond, VAS_ASSERT); } enum vcl_func_fail_e VPI_Call_Check(VRT_CTX, const struct VCL_conf *conf, unsigned methods, unsigned n) { struct vbitmap *vbm; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->vcl, VCL_MAGIC); assert(conf == ctx->vcl->conf); vbm = ctx->called; AN(vbm); if ((methods & ctx->method) == 0) return (VSUB_E_METHOD); if (vbit_test(vbm, n)) return (VSUB_E_RECURSE); return (VSUB_E_OK); } void VPI_Call_Begin(VRT_CTX, unsigned n) { struct vbitmap *vbm; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); vbm = ctx->called; AN(vbm); vbit_set(vbm, n); } void VPI_Call_End(VRT_CTX, unsigned n) { struct vbitmap *vbm; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); vbm = ctx->called; AN(vbm); vbit_clr(vbm, n); } varnish-7.5.0/bin/varnishd/cache/cache_vrt.c000066400000000000000000000614241457605730600207540ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Runtime support for compiled VCL programs */ #include "config.h" #include #include "cache_varnishd.h" #include "cache_objhead.h" #include "vav.h" #include "vcl.h" #include "vct.h" #include "venc.h" #include "vend.h" #include "vrt_obj.h" #include "vsa.h" #include "vsha256.h" #include "vtcp.h" #include "vtim.h" #include "vcc_interface.h" #include "common/heritage.h" #include "common/vsmw.h" #include "proxy/cache_proxy.h" // NOT using TOSTRANDS() to create a NULL pointer element despite n == 0 const struct strands *vrt_null_strands = &(struct strands){ .n = 0, .p = (const char *[1]){NULL} }; const struct vrt_blob *vrt_null_blob = &(struct vrt_blob){ .type = VRT_NULL_BLOB_TYPE, .len = 0, .blob = "\0" }; /*--------------------------------------------------------------------*/ VCL_VOID VRT_synth(VRT_CTX, VCL_INT code, VCL_STRING reason) { const char *ret; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); assert(ctx->req != NULL || ctx->bo != NULL); ret = ctx->req == NULL ? "error" : "synth"; if (code < 0) { VRT_fail(ctx, "return(%s()) status code (%jd) is negative", ret, (intmax_t)code); return; } if (code > 65535) { VRT_fail(ctx, "return(%s()) status code (%jd) > 65535", ret, (intmax_t)code); return; } if ((code % 1000) < 100) { VRT_fail(ctx, "illegal return(%s()) status code (%jd) (..0##)", ret, (intmax_t)code); return; } if (reason && !WS_Allocated(ctx->ws, reason, -1)) { reason = WS_Copy(ctx->ws, reason, -1); if (!reason) { VRT_fail(ctx, "Workspace overflow"); return; } } if (ctx->req == NULL) { CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); ctx->bo->err_code = (uint16_t)code; ctx->bo->err_reason = reason ? reason : http_Status2Reason(ctx->bo->err_code % 1000, NULL); return; } ctx->req->err_code = (uint16_t)code; ctx->req->err_reason = reason ? reason : http_Status2Reason(ctx->req->err_code % 1000, NULL); } /*--------------------------------------------------------------------*/ void VPI_acl_log(VRT_CTX, const char *msg) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(msg); if (ctx->vsl != NULL) VSLbs(ctx->vsl, SLT_VCL_acl, TOSTRAND(msg)); else VSL(SLT_VCL_acl, NO_VXID, "%s", msg); } int VRT_acl_match(VRT_CTX, VCL_ACL acl, VCL_IP ip) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (acl == NULL || ip == NULL) { VRT_fail(ctx, "Cannot match a null %s", acl == NULL ? 
"ACL" : "IP address"); return (0); } CHECK_OBJ(acl, VRT_ACL_MAGIC); assert(VSA_Sane(ip)); return (acl->match(ctx, ip)); } static int acl_tbl_cmp(int fam, const uint8_t *key, const uint8_t *b) { int rv; rv = fam - (int)b[3]; if (rv == 0 && b[1] > 0) rv = memcmp(key, b + 4, b[1]); if (rv == 0 && b[2]) rv = (int)(key[b[1]] & b[2]) - (int)b[4 + b[1]]; return (rv); } static const uint8_t * bbsearch(int fam, const uint8_t *key, const uint8_t *base0, size_t nmemb, size_t size) { const uint8_t *base = base0; size_t lim; int cmp; const uint8_t *p; for (lim = nmemb; lim != 0; lim >>= 1) { p = base + (lim >> 1) * size; cmp = acl_tbl_cmp(fam, key, p); if (cmp == 0) return (p); if (cmp > 0) { /* key > p: move right */ base = p + size; lim--; } /* else move left */ } return (NULL); } int VPI_acl_table(VRT_CTX, VCL_IP p, unsigned n, unsigned m, const uint8_t *tbl, const char * const *str, const char *fin) { int fam; const uint8_t *key; const uint8_t *ptr; size_t sz; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(p); AN(n); assert(m == 20); AN(tbl); AN(fin); fam = VRT_VSA_GetPtr(ctx, p, &key); ptr = bbsearch(fam, key, tbl, n, m); if (ptr != NULL) { sz = ptr - tbl; AZ(sz % m); sz /= m; if (str != NULL) VPI_acl_log(ctx, str[sz]); return (*ptr); } if (str != NULL) VPI_acl_log(ctx, fin); return (0); } /*--------------------------------------------------------------------*/ VCL_VOID VRT_hit_for_pass(VRT_CTX, VCL_DURATION d) { struct objcore *oc; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (ctx->bo == NULL) { VSLb(ctx->vsl, SLT_Error, "Note: Ignoring DURATION argument to return(pass);"); return; } CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); oc = ctx->bo->fetch_objcore; oc->ttl = d; oc->grace = 0.0; oc->keep = 0.0; VSLb(ctx->vsl, SLT_TTL, "HFP %.0f %.0f %.0f %.0f uncacheable", oc->ttl, oc->grace, oc->keep, oc->t_origin); } /*--------------------------------------------------------------------*/ VCL_HTTP VRT_selecthttp(VRT_CTX, enum gethdr_e where) { VCL_HTTP hp; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); switch (where) { case HDR_REQ: hp = ctx->http_req; break; case HDR_REQ_TOP: hp = ctx->http_req_top; break; case HDR_BEREQ: hp = ctx->http_bereq; break; case HDR_BERESP: hp = ctx->http_beresp; break; case HDR_RESP: hp = ctx->http_resp; break; case HDR_OBJ: hp = NULL; break; default: WRONG("VRT_selecthttp 'where' invalid"); } return (hp); } /*--------------------------------------------------------------------*/ VCL_STRING VRT_GetHdr(VRT_CTX, VCL_HEADER hs) { VCL_HTTP hp; const char *p; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (hs->where == HDR_OBJ) { CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); CHECK_OBJ_NOTNULL(ctx->req->objcore, OBJCORE_MAGIC); return (HTTP_GetHdrPack(ctx->req->wrk, ctx->req->objcore, hs->what)); } hp = VRT_selecthttp(ctx, hs->where); CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); if (!http_GetHdr(hp, hs->what, &p)) return (NULL); return (p); } /*-------------------------------------------------------------------- * Alloc Strands with space for n elements on workspace * * Error handling is deliberately left to the caller */ struct strands * VRT_AllocStrandsWS(struct ws *ws, int n) { struct strands *s; const char **p; s = WS_Alloc(ws, sizeof *s); p = WS_Alloc(ws, n * sizeof *p); if (s == NULL || p == NULL) return (NULL); s->n = n; s->p = p; return (s); } /*-------------------------------------------------------------------- * Compare two STRANDS */ int VRT_CompareStrands(VCL_STRANDS a, VCL_STRANDS b) { const char *pa = NULL, *pb = NULL; int na = 0, nb = 0; while (1) { if (pa != NULL && *pa == '\0') pa = NULL; if (pb != 
NULL && *pb == '\0') pb = NULL; if (pa == NULL && na < a->n) pa = a->p[na++]; else if (pb == NULL && nb < b->n) pb = b->p[nb++]; else if (pa == NULL && pb == NULL) return (0); else if (pa == NULL) return (-1); else if (pb == NULL) return (1); else if (*pa == '\0') pa = NULL; else if (*pb == '\0') pb = NULL; else if (*pa != *pb) return (*pa - *pb); else { pa++; pb++; } } } /*-------------------------------------------------------------------- * STRANDS to BOOL */ VCL_BOOL VRT_Strands2Bool(VCL_STRANDS s) { int i; AN(s); for (i = 0; i < s->n; i++) if (s->p[i] != NULL) return (1); return (0); } /*-------------------------------------------------------------------- * Hash a STRANDS */ uint32_t VRT_HashStrands32(VCL_STRANDS s) { struct VSHA256Context sha_ctx; unsigned char sha256[VSHA256_LEN]; const char *p; int i; AN(s); VSHA256_Init(&sha_ctx); for (i = 0; i < s->n; i++) { p = s->p[i]; if (p != NULL && *p != '\0') VSHA256_Update(&sha_ctx, p, strlen(p)); } VSHA256_Final(sha256, &sha_ctx); /* NB: for some reason vmod_director's shard director specifically * relied on little-endian decoding of the last 4 octets. In order * to maintain a stable hash function to share across consumers we * need to stick to that. */ return (vle32dec(sha256 + VSHA256_LEN - 4)); } /*-------------------------------------------------------------------- * Collapse STRANDS into the space provided, or return NULL */ char * VRT_Strands(char *d, size_t dl, VCL_STRANDS s) { char *b; const char *e; unsigned x; AN(d); AN(s); b = d; e = b + dl; for (int i = 0; i < s->n; i++) if (s->p[i] != NULL && *s->p[i] != '\0') { x = strlen(s->p[i]); if (b + x >= e) return (NULL); memcpy(b, s->p[i], x); b += x; } assert(b < e); *b++ = '\0'; return (b); } /*-------------------------------------------------------------------- * Copy and merge STRANDS into a workspace. 
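 *
 * If the result already lives on the workspace (the optional head string
 * 'h' when all strands are empty, or a single non-empty strand when 'h'
 * is NULL), that pointer is returned directly; otherwise the pieces are
 * concatenated into a fresh workspace allocation.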
*/ VCL_STRING VRT_StrandsWS(struct ws *ws, const char *h, VCL_STRANDS s) { const char *q = NULL; struct vsb vsb[1]; int i; WS_Assert(ws); AN(s); for (i = 0; i < s->n; i++) { if (s->p[i] != NULL && *s->p[i] != '\0') { q = s->p[i]; break; } } if (q == NULL) { if (h == NULL) return (""); if (WS_Allocated(ws, h, -1)) return (h); } else if (h == NULL && WS_Allocated(ws, q, -1)) { for (i++; i < s->n; i++) if (s->p[i] != NULL && *s->p[i] != '\0') break; if (i == s->n) return (q); } WS_VSB_new(vsb, ws); if (h != NULL) VSB_cat(vsb, h); for (i = 0; i < s->n; i++) { if (s->p[i] != NULL && *s->p[i] != '\0') VSB_cat(vsb, s->p[i]); } return (WS_VSB_finish(vsb, ws, NULL)); } /*-------------------------------------------------------------------- * Copy and merge STRANDS on the current workspace */ VCL_STRING VRT_STRANDS_string(VRT_CTX, VCL_STRANDS s) { const char *b; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->ws, WS_MAGIC); b = VRT_StrandsWS(ctx->ws, NULL, s); if (b == NULL) VRT_fail(ctx, "Workspace overflow"); return (b); } /*-------------------------------------------------------------------- * upper/lower-case STRANDS (onto workspace) */ #include VCL_STRING VRT_UpperLowerStrands(VRT_CTX, VCL_STRANDS s, int up) { unsigned u; char *b, *e, *r; const char *p, *q = NULL; int i, copy = 0; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->ws, WS_MAGIC); AN(s); u = WS_ReserveAll(ctx->ws); r = b = WS_Reservation(ctx->ws); e = b + u; for (i = 0; i < s->n; i++) { if (s->p[i] == NULL || s->p[i][0] == '\0') continue; if (q != NULL) copy = 1; for(p = q = s->p[i]; *p != '\0'; p++) { if ((up && vct_islower(*p)) || (!up && vct_isupper(*p))) { *b++ = *p ^ 0x20; copy = 1; } else if (b < e) { *b++ = *p; } if (copy && b == e) break; } if (copy && b == e) { WS_Release(ctx->ws, 0); VRT_fail(ctx, "Workspace overflow"); return (NULL); } } assert(b <= e); if (!copy) { WS_Release(ctx->ws, 0); return (q); } assert(b < e); *b++ = '\0'; assert(b <= e); WS_ReleaseP(ctx->ws, b); return (r); } // rfc7230,l,1243,1244 // ASCII VCHAR + TAB + obs-text (0x80-ff) static inline VCL_BOOL validhdr(const char *p) { AN(p); for(;*p != '\0'; p++) if (! vct_ishdrval(*p)) return (0); return (1); } /*--------------------------------------------------------------------*/ VCL_BOOL VRT_ValidHdr(VRT_CTX, VCL_STRANDS s) { int i; (void) ctx; for (i = 0; i < s->n; i++) { if (s->p[i] == NULL || s->p[i][0] == '\0') continue; if (! validhdr(s->p[i])) return (0); } return (1); } /*--------------------------------------------------------------------*/ VCL_VOID VRT_UnsetHdr(VRT_CTX, VCL_HEADER hs) { VCL_HTTP hp; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(hs); AN(hs->what); hp = VRT_selecthttp(ctx, hs->where); CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); http_Unset(hp, hs->what); } VCL_VOID VRT_SetHdr(VRT_CTX, VCL_HEADER hs, const char *pfx, VCL_STRANDS s) { VCL_HTTP hp; unsigned u, l, pl; char *p, *b; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(hs); AN(hs->what); hp = VRT_selecthttp(ctx, hs->where); CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); u = WS_ReserveAll(hp->ws); pl = (pfx == NULL) ? 
0 : strlen(pfx); l = hs->what[0] + 1 + pl; if (u <= l) { WS_Release(hp->ws, 0); WS_MarkOverflow(hp->ws); VSLbs(ctx->vsl, SLT_LostHeader, TOSTRAND(hs->what + 1)); return; } b = WS_Reservation(hp->ws); if (s != NULL) { p = VRT_Strands(b + l, u - l, s); if (p == NULL) { WS_Release(hp->ws, 0); WS_MarkOverflow(hp->ws); VSLbs(ctx->vsl, SLT_LostHeader, TOSTRAND(hs->what + 1)); return; } } else { b[l] = '\0'; } p = b; memcpy(p, hs->what + 1, hs->what[0]); p += hs->what[0]; *p++ = ' '; if (pfx != NULL) memcpy(p, pfx, pl); p += pl; if (FEATURE(FEATURE_VALIDATE_HEADERS) && !validhdr(b)) { VRT_fail(ctx, "Bad header %s", b); WS_Release(hp->ws, 0); return; } WS_ReleaseP(hp->ws, strchr(p, '\0') + 1); http_Unset(hp, hs->what); http_SetHeader(hp, b); } /*--------------------------------------------------------------------*/ VCL_VOID VRT_handling(VRT_CTX, unsigned hand) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); assert(hand != VCL_RET_FAIL); AN(ctx->vpi); AZ(ctx->vpi->handling); assert(hand > 0); assert(hand < VCL_RET_MAX); ctx->vpi->handling = hand; } unsigned VRT_handled(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(ctx->vpi); return (ctx->vpi->handling); } /* the trace value is cached in the VPI for efficiency */ VCL_VOID VRT_trace(VRT_CTX, VCL_BOOL a) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(ctx->vpi); ctx->vpi->trace = a; } /*--------------------------------------------------------------------*/ VCL_VOID VRT_fail(VRT_CTX, const char *fmt, ...) { va_list ap; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); assert(ctx->vsl != NULL || ctx->msg != NULL); AN(ctx->vpi); if (ctx->vpi->handling == VCL_RET_FAIL) return; AZ(ctx->vpi->handling); AN(fmt); AZ(strchr(fmt, '\n')); va_start(ap, fmt); if (ctx->vsl != NULL) { VSLbv(ctx->vsl, SLT_VCL_Error, fmt, ap); } else { AN(ctx->msg); VSB_vprintf(ctx->msg, fmt, ap); VSB_putc(ctx->msg, '\n'); } va_end(ap); ctx->vpi->handling = VCL_RET_FAIL; } /*-------------------------------------------------------------------- * Feed data into the hash calculation */ VCL_VOID VRT_hashdata(VRT_CTX, VCL_STRANDS s) { int i; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); AN(ctx->specific); AN(s); for (i = 0; i < s->n; i++) HSH_AddString(ctx->req, ctx->specific, s->p[i]); /* * Add a 'field-separator' to make it more difficult to * manipulate the hash. 
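 *
 * The NULL passed to HSH_AddString() below is what emits that
 * separator, so feeding the same bytes through two separate calls
 * (e.g. two hash_data() statements in VCL) does not produce the same
 * hash input as one combined call.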
*/ HSH_AddString(ctx->req, ctx->specific, NULL); } /*--------------------------------------------------------------------*/ VCL_TIME VRT_r_now(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); return (ctx->now); } /*--------------------------------------------------------------------*/ VCL_STRING v_matchproto_() VRT_IP_string(VRT_CTX, VCL_IP ip) { char buf[VTCP_ADDRBUFSIZE]; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (ip == NULL) { VRT_fail(ctx, "%s: Illegal IP", __func__); return (NULL); } VTCP_name(ip, buf, sizeof buf, NULL, 0); return (WS_Copy(ctx->ws, buf, -1)); } int VRT_INT_is_valid(VCL_INT arg) { return (arg >= VRT_INTEGER_MIN && arg <= VRT_INTEGER_MAX); } VCL_STRING v_matchproto_() VRT_INT_string(VRT_CTX, VCL_INT num) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (!VRT_INT_is_valid(num)) VRT_fail(ctx, "INT overflow converting to string (0x%jx)", (intmax_t)num); return (WS_Printf(ctx->ws, "%jd", (intmax_t)num)); } int VRT_REAL_is_valid(VCL_REAL arg) { return (!isnan(arg) && arg >= VRT_DECIMAL_MIN && arg <= VRT_DECIMAL_MAX); } VCL_STRING v_matchproto_() VRT_REAL_string(VRT_CTX, VCL_REAL num) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (!VRT_REAL_is_valid(num)) VRT_fail(ctx, "REAL overflow converting to string (%e)", num); return (WS_Printf(ctx->ws, "%.3f", num)); } VCL_STRING v_matchproto_() VRT_TIME_string(VRT_CTX, VCL_TIME t) { char *p; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); p = WS_Alloc(ctx->ws, VTIM_FORMAT_SIZE); if (p == NULL) { VRT_fail(ctx, "Workspace overflow"); return (NULL); } VTIM_format(t, p); if (*p == '\0') { VRT_fail(ctx, "Unformatable VCL_TIME"); return (NULL); } return (p); } VCL_STRING v_matchproto_() VRT_BACKEND_string(VCL_BACKEND d) { if (d == NULL) return (NULL); CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); return (d->vcl_name); } VCL_STRING v_matchproto_() VRT_BOOL_string(VCL_BOOL val) { return (val ? 
"true" : "false"); } VCL_STRING v_matchproto_() VRT_BLOB_string(VRT_CTX, VCL_BLOB val) { struct vsb vsb[1]; const char *s; if (val == NULL) return (NULL); WS_VSB_new(vsb, ctx->ws); VSB_putc(vsb, ':'); VENC_Encode_Base64(vsb, val->blob, val->len); VSB_putc(vsb, ':'); s = WS_VSB_finish(vsb, ctx->ws, NULL); return (s); } /*--------------------------------------------------------------------*/ VCL_VOID VRT_Rollback(VRT_CTX, VCL_HTTP hp) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); if (ctx->method & VCL_MET_PIPE) { VRT_fail(ctx, "Cannot rollback in vcl_pipe {}"); return; } if (hp == ctx->http_req) { CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); Req_Rollback(ctx); if (ctx->method & VCL_MET_DELIVER) XXXAZ(Resp_Setup_Deliver(ctx->req)); if (ctx->method & VCL_MET_SYNTH) Resp_Setup_Synth(ctx->req); } else if (hp == ctx->http_bereq) { Bereq_Rollback(ctx); } else WRONG("VRT_Rollback 'hp' invalid"); } /*--------------------------------------------------------------------*/ VCL_VOID VRT_synth_strands(VRT_CTX, VCL_STRANDS s) { struct vsb *vsb; int i; CAST_OBJ_NOTNULL(vsb, ctx->specific, VSB_MAGIC); AN(s); for (i = 0; i < s->n; i++) { if (s->p[i] != NULL) VSB_cat(vsb, s->p[i]); else VSB_cat(vsb, "(null)"); } } VCL_VOID VRT_synth_blob(VRT_CTX, VCL_BLOB b) { struct vsb *vsb; CAST_OBJ_NOTNULL(vsb, ctx->specific, VSB_MAGIC); if (b->len > 0 && b->blob != NULL) VSB_bcat(vsb, b->blob, b->len); } VCL_VOID VRT_synth_page(VRT_CTX, VCL_STRANDS s) { VRT_synth_strands(ctx, s); } /*--------------------------------------------------------------------*/ static VCL_STRING vrt_ban_error(VRT_CTX, VCL_STRING err) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(ctx->vsl); AN(err); VSLb(ctx->vsl, SLT_VCL_Error, "ban(): %s", err); return (err); } VCL_STRING VRT_ban_string(VRT_CTX, VCL_STRING str) { char *a1, *a2, *a3; char **av; struct ban_proto *bp; const char *err = NULL, *berr = NULL; int i; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (str == NULL) return (vrt_ban_error(ctx, "Null argument")); bp = BAN_Build(); if (bp == NULL) return (vrt_ban_error(ctx, "Out of Memory")); av = VAV_Parse(str, NULL, ARGV_NOESC); AN(av); if (av[0] != NULL) { err = av[0]; VAV_Free(av); BAN_Abandon(bp); return (vrt_ban_error(ctx, err)); } for (i = 0; ;) { a1 = av[++i]; if (a1 == NULL) { err = "No ban conditions found."; break; } a2 = av[++i]; if (a2 == NULL) { err = "Expected comparison operator."; break; } a3 = av[++i]; if (a3 == NULL) { err = "Expected second operand."; break; } berr = BAN_AddTest(bp, a1, a2, a3); if (berr != NULL) break; if (av[++i] == NULL) { berr = BAN_Commit(bp); if (berr == NULL) bp = NULL; break; } if (strcmp(av[i], "&&")) { err = WS_Printf(ctx->ws, "Expected && between " "conditions, found \"%s\"", av[i]); if (err == NULL) err = "Expected && between conditions " "(workspace overflow)"; break; } } if (berr != NULL) { AZ(err); err = WS_Copy(ctx->ws, berr, -1); if (err == NULL) err = "Unknown error (workspace overflow)"; berr = NULL; } AZ(berr); if (bp != NULL) BAN_Abandon(bp); VAV_Free(av); if (err == NULL) return (NULL); return (vrt_ban_error(ctx, err)); } VCL_BYTES VRT_CacheReqBody(VRT_CTX, VCL_BYTES maxsize) { const char * const err = "req.body can only be cached in vcl_recv{}"; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (ctx->method != VCL_MET_RECV) { if (ctx->vsl != NULL) { VSLbs(ctx->vsl, SLT_VCL_Error, TOSTRAND(err)); } else { AN(ctx->msg); VSB_printf(ctx->msg, "%s\n", err); }; return (-1); } CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); return (VRB_Cache(ctx->req, maxsize)); } 
/*-------------------------------------------------------------------- * purges */ VCL_INT VRT_purge(VRT_CTX, VCL_DURATION ttl, VCL_DURATION grace, VCL_DURATION keep) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if ((ctx->method & (VCL_MET_HIT|VCL_MET_MISS)) == 0) { VRT_fail(ctx, "purge can only happen in vcl_hit{} or vcl_miss{}"); return (0); } CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); CHECK_OBJ_NOTNULL(ctx->req->wrk, WORKER_MAGIC); return (HSH_Purge(ctx->req->wrk, ctx->req->objcore->objhead, ctx->req->t_req, ttl, grace, keep)); } /*-------------------------------------------------------------------- */ struct vsmw_cluster * v_matchproto_() VRT_VSM_Cluster_New(VRT_CTX, size_t sz) { struct vsmw_cluster *vc; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); assert(sz > 0); AN(vsc_lock); AN(vsc_unlock); AN(heritage.proc_vsmw); vsc_lock(); vc = VSMW_NewCluster(heritage.proc_vsmw, sz, "VSC_cluster"); vsc_unlock(); return (vc); } void v_matchproto_() VRT_VSM_Cluster_Destroy(VRT_CTX, struct vsmw_cluster **vsmcp) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(vsmcp); VSMW_DestroyCluster(heritage.proc_vsmw, vsmcp); } /*-------------------------------------------------------------------- * Simple stuff */ int VRT_strcmp(const char *s1, const char *s2) { if (s1 == NULL || s2 == NULL) return (1); return (strcmp(s1, s2)); } void VRT_memmove(void *dst, const void *src, unsigned len) { (void)memmove(dst, src, len); } VCL_BOOL VRT_ipcmp(VRT_CTX, VCL_IP sua1, VCL_IP sua2) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (sua1 == NULL || sua2 == NULL) { VRT_fail(ctx, "%s: Illegal IP", __func__); return (1); } return (VSA_Compare_IP(sua1, sua2)); } /* * the pointer passed as src must have at least VCL_TASK lifetime */ VCL_BLOB VRT_blob(VRT_CTX, const char *err, const void *src, size_t len, unsigned type) { struct vrt_blob *p; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->ws, WS_MAGIC); if (src == NULL || len == 0) return (vrt_null_blob); p = (void *)WS_Alloc(ctx->ws, sizeof *p); if (p == NULL) { VRT_fail(ctx, "Workspace overflow (%s)", err); return (NULL); } p->type = type; p->len = len; p->blob = src; return (p); } int VRT_VSA_GetPtr(VRT_CTX, const struct suckaddr *sua, const unsigned char ** dst) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(dst); if (sua == NULL) { VRT_fail(ctx, "%s: Illegal IP", __func__); *dst = NULL; return (-1); } return (VSA_GetPtr(sua, dst)); } void VRT_Format_Proxy(struct vsb *vsb, VCL_INT version, VCL_IP sac, VCL_IP sas, VCL_STRING auth) { VPX_Format_Proxy(vsb, (int)version, sac, sas, auth); } /* * Clone a struct vrt_endpoint in a single malloc() allocation */ //lint -e{662} Possible of out-of-bounds pointer (___ beyond end of data) //lint -e{826} Suspicious pointer-to-pointer conversion (area to o small struct vrt_endpoint * VRT_Endpoint_Clone(const struct vrt_endpoint * const vep) { size_t sz; struct vrt_endpoint *nvep; struct vrt_blob *blob = NULL; struct suckaddr *sa; size_t uds_len = 0; char *p, *e; CHECK_OBJ_NOTNULL(vep, VRT_ENDPOINT_MAGIC); sz = sizeof *nvep; if (vep->ipv4) sz += vsa_suckaddr_len; if (vep->ipv6) sz += vsa_suckaddr_len; if (vep->uds_path != NULL) { uds_len = strlen(vep->uds_path) + 1; sz += uds_len; } if (vep->preamble != NULL && vep->preamble->len) { sz += sizeof(*blob); sz += vep->preamble->len; } p = calloc(1, sz); AN(p); e = p + sz; nvep = (void*)p; p += sizeof *nvep; INIT_OBJ(nvep, VRT_ENDPOINT_MAGIC); if (vep->ipv4) { sa = (void*)p; memcpy(sa, vep->ipv4, vsa_suckaddr_len); nvep->ipv4 = sa; p += vsa_suckaddr_len; } if (vep->ipv6) { sa = (void*)p; memcpy(sa, 
vep->ipv6, vsa_suckaddr_len); nvep->ipv6 = sa; p += vsa_suckaddr_len; } if (vep->preamble != NULL && vep->preamble->len) { /* Before uds because we need p to be aligned still */ blob = (void*)p; p += sizeof(*blob); nvep->preamble = blob; memcpy(p, vep->preamble->blob, vep->preamble->len); blob->len = vep->preamble->len; blob->blob = p; p += vep->preamble->len; } if (uds_len) { memcpy(p, vep->uds_path, uds_len); nvep->uds_path = p; p += uds_len; } assert(p == e); return (nvep); } varnish-7.5.0/bin/varnishd/cache/cache_vrt_filter.c000066400000000000000000000256061457605730600223230ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2016 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* */ #include "config.h" #include #include #include #include "cache_varnishd.h" #include "cache_vcl.h" #include "vrt_obj.h" #include "vct.h" #include "cache_filter.h" /*-------------------------------------------------------------------- */ struct vfilter { unsigned magic; #define VFILTER_MAGIC 0xd40894e9 const struct vfp *vfp; const struct vdp *vdp; const char *name; int nlen; VTAILQ_ENTRY(vfilter) list; }; static struct vfilter_head vrt_filters = VTAILQ_HEAD_INITIALIZER(vrt_filters); static const char * is_dup_filter(const struct vfilter_head *head, const struct vfp * vfp, const struct vdp *vdp, const char *name) { struct vfilter *vp; VTAILQ_FOREACH(vp, head, list) { if (vfp != NULL && vp->vfp != NULL) { if (vp->vfp == vfp) return ("VFP already registered"); if (!strcasecmp(vp->name, name)) return ("VFP name already used"); } if (vdp != NULL && vp->vdp != NULL) { if (vp->vdp == vdp) return ("VDP already registered"); if (!strcasecmp(vp->name, name)) return ("VDP name already used"); } } return (NULL); } static const char * vrt_addfilter(VRT_CTX, const struct vfp *vfp, const struct vdp *vdp) { struct vfilter *vp; struct vfilter_head *hd = &vrt_filters; const char *err, *name = NULL; CHECK_OBJ_ORNULL(ctx, VRT_CTX_MAGIC); assert(vfp != NULL || vdp != NULL); assert(vfp == NULL || vfp->name != NULL); assert(vdp == NULL || vdp->name != NULL); assert(vfp == NULL || vdp == NULL || !strcasecmp(vfp->name, vdp->name)); if (vfp != NULL) name = vfp->name; else if (vdp != NULL) name = vdp->name; AN(name); err = is_dup_filter(hd, vfp, vdp, name); if (err != NULL) { if (ctx != NULL) VRT_fail(ctx, "%s: %s (global)", name, err); return (err); } if (ctx != NULL) { ASSERT_CLI(); CHECK_OBJ_NOTNULL(ctx->vcl, VCL_MAGIC); hd = &ctx->vcl->filters; err = is_dup_filter(hd, vfp, vdp, name); if (err != NULL) { VRT_fail(ctx, "%s: %s (per-vcl)", name, err); return (err); } } ALLOC_OBJ(vp, VFILTER_MAGIC); AN(vp); vp->vfp = vfp; vp->vdp = vdp; vp->name = name; vp->nlen = strlen(name); VTAILQ_INSERT_TAIL(hd, vp, list); return (err); } const char * VRT_AddFilter(VRT_CTX, const struct vfp *vfp, const struct vdp *vdp) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); return (vrt_addfilter(ctx, vfp, vdp)); } void VRT_AddVFP(VRT_CTX, const struct vfp *filter) { AZ(VRT_AddFilter(ctx, filter, NULL)); } void VRT_AddVDP(VRT_CTX, const struct vdp *filter) { AZ(VRT_AddFilter(ctx, NULL, filter)); } void VRT_RemoveFilter(VRT_CTX, const struct vfp *vfp, const struct vdp *vdp) { struct vfilter *vp; struct vfilter_head *hd; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->vcl, VCL_MAGIC); hd = &ctx->vcl->filters; assert(vfp != NULL || vdp != NULL); assert(vfp == NULL || vfp->name != NULL); assert(vdp == NULL || vdp->name != NULL); assert(vfp == NULL || vdp == NULL || !strcasecmp(vfp->name, vdp->name)); ASSERT_CLI(); VTAILQ_FOREACH(vp, hd, list) { CHECK_OBJ_NOTNULL(vp, VFILTER_MAGIC); if (vp->vfp == vfp && vp->vdp == vdp) break; } AN(vp); assert(vfp == NULL || !strcasecmp(vfp->name, vp->name)); assert(vdp == NULL || !strcasecmp(vdp->name, vp->name)); VTAILQ_REMOVE(hd, vp, list); FREE_OBJ(vp); } void VRT_RemoveVFP(VRT_CTX, const struct vfp *filter) { VRT_RemoveFilter(ctx, filter, NULL); } void VRT_RemoveVDP(VRT_CTX, const struct vdp *filter) { VRT_RemoveFilter(ctx, NULL, filter); } static const struct vfilter vfilter_error[1]; // XXX: idea(fgs): Allow filters (...) 
arguments in the list static const struct vfilter * vcl_filter_list_iter(int want_vfp, const struct vfilter_head *h1, const struct vfilter_head *h2, const char **flp) { const char *fl, *q; const struct vfilter *vp; AN(h1); AN(h2); AN(flp); fl = *flp; AN(fl); while (vct_isspace(*fl)) fl++; if (*fl == '\0') { *flp = NULL; return (NULL); } for (q = fl; *q && !vct_isspace(*q); q++) continue; *flp = q; VTAILQ_FOREACH(vp, h1, list) { if (want_vfp && vp->vfp == NULL) continue; else if (!want_vfp && vp->vdp == NULL) continue; if (vp->nlen == q - fl && !memcmp(fl, vp->name, vp->nlen)) return (vp); } VTAILQ_FOREACH(vp, h2, list) { if (want_vfp && vp->vfp == NULL) continue; else if (!want_vfp && vp->vdp == NULL) continue; if (vp->nlen == q - fl && !memcmp(fl, vp->name, vp->nlen)) return (vp); } *flp = fl; return (vfilter_error); } int VCL_StackVFP(struct vfp_ctx *vc, const struct vcl *vcl, const char *fl) { const struct vfilter *vp; AN(fl); VSLbs(vc->wrk->vsl, SLT_Filters, TOSTRAND(fl)); while (1) { vp = vcl_filter_list_iter(1, &vrt_filters, &vcl->filters, &fl); if (vp == NULL) return (0); if (vp == vfilter_error) return (VFP_Error(vc, "Filter '...%s' not found", fl)); if (VFP_Push(vc, vp->vfp) == NULL) return (-1); } } int VCL_StackVDP(struct req *req, const struct vcl *vcl, const char *fl) { const struct vfilter *vp; struct vrt_ctx ctx[1]; AN(fl); VSLbs(req->vsl, SLT_Filters, TOSTRAND(fl)); INIT_OBJ(ctx, VRT_CTX_MAGIC); VCL_Req2Ctx(ctx, req); while (1) { vp = vcl_filter_list_iter(0, &vrt_filters, &vcl->filters, &fl); if (vp == NULL) return (0); if (vp == vfilter_error) { VSLb(req->vsl, SLT_Error, "Filter '...%s' not found", fl); return (-1); } if (VDP_Push(ctx, req->vdc, req->ws, vp->vdp, NULL)) return (-1); } } void VCL_VRT_Init(void) { AZ(vrt_addfilter(NULL, &VFP_testgunzip, NULL)); AZ(vrt_addfilter(NULL, &VFP_gunzip, NULL)); AZ(vrt_addfilter(NULL, &VFP_gzip, NULL)); AZ(vrt_addfilter(NULL, &VFP_esi, NULL)); AZ(vrt_addfilter(NULL, &VFP_esi_gzip, NULL)); AZ(vrt_addfilter(NULL, NULL, &VDP_esi)); AZ(vrt_addfilter(NULL, NULL, &VDP_gunzip)); AZ(vrt_addfilter(NULL, NULL, &VDP_range)); } /*-------------------------------------------------------------------- */ typedef void filter_list_t(void *, struct vsb *vsb); static const char * filter_on_ws(struct ws *ws, filter_list_t *func, void *arg) { struct vsb vsb[1]; const char *p; AN(func); AN(arg); WS_VSB_new(vsb, ws); func(arg, vsb); p = WS_VSB_finish(vsb, ws, NULL); if (p == NULL) p = ""; return (p); } /*-------------------------------------------------------------------- */ static void v_matchproto_(filter_list_t) vbf_default_filter_list(void *arg, struct vsb *vsb) { const struct busyobj *bo; const char *p; int do_gzip, do_gunzip, is_gzip = 0, is_gunzip = 0; CAST_OBJ_NOTNULL(bo, arg, BUSYOBJ_MAGIC); do_gzip = bo->do_gzip; do_gunzip = bo->do_gunzip; /* * The VCL variables beresp.do_g[un]zip tells us how we want the * object processed before it is stored. * * The backend Content-Encoding header tells us what we are going * to receive, which we classify in the following three classes: * * "Content-Encoding: gzip" --> object is gzip'ed. * no Content-Encoding --> object is not gzip'ed. 
* anything else --> do nothing wrt gzip */ /* No body -> done */ if (bo->htc->body_status == BS_NONE || bo->htc->content_length == 0) return; if (!cache_param->http_gzip_support) do_gzip = do_gunzip = 0; if (http_GetHdr(bo->beresp, H_Content_Encoding, &p)) is_gzip = !strcasecmp(p, "gzip"); else is_gunzip = 1; /* We won't gunzip unless it is gzip'ed */ if (do_gunzip && !is_gzip) do_gunzip = 0; /* We wont gzip unless if it already is gzip'ed */ if (do_gzip && !is_gunzip) do_gzip = 0; if (do_gunzip || (is_gzip && bo->do_esi)) VSB_cat(vsb, " gunzip"); if (bo->do_esi && (do_gzip || (is_gzip && !do_gunzip))) { VSB_cat(vsb, " esi_gzip"); return; } if (bo->do_esi) { VSB_cat(vsb, " esi"); return; } if (do_gzip) VSB_cat(vsb, " gzip"); if (is_gzip && !do_gunzip) VSB_cat(vsb, " testgunzip"); } const char * VBF_Get_Filter_List(struct busyobj *bo) { CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); return (filter_on_ws(bo->ws, vbf_default_filter_list, bo)); } /*-------------------------------------------------------------------- */ static void v_matchproto_(filter_list_t) resp_default_filter_list(void *arg, struct vsb *vsb) { struct req *req; CAST_OBJ_NOTNULL(req, arg, REQ_MAGIC); if (!req->disable_esi && req->objcore != NULL && ObjHasAttr(req->wrk, req->objcore, OA_ESIDATA)) VSB_cat(vsb, " esi"); if (cache_param->http_gzip_support && req->objcore != NULL && ObjCheckFlag(req->wrk, req->objcore, OF_GZIPED) && !RFC2616_Req_Gzip(req->http)) VSB_cat(vsb, " gunzip"); if (cache_param->http_range_support && http_GetStatus(req->resp) == 200 && http_GetHdr(req->http, H_Range, NULL)) VSB_cat(vsb, " range"); } const char * resp_Get_Filter_List(struct req *req) { CHECK_OBJ_NOTNULL(req, REQ_MAGIC); return (filter_on_ws(req->ws, resp_default_filter_list, req)); } /*--------------------------------------------------------------------*/ #define FILTER_VAR(vcl, in, func, fld) \ VCL_STRING \ VRT_r_##vcl##_filters(VRT_CTX) \ { \ \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ if (ctx->in->fld != NULL) \ return(ctx->in->fld); \ return (func(ctx->in)); \ } \ \ VCL_VOID \ VRT_l_##vcl##_filters(VRT_CTX, const char *str, VCL_STRANDS s) \ { \ const char *b; \ \ (void)str; \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ b = VRT_StrandsWS(ctx->in->ws, str, s); \ if (b == NULL) \ WS_MarkOverflow(ctx->in->ws); \ else \ ctx->in->fld = b; \ } FILTER_VAR(beresp, bo, VBF_Get_Filter_List, vfp_filter_list) FILTER_VAR(resp, req, resp_Get_Filter_List, vdp_filter_list) varnish-7.5.0/bin/varnishd/cache/cache_vrt_priv.c000066400000000000000000000202421457605730600220050ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2021 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Runtime support for compiled VCL programs: private variables */ #include "config.h" #include #include "cache_varnishd.h" #include "vcl.h" #include "vcc_interface.h" enum vrt_priv_storage_e { VRT_PRIV_ST_WS = 1, VRT_PRIV_ST_HEAP }; struct vrt_priv { unsigned magic; #define VRT_PRIV_MAGIC 0x24157a52 enum vrt_priv_storage_e storage; VRBT_ENTRY(vrt_priv) entry; struct vmod_priv priv[1]; uintptr_t vmod_id; }; struct vrt_privs cli_task_privs[1]; static inline int vrt_priv_dyncmp(const struct vrt_priv *, const struct vrt_priv *); VRBT_GENERATE_INSERT_COLOR(vrt_privs, vrt_priv, entry, static) VRBT_GENERATE_FIND(vrt_privs, vrt_priv, entry, vrt_priv_dyncmp, static) VRBT_GENERATE_INSERT_FINISH(vrt_privs, vrt_priv, entry, static) VRBT_GENERATE_INSERT(vrt_privs, vrt_priv, entry, vrt_priv_dyncmp, static) VRBT_GENERATE_MINMAX(vrt_privs, vrt_priv, entry, static) VRBT_GENERATE_NEXT(vrt_privs, vrt_priv, entry, static) /*-------------------------------------------------------------------- */ void pan_privs(struct vsb *vsb, const struct vrt_privs *privs) { struct vrt_priv *vp; const struct vmod_priv *p; const struct vmod_priv_methods *m; if (privs == NULL) { VSB_cat(vsb, "privs = NULL\n"); return; } VSB_printf(vsb, "privs = %p {\n", privs); VSB_indent(vsb, 2); VRBT_FOREACH(vp, vrt_privs, privs) { if (PAN_dump_oneline(vsb, vp, VRT_PRIV_MAGIC, "priv")) continue; p = vp->priv; //lint -e{774} if (p == NULL) { // should never happen VSB_printf(vsb, "NULL vmod %jx},\n", (uintmax_t)vp->vmod_id); continue; } m = p->methods; VSB_printf(vsb, "{p %p l %ld m %p t \"%s\"} vmod %jx},\n", p->priv, p->len, m, m != NULL ? 
m->type : "", (uintmax_t)vp->vmod_id ); } VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /*-------------------------------------------------------------------- */ static void VRTPRIV_init(struct vrt_privs *privs) { VRBT_INIT(privs); } static inline int vrt_priv_dyncmp(const struct vrt_priv *vp1, const struct vrt_priv *vp2) { if (vp1->vmod_id < vp2->vmod_id) return (-1); if (vp1->vmod_id > vp2->vmod_id) return (1); return (0); } static struct vmod_priv * vrt_priv_dynamic_get(const struct vrt_privs *privs, uintptr_t vmod_id) { struct vrt_priv *vp; const struct vrt_priv needle = {.vmod_id = vmod_id}; vp = VRBT_FIND(vrt_privs, privs, &needle); if (vp == NULL) return (NULL); CHECK_OBJ(vp, VRT_PRIV_MAGIC); assert(vp->vmod_id == vmod_id); return (vp->priv); } static struct vmod_priv * vrt_priv_dynamic(struct ws *ws, struct vrt_privs *privs, uintptr_t vmod_id) { //lint --e{593} vp allocated, vp->priv returned struct vrt_priv *vp, *ovp; enum vrt_priv_storage_e storage; AN(vmod_id); if (LIKELY(WS_ReserveSize(ws, sizeof *vp) != 0)) { vp = WS_Reservation(ws); storage = VRT_PRIV_ST_WS; } else { vp = malloc(sizeof *vp); storage = VRT_PRIV_ST_HEAP; } AN(vp); INIT_OBJ(vp, VRT_PRIV_MAGIC); vp->storage = storage; vp->vmod_id = vmod_id; ovp = VRBT_INSERT(vrt_privs, privs, vp); if (ovp == NULL) { if (storage == VRT_PRIV_ST_WS) WS_Release(ws, sizeof *vp); return (vp->priv); } if (storage == VRT_PRIV_ST_WS) WS_Release(ws, 0); else if (storage == VRT_PRIV_ST_HEAP) free(vp); else WRONG("priv storage"); return (ovp->priv); } static struct vrt_privs * vrt_priv_task_context(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); /* In pipe mode, both req and bo are set. We use req */ assert(ctx->req == NULL || ctx->bo == NULL || ctx->method == VCL_MET_PIPE || ctx->method == 0); if (ctx->req) { CHECK_OBJ(ctx->req, REQ_MAGIC); return (ctx->req->privs); } if (ctx->bo) { CHECK_OBJ(ctx->bo, BUSYOBJ_MAGIC); return (ctx->bo->privs); } ASSERT_CLI(); return (cli_task_privs); } struct vmod_priv * VRT_priv_task_get(VRT_CTX, const void *vmod_id) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); return (vrt_priv_dynamic_get( vrt_priv_task_context(ctx), (uintptr_t)vmod_id)); } struct vmod_priv * VRT_priv_task(VRT_CTX, const void *vmod_id) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); return (vrt_priv_dynamic( ctx->ws, vrt_priv_task_context(ctx), (uintptr_t)vmod_id)); } /* * XXX #3498 on VRT_fail(): Would be better to move the PRIV_TOP check to VCC * * This will fail in the preamble of any VCL SUB containing a call to a vmod * function with a PRIV_TOP argument, which might not exactly be pola */ #define VRT_PRIV_TOP_PREP(ctx, req, sp, top) do { \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ req = (ctx)->req; \ if (req == NULL) { \ VRT_fail(ctx, "PRIV_TOP is only accessible " \ "in client VCL context"); \ return (NULL); \ } \ CHECK_OBJ(req, REQ_MAGIC); \ sp = (ctx)->sp; \ CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); \ top = (req)->top; \ CHECK_OBJ_NOTNULL(top, REQTOP_MAGIC); \ req = (top)->topreq; \ CHECK_OBJ_NOTNULL(req, REQ_MAGIC); \ } while(0) struct vmod_priv * VRT_priv_top_get(VRT_CTX, const void *vmod_id) { struct req *req; struct sess *sp; struct reqtop *top; struct vmod_priv *priv; VRT_PRIV_TOP_PREP(ctx, req, sp, top); Lck_Lock(&sp->mtx); priv = vrt_priv_dynamic_get(top->privs, (uintptr_t)vmod_id); Lck_Unlock(&sp->mtx); return (priv); } struct vmod_priv * VRT_priv_top(VRT_CTX, const void *vmod_id) { struct req *req; struct sess *sp; struct reqtop *top; struct vmod_priv *priv; VRT_PRIV_TOP_PREP(ctx, req, sp, top); Lck_Lock(&sp->mtx); priv = 
vrt_priv_dynamic(req->ws, top->privs, (uintptr_t)vmod_id); Lck_Unlock(&sp->mtx); return (priv); } /*-------------------------------------------------------------------- */ void VRT_priv_fini(VRT_CTX, const struct vmod_priv *p) { const struct vmod_priv_methods *m; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); m = p->methods; if (m == NULL) return; CHECK_OBJ(m, VMOD_PRIV_METHODS_MAGIC); if (p->priv == NULL || m->fini == NULL) return; // XXX remove me after soak in VRT_CTX_Assert(ctx); m->fini(ctx, p->priv); assert(ctx->vpi->handling == 0 || ctx->vpi->handling == VCL_RET_FAIL); } /*--------------------------------------------------------------------*/ void VCL_TaskEnter(struct vrt_privs *privs) { VRTPRIV_init(privs); } void VCL_TaskLeave(VRT_CTX, struct vrt_privs *privs) { struct vrt_priv *vp, *vp1; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(ctx->vpi); assert(ctx->vpi->handling == 0 || ctx->vpi->handling == VCL_RET_FAIL); /* * NB: We don't bother removing entries as we finish them because it's * a costly operation. Instead we safely walk the whole tree and clear * the head at the very end. */ VRBT_FOREACH_SAFE(vp, vrt_privs, privs, vp1) { CHECK_OBJ(vp, VRT_PRIV_MAGIC); VRT_priv_fini(ctx, vp->priv); if (vp->storage == VRT_PRIV_ST_HEAP) free(vp); } ZERO_OBJ(privs, sizeof *privs); } varnish-7.5.0/bin/varnishd/cache/cache_vrt_re.c000066400000000000000000000061131457605730600214340ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * Runtime support for compiled VCL programs, regexps */ #include "config.h" #include #include "cache_varnishd.h" #include "vcc_interface.h" void VPI_re_init(vre_t **rep, const char *re) { vre_t *t; int error, erroroffset; /* This was already check-compiled by the VCL compiler */ t = VRE_compile(re, 0, &error, &erroroffset, cache_param->pcre2_jit_compilation); AN(t); *rep = t; } void VPI_re_fini(vre_t *rep) { vre_t *vv; vv = rep; if (rep != NULL) VRE_free(&vv); } VCL_BOOL VRT_re_match(VRT_CTX, const char *s, VCL_REGEX re) { struct vsb vsb[1]; char errbuf[VRE_ERROR_LEN]; int i; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (s == NULL) s = ""; AN(re); i = VRE_match(re, s, 0, 0, &cache_param->vre_limits); if (i >= 0) return (1); if (i < VRE_ERROR_NOMATCH ) { AN(VSB_init(vsb, errbuf, sizeof errbuf)); AZ(VRE_error(vsb, i)); AZ(VSB_finish(vsb)); VSB_fini(vsb); VRT_fail(ctx, "Regexp matching failed: %s", errbuf); } return (0); } VCL_STRING VRT_regsub(VRT_CTX, int all, VCL_STRING str, VCL_REGEX re, VCL_STRING sub) { struct vsb vsb[1]; const char *res; uintptr_t snap; int i; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(re); if (str == NULL) str = ""; if (sub == NULL) sub = ""; snap = WS_Snapshot(ctx->ws); WS_VSB_new(vsb, ctx->ws); i = VRE_sub(re, str, sub, vsb, &cache_param->vre_limits, all); res = WS_VSB_finish(vsb, ctx->ws, NULL); if (i < VRE_ERROR_NOMATCH) VRT_fail(ctx, "regsub: Regexp matching returned %d", i); else if (res == NULL) VRT_fail(ctx, "regsub: Out of workspace"); else if (i > 0) return (res); WS_Reset(ctx->ws, snap); return (str); } varnish-7.5.0/bin/varnishd/cache/cache_vrt_var.c000066400000000000000000000703701457605730600216240ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Runtime support for compiled VCL programs */ #include "config.h" #include #include "cache_varnishd.h" #include "cache_objhead.h" #include "cache_transport.h" #include "common/heritage.h" #include "vcl.h" #include "vtim.h" #include "vtcp.h" #include "vrt_obj.h" #define VRT_TMO(tmo) (isinf(tmo) ? 
VRT_DECIMAL_MAX : tmo) static char vrt_hostname[255] = ""; /*-------------------------------------------------------------------- * VRT variables relating to first line of HTTP/1.1 req/resp */ static void vrt_do_strands(VRT_CTX, struct http *hp, int fld, const char *err, const char *str, VCL_STRANDS s) { const char *b; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); b = VRT_StrandsWS(hp->ws, str, s); if (b == NULL) { VRT_fail(ctx, "Workspace overflow (%s)", err); WS_MarkOverflow(hp->ws); return; } if (*b == '\0') { VRT_fail(ctx, "Setting %s to empty string", err); return; } http_SetH(hp, fld, b); } #define VRT_HDR_L(obj, hdr, fld) \ VCL_VOID \ VRT_l_##obj##_##hdr(VRT_CTX, const char *str, VCL_STRANDS s) \ { \ \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ vrt_do_strands(ctx, ctx->http_##obj, fld, #obj "." #hdr, str, s); \ } #define VRT_HDR_R(obj, hdr, fld) \ VCL_STRING \ VRT_r_##obj##_##hdr(VRT_CTX) \ { \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->http_##obj, HTTP_MAGIC); \ return (ctx->http_##obj->hd[fld].b); \ } #define VRT_HDR_LR(obj, hdr, fld) \ VRT_HDR_L(obj, hdr, fld) \ VRT_HDR_R(obj, hdr, fld) #define VRT_STATUS_L(obj) \ VCL_VOID \ VRT_l_##obj##_status(VRT_CTX, VCL_INT num) \ { \ \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->http_##obj, HTTP_MAGIC); \ if (num < 0) \ VRT_fail(ctx, "%s.status (%jd) is negative", \ #obj, (intmax_t)num); \ else if (num > 65535) \ VRT_fail(ctx, "%s.status (%jd) > 65535", \ #obj, (intmax_t)num); \ else if ((num % 1000) < 100) \ VRT_fail(ctx, "illegal %s.status (%jd) (..0##)", \ #obj, (intmax_t)num); \ else \ http_SetStatus(ctx->http_##obj, (uint16_t)num, NULL); \ } #define VRT_STATUS_R(obj) \ VCL_INT \ VRT_r_##obj##_status(VRT_CTX) \ { \ \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->http_##obj, HTTP_MAGIC); \ return (ctx->http_##obj->status); \ } VRT_HDR_LR(req, method, HTTP_HDR_METHOD) VRT_HDR_LR(req, url, HTTP_HDR_URL) VRT_HDR_LR(req, proto, HTTP_HDR_PROTO) VRT_HDR_R(req_top, method, HTTP_HDR_METHOD) VRT_HDR_R(req_top, url, HTTP_HDR_URL) VRT_HDR_R(req_top, proto, HTTP_HDR_PROTO) VRT_HDR_LR(resp, proto, HTTP_HDR_PROTO) VRT_HDR_LR(resp, reason, HTTP_HDR_REASON) VRT_STATUS_L(resp) VRT_STATUS_R(resp) VRT_HDR_LR(bereq, method, HTTP_HDR_METHOD) VRT_HDR_LR(bereq, url, HTTP_HDR_URL) VRT_HDR_LR(bereq, proto, HTTP_HDR_PROTO) VRT_HDR_LR(beresp, proto, HTTP_HDR_PROTO) VRT_HDR_LR(beresp, reason, HTTP_HDR_REASON) VRT_STATUS_L(beresp) VRT_STATUS_R(beresp) /*-------------------------------------------------------------------- * Pulling things out of the packed object->http */ VCL_INT VRT_r_obj_status(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); CHECK_OBJ_NOTNULL(ctx->req->objcore, OBJCORE_MAGIC); return (HTTP_GetStatusPack(ctx->req->wrk, ctx->req->objcore)); } VCL_STRING VRT_r_obj_proto(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); CHECK_OBJ_NOTNULL(ctx->req->objcore, OBJCORE_MAGIC); return (HTTP_GetHdrPack(ctx->req->wrk, ctx->req->objcore, H__Proto)); } VCL_STRING VRT_r_obj_reason(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); CHECK_OBJ_NOTNULL(ctx->req->objcore, OBJCORE_MAGIC); return (HTTP_GetHdrPack(ctx->req->wrk, ctx->req->objcore, H__Reason)); } /*-------------------------------------------------------------------- * beresp bool-fields */ static inline int beresp_filter_fixed(VRT_CTX, const char *s) { if (ctx->bo->vfp_filter_list == NULL) return (0); VRT_fail(ctx, "beresp.filters are 
already fixed, beresp.%s is undefined", s); return (1); } #define VBERESPWF0(ctx, str) (void) 0 #define VBERESPWF1(ctx, str) do { \ if (beresp_filter_fixed((ctx), str)) \ return; \ } while(0) #define VBERESPW0(field, str, fltchk) #define VBERESPW1(field, str, fltchk) \ void \ VRT_l_beresp_##field(VRT_CTX, VCL_BOOL a) \ { \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); \ VBERESPWF##fltchk(ctx, str); \ ctx->bo->field = a ? 1 : 0; \ } #define VBERESPRF0(ctx, str) (void) 0 #define VBERESPRF1(ctx, str) do { \ if (beresp_filter_fixed((ctx), str)) \ return (0); \ } while(0) #define VBERESPR1(field, str, fltchk) \ VCL_BOOL \ VRT_r_beresp_##field(VRT_CTX) \ { \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); \ VBERESPRF##fltchk(ctx, str); \ return (ctx->bo->field); \ } #define BERESP_FLAG(l, r, w, f, d) \ VBERESPR##r(l, #l, f) \ VBERESPW##w(l, #l, f) #include "tbl/beresp_flags.h" #undef VBERESPWF0 #undef VBERESPWF1 #undef VBERESPW0 #undef VBERESPW1 #undef VBERESPRF0 #undef VBERESPRF1 #undef VBERESPR1 /*-------------------------------------------------------------------- * bereq bool-fields */ #define VBEREQR0(field, str) #define VBEREQR1(field, str) \ VCL_BOOL \ VRT_r_bereq_##field(VRT_CTX) \ { \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); \ return (ctx->bo->field); \ } #define BEREQ_FLAG(l, r, w, d) \ VBEREQR##r(l, #l) #include "tbl/bereq_flags.h" #undef VBEREQR0 #undef VBEREQR1 /*--------------------------------------------------------------------*/ VCL_BOOL VRT_r_bereq_uncacheable(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); return (ctx->bo->uncacheable); } VCL_VOID VRT_l_beresp_uncacheable(VRT_CTX, VCL_BOOL a) { struct objcore *oc; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo->fetch_objcore, OBJCORE_MAGIC); if (ctx->bo->uncacheable && !a) { VSLb(ctx->vsl, SLT_VCL_Error, "Ignoring attempt to reset beresp.uncacheable"); } else if (a) { ctx->bo->uncacheable = 1; } oc = ctx->bo->fetch_objcore; VSLb(ctx->vsl, SLT_TTL, "VCL %.0f %.0f %.0f %.0f %s", \ oc->ttl, oc->grace, oc->keep, oc->t_origin, \ ctx->bo->uncacheable ? 
"uncacheable" : "cacheable");\ } VCL_BOOL VRT_r_beresp_uncacheable(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); return (ctx->bo->uncacheable); } VCL_VOID VRT_l_req_trace(VRT_CTX, VCL_BOOL a) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); ctx->req->trace = a; VRT_trace(ctx, a); } VCL_VOID VRT_l_bereq_trace(VRT_CTX, VCL_BOOL a) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); ctx->bo->trace = a; VRT_trace(ctx, a); } /*--------------------------------------------------------------------*/ VCL_BYTES VRT_r_beresp_transit_buffer(VRT_CTX) { struct objcore *oc; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); oc = ctx->bo->fetch_objcore; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); return oc->boc->transit_buffer; } VCL_VOID VRT_l_beresp_transit_buffer(VRT_CTX, VCL_BYTES value) { struct objcore *oc; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); oc = ctx->bo->fetch_objcore; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); oc->boc->transit_buffer = value; } /*--------------------------------------------------------------------*/ VCL_STRING VRT_r_client_identity(VRT_CTX) { const char *id; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (ctx->req != NULL) { CHECK_OBJ(ctx->req, REQ_MAGIC); id = ctx->req->client_identity; } else { CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); id = ctx->bo->client_identity; } if (id != NULL) return (id); return (SES_Get_String_Attr(ctx->sp, SA_CLIENT_IP)); } VCL_VOID VRT_l_client_identity(VRT_CTX, const char *str, VCL_STRANDS s) { const char *b; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); b = VRT_StrandsWS(ctx->req->http->ws, str, s); if (b == NULL) { VSLb(ctx->vsl, SLT_LostHeader, "client.identity"); WS_MarkOverflow(ctx->req->http->ws); return; } ctx->req->client_identity = b; } /*--------------------------------------------------------------------*/ #define BEREQ_TIMEOUT(prefix, which) \ VCL_VOID \ VRT_l_bereq_##which(VRT_CTX, VCL_DURATION num) \ { \ \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); \ ctx->bo->which = num; \ } \ \ VCL_DURATION \ VRT_r_bereq_##which(VRT_CTX) \ { \ vtim_dur res; \ \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); \ res = BUSYOBJ_TMO(ctx->bo, prefix, which); \ return (VRT_TMO(res)); \ } \ \ VCL_VOID \ VRT_u_bereq_##which(VRT_CTX) \ { \ \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); \ ctx->bo->which = NAN; \ } BEREQ_TIMEOUT(, connect_timeout) BEREQ_TIMEOUT(, first_byte_timeout) BEREQ_TIMEOUT(, between_bytes_timeout) BEREQ_TIMEOUT(pipe_, task_deadline) /*--------------------------------------------------------------------*/ VCL_STRING VRT_r_beresp_backend_name(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); if (ctx->bo->director_resp != NULL) return (ctx->bo->director_resp->vcl_name); return (NULL); } /*-------------------------------------------------------------------- * Backends do not in general have a IP number (any more) and this * variable is really not about the backend, but the backend connection. 
* XXX: we may need a more general beresp.backend.{details|ident} */ VCL_IP VRT_r_beresp_backend_ip(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); return (VDI_GetIP(ctx->bo)); } /*--------------------------------------------------------------------*/ VCL_STEVEDORE VRT_r_req_storage(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); return (ctx->req->storage); } VCL_VOID VRT_l_req_storage(VRT_CTX, VCL_STEVEDORE stv) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); ctx->req->storage = stv; } /*--------------------------------------------------------------------*/ VCL_STEVEDORE VRT_r_beresp_storage(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); return (ctx->bo->storage); } VCL_VOID VRT_l_beresp_storage(VRT_CTX, VCL_STEVEDORE stv) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); ctx->bo->storage = stv; } /*-------------------------------------------------------------------- * VCL <= 4.0 ONLY */ #include "storage/storage.h" VCL_STRING VRT_r_beresp_storage_hint(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); if (ctx->bo->storage == NULL) return (NULL); CHECK_OBJ_NOTNULL(ctx->bo->storage, STEVEDORE_MAGIC); return (ctx->bo->storage->vclname); } VCL_VOID VRT_l_beresp_storage_hint(VRT_CTX, const char *str, VCL_STRANDS s) { const char *p; VCL_STEVEDORE stv; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); p = VRT_StrandsWS(ctx->ws, str, s); if (p == NULL) { VSLb(ctx->vsl, SLT_LostHeader, "storage_hint"); WS_MarkOverflow(ctx->ws); return; } stv = VRT_stevedore(p); if (stv != NULL) ctx->bo->storage = stv; } /*--------------------------------------------------------------------*/ VCL_STEVEDORE VRT_r_obj_storage(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); CHECK_OBJ_NOTNULL(ctx->req->objcore, OBJCORE_MAGIC); AN(ctx->req->objcore->stobj); CHECK_OBJ_NOTNULL(ctx->req->objcore->stobj->stevedore, STEVEDORE_MAGIC); return (ctx->req->objcore->stobj->stevedore); } /*--------------------------------------------------------------------*/ VCL_BOOL VRT_r_obj_can_esi(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); CHECK_OBJ_NOTNULL(ctx->req->objcore, OBJCORE_MAGIC); return (ObjHasAttr(ctx->req->wrk, ctx->req->objcore, OA_ESIDATA)); } /*--------------------------------------------------------------------*/ #define REQ_VAR_L(nm, elem, type, extra) \ \ VCL_VOID \ VRT_l_req_##nm(VRT_CTX, type arg) \ { \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); \ extra; \ ctx->req->elem = arg; \ } #define REQ_VAR_R(nm, elem, type) \ \ type \ VRT_r_req_##nm(VRT_CTX) \ { \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); \ return (ctx->req->elem); \ } REQ_VAR_R(backend_hint, director_hint, VCL_BACKEND) REQ_VAR_L(ttl, d_ttl, VCL_DURATION, if (!(arg>0.0)) arg = 0;) REQ_VAR_R(ttl, d_ttl, VCL_DURATION) REQ_VAR_L(grace, d_grace, VCL_DURATION, if (!(arg>0.0)) arg = 0;) REQ_VAR_R(grace, d_grace, VCL_DURATION) VCL_VOID VRT_l_req_backend_hint(VRT_CTX, VCL_BACKEND be) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); VRT_Assign_Backend(&ctx->req->director_hint, be); } /*--------------------------------------------------------------------*/ VCL_VOID VRT_l_bereq_backend(VRT_CTX, VCL_BACKEND be) { 
CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); VRT_Assign_Backend(&ctx->bo->director_req, be); } VCL_BACKEND VRT_r_bereq_backend(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); return (ctx->bo->director_req); } VCL_BACKEND VRT_r_beresp_backend(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); return (ctx->bo->director_resp); } /*--------------------------------------------------------------------*/ VCL_VOID VRT_u_bereq_body(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); if (ctx->bo->bereq_body != NULL) { (void)HSH_DerefObjCore(ctx->bo->wrk, &ctx->bo->bereq_body, 0); http_Unset(ctx->bo->bereq, H_Content_Length); } if (ctx->bo->req != NULL) { CHECK_OBJ(ctx->bo->req, REQ_MAGIC); ctx->bo->req = NULL; ObjSetState(ctx->bo->wrk, ctx->bo->fetch_objcore, BOS_REQ_DONE); http_Unset(ctx->bo->bereq, H_Content_Length); } } /*--------------------------------------------------------------------*/ VCL_VOID VRT_l_req_esi(VRT_CTX, VCL_BOOL process_esi) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); assert(ctx->syntax <= 40); /* * Only allow you to turn of esi in the main request * else everything gets confused * NOTE: this is not true, but we do not change behavior * for vcl 4.0. For 4.1, see VRT_l_resp_do_esi() */ if (IS_TOPREQ(ctx->req)) ctx->req->disable_esi = !process_esi; } VCL_BOOL VRT_r_req_esi(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); assert(ctx->syntax <= 40); return (!ctx->req->disable_esi); } VCL_INT VRT_r_req_esi_level(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); return (ctx->req->esi_level); } /*--------------------------------------------------------------------*/ VCL_BOOL VRT_r_req_can_gzip(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); return (RFC2616_Req_Gzip(ctx->req->http)); // XXX ? } /*--------------------------------------------------------------------*/ VCL_INT VRT_r_req_restarts(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); return (ctx->req->restarts); } VCL_INT VRT_r_bereq_retries(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); return (ctx->bo->retries); } /*--------------------------------------------------------------------*/ VCL_STRING VRT_r_req_transport(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); CHECK_OBJ_NOTNULL(ctx->req->transport, TRANSPORT_MAGIC); return (ctx->req->transport->name); } /*-------------------------------------------------------------------- * In exp.*: * t_origin is absolute * ttl is relative to t_origin * grace&keep are relative to ttl * In VCL: * ttl is relative to "ttl_now", which is t_req on the client * side, except in vcl_deliver, where it is ctx->now. On the * fetch side "ttl_now" is ctx->now (which is bo->t_prev). * grace&keep are relative to ttl */ static double ttl_now(VRT_CTX) { if (ctx->bo) { return (ctx->now); } else { CHECK_OBJ(ctx->req, REQ_MAGIC); return (ctx->method == VCL_MET_DELIVER ? 
ctx->now : ctx->req->t_req); } } #define VRT_DO_EXP_L(which, oc, fld, offset) \ \ VCL_VOID \ VRT_l_##which##_##fld(VRT_CTX, VCL_DURATION a) \ { \ \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ a += (offset); \ if (a < 0.0) \ a = 0.0; \ oc->fld = a; \ VSLb(ctx->vsl, SLT_TTL, "VCL %.0f %.0f %.0f %.0f %s", \ oc->ttl, oc->grace, oc->keep, oc->t_origin, \ ctx->bo->uncacheable ? "uncacheable" : "cacheable");\ } #define VRT_DO_EXP_R(which, oc, fld, offset) \ \ VCL_DURATION \ VRT_r_##which##_##fld(VRT_CTX) \ { \ double d; \ \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ d = oc->fld; \ if (d <= 0.0) \ d = 0.0; \ d -= (offset); \ return (d); \ } /*lint -save -e835 */ // Zero right hand arg to '-' VRT_DO_EXP_R(obj, ctx->req->objcore, ttl, ttl_now(ctx) - ctx->req->objcore->t_origin) VRT_DO_EXP_R(obj, ctx->req->objcore, grace, 0) VRT_DO_EXP_R(obj, ctx->req->objcore, keep, 0) VRT_DO_EXP_L(beresp, ctx->bo->fetch_objcore, ttl, ttl_now(ctx) - ctx->bo->fetch_objcore->t_origin) VRT_DO_EXP_R(beresp, ctx->bo->fetch_objcore, ttl, ttl_now(ctx) - ctx->bo->fetch_objcore->t_origin) VRT_DO_EXP_L(beresp, ctx->bo->fetch_objcore, grace, 0) VRT_DO_EXP_R(beresp, ctx->bo->fetch_objcore, grace, 0) VRT_DO_EXP_L(beresp, ctx->bo->fetch_objcore, keep, 0) VRT_DO_EXP_R(beresp, ctx->bo->fetch_objcore, keep, 0) /*lint -restore */ // XXX more assertions? #define VRT_DO_TIME_R(which, where, field) \ \ VCL_TIME \ VRT_r_##which##_time(VRT_CTX) \ { \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ AN((ctx)->where); \ \ return ((ctx)->where->field); \ } VRT_DO_TIME_R(req, req, t_req) VRT_DO_TIME_R(req_top, req->top->topreq, t_req) VRT_DO_TIME_R(resp, req, t_resp) VRT_DO_TIME_R(bereq, bo, t_first) VRT_DO_TIME_R(beresp, bo, t_resp) VRT_DO_TIME_R(obj, req->objcore, t_origin) /*-------------------------------------------------------------------- */ #define VRT_DO_AGE_R(which, oc) \ \ VCL_DURATION \ VRT_r_##which##_##age(VRT_CTX) \ { \ \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ return (ttl_now(ctx) - oc->t_origin); \ } VRT_DO_AGE_R(obj, ctx->req->objcore) VRT_DO_AGE_R(beresp, ctx->bo->fetch_objcore) /*-------------------------------------------------------------------- * [[be]req|sess].xid */ VCL_INT VRT_r_req_xid(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); CHECK_OBJ_NOTNULL(ctx->req->http, HTTP_MAGIC); AN(ctx->req->vsl); return (VXID(ctx->req->vsl->wid)); } VCL_INT VRT_r_bereq_xid(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); AN(ctx->bo->vsl); return (VXID(ctx->bo->vsl->wid)); } VCL_INT VRT_r_sess_xid(VRT_CTX) { struct sess *sp; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (ctx->req) { CHECK_OBJ(ctx->req, REQ_MAGIC); sp = ctx->req->sp; } else { CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); sp = ctx->bo->sp; } CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); return (VXID(sp->vxid)); } /*-------------------------------------------------------------------- * req fields */ #define VREQW0(field) #define VREQW1(field) \ VCL_VOID \ VRT_l_req_##field(VRT_CTX, VCL_BOOL a) \ { \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); \ ctx->req->field = a ? 
1 : 0; \ } #define VREQR0(field) #define VREQR1(field) \ VCL_BOOL \ VRT_r_req_##field(VRT_CTX) \ { \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); \ return (ctx->req->field); \ } #define REQ_FLAG(l, r, w, d) \ VREQR##r(l) \ VREQW##w(l) #include "tbl/req_flags.h" /*--------------------------------------------------------------------*/ #define GIP(fld) \ VCL_IP \ VRT_r_##fld##_ip(VRT_CTX) \ { \ struct suckaddr *sa; \ \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->sp, SESS_MAGIC); \ AZ(SES_Get_##fld##_addr(ctx->sp, &sa)); \ return (sa); \ } GIP(local) GIP(remote) GIP(client) GIP(server) #undef GIP /*-------------------------------------------------------------------- * local.[endpoint|socket] */ #define LOC(var,fld) \ VCL_STRING \ VRT_r_local_##var(VRT_CTX) \ { \ struct sess *sp; \ \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ if (ctx->req) { \ CHECK_OBJ(ctx->req, REQ_MAGIC); \ sp = ctx->req->sp; \ } else { \ CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); \ sp = ctx->bo->sp; \ } \ \ CHECK_OBJ_NOTNULL(sp->listen_sock, LISTEN_SOCK_MAGIC); \ AN(sp->listen_sock->fld); \ return (sp->listen_sock->fld); \ } LOC(endpoint, endpoint) LOC(socket, name) #undef LOC /*--------------------------------------------------------------------*/ VCL_STRING VRT_r_server_identity(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (heritage.identity != NULL) return (heritage.identity); else return ("varnishd"); } VCL_STRING VRT_r_server_hostname(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (vrt_hostname[0] == '\0') AZ(gethostname(vrt_hostname, sizeof(vrt_hostname))); return (vrt_hostname); } /*--------------------------------------------------------------------*/ VCL_INT VRT_r_obj_hits(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); CHECK_OBJ_NOTNULL(ctx->req->objcore, OBJCORE_MAGIC); if (ctx->method == VCL_MET_HIT) return (ctx->req->objcore->hits); return (ctx->req->is_hit ? ctx->req->objcore->hits : 0); } VCL_BOOL VRT_r_obj_uncacheable(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); CHECK_OBJ_NOTNULL(ctx->req->objcore, OBJCORE_MAGIC); return (ctx->req->objcore->flags & OC_F_HFM ? 1 : 0); } /*--------------------------------------------------------------------*/ VCL_BOOL VRT_r_resp_is_streaming(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); if (ctx->req->objcore == NULL) return (0); /* When called from vcl_synth */ CHECK_OBJ_NOTNULL(ctx->req->objcore, OBJCORE_MAGIC); return (ctx->req->objcore->boc == NULL ? 
0 : 1); } /*--------------------------------------------------------------------*/ static inline int resp_filter_fixed(VRT_CTX, const char *s) { if (ctx->req->vdp_filter_list == NULL) return (0); VRT_fail(ctx, "resp.filters are already fixed, %s is undefined", s); return (1); } VCL_VOID VRT_l_resp_do_esi(VRT_CTX, VCL_BOOL process_esi) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); assert(ctx->syntax >= 41); if (resp_filter_fixed(ctx, "resp.do_esi")) return; ctx->req->disable_esi = !process_esi; } VCL_BOOL VRT_r_resp_do_esi(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); assert(ctx->syntax >= 41); if (resp_filter_fixed(ctx, "resp.do_esi")) return (0); return (!ctx->req->disable_esi); } /*--------------------------------------------------------------------*/ #define VRT_BODY_L(which) \ VCL_VOID \ VRT_l_##which##_body(VRT_CTX, enum lbody_e type, \ const char *str, VCL_BODY body) \ { \ int n; \ struct vsb *vsb; \ VCL_STRANDS s; \ VCL_BLOB b; \ \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ AN(body); \ CAST_OBJ_NOTNULL(vsb, ctx->specific, VSB_MAGIC); \ if (type == LBODY_SET_STRING || type == LBODY_SET_BLOB) \ VSB_clear(vsb); \ if (type == LBODY_SET_BLOB || type == LBODY_ADD_BLOB) { \ AZ(str); \ b = body; \ VSB_bcat(vsb, b->blob, b->len); \ return; \ } \ if (str != NULL) \ VSB_cat(vsb, str); \ assert(type == LBODY_SET_STRING || \ type == LBODY_ADD_STRING); \ s = body; \ for (n = 0; s != NULL && n < s->n; n++) \ if (s->p[n] != NULL) \ VSB_cat(vsb, s->p[n]); \ } VRT_BODY_L(beresp) VRT_BODY_L(resp) /*--------------------------------------------------------------------*/ /* digest */ #define BLOB_HASH_TYPE 0x00d16357 VCL_BLOB VRT_r_req_hash(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); return (VRT_blob(ctx, "req.hash", ctx->req->digest, DIGEST_LEN, BLOB_HASH_TYPE)); } VCL_BLOB VRT_r_bereq_hash(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->bo, BUSYOBJ_MAGIC); return (VRT_blob(ctx, "bereq.hash", ctx->bo->digest, DIGEST_LEN, BLOB_HASH_TYPE)); } /*--------------------------------------------------------------------*/ #define HTTP_VAR(x) \ VCL_HTTP \ VRT_r_##x(VRT_CTX) \ { \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->http_##x, HTTP_MAGIC); \ return (ctx->http_##x); \ } HTTP_VAR(req) HTTP_VAR(resp) HTTP_VAR(bereq) HTTP_VAR(beresp) /*--------------------------------------------------------------------*/ static inline void set_idle_send_timeout(const struct sess *sp, VCL_DURATION d) { struct timeval tv = VTIM_timeval_sock(d); VTCP_Assert(setsockopt(sp->fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof tv)); } #define SESS_VAR_DUR(x, setter) \ VCL_VOID \ VRT_l_sess_##x(VRT_CTX, VCL_DURATION d) \ { \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->sp, SESS_MAGIC); \ d = vmax(d, 0.0); \ setter; \ ctx->sp->x = d; \ } \ \ VCL_DURATION \ VRT_r_sess_##x(VRT_CTX) \ { \ vtim_dur res; \ \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->sp, SESS_MAGIC); \ res = SESS_TMO(ctx->sp, x); \ return (VRT_TMO(res)); \ } \ \ VCL_VOID \ VRT_u_sess_##x(VRT_CTX) \ { \ CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); \ CHECK_OBJ_NOTNULL(ctx->sp, SESS_MAGIC); \ ctx->sp->x = NAN; \ } SESS_VAR_DUR(timeout_idle, ) SESS_VAR_DUR(timeout_linger, ) SESS_VAR_DUR(send_timeout, ) SESS_VAR_DUR(idle_send_timeout, set_idle_send_timeout(ctx->sp, d)) 
varnish-7.5.0/bin/varnishd/cache/cache_vrt_vcl.c000066400000000000000000000375011457605730600216170ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2016 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include #include #include "cache_varnishd.h" #include "vcl.h" #include "vtim.h" #include "vbm.h" #include "cache_director.h" #include "cache_transport.h" #include "cache_vcl.h" #include "vcc_interface.h" /*--------------------------------------------------------------------*/ const char * VCL_Return_Name(unsigned r) { switch (r) { #define VCL_RET_MAC(l, U, B) \ case VCL_RET_##U: \ return(#l); #include "tbl/vcl_returns.h" default: return (NULL); } } const char * VCL_Method_Name(unsigned m) { switch (m) { #define VCL_MET_MAC(func, upper, typ, bitmap) \ case VCL_MET_##upper: \ return (#upper); #include "tbl/vcl_returns.h" default: return (NULL); } } /*--------------------------------------------------------------------*/ void VCL_Refresh(struct vcl **vcc) { while (vcl_active == NULL) (void)usleep(100000); ASSERT_VCL_ACTIVE(); if (*vcc == vcl_active) return; VCL_Update(vcc, NULL); } void VCL_Recache(const struct worker *wrk, struct vcl **vclp) { AN(wrk); AN(vclp); CHECK_OBJ_NOTNULL(*vclp, VCL_MAGIC); ASSERT_VCL_ACTIVE(); if (*vclp != vcl_active || wrk->wpriv->vcl == vcl_active) { VCL_Rel(vclp); return; } if (wrk->wpriv->vcl != NULL) VCL_Rel(&wrk->wpriv->vcl); wrk->wpriv->vcl = *vclp; *vclp = NULL; } void VCL_Ref(struct vcl *vcl) { CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); assert(!vcl->temp->is_cold); Lck_Lock(&vcl_mtx); assert(vcl->busy > 0); vcl->busy++; Lck_Unlock(&vcl_mtx); } void VCL_Rel(struct vcl **vcc) { struct vcl *vcl; TAKE_OBJ_NOTNULL(vcl, vcc, VCL_MAGIC); Lck_Lock(&vcl_mtx); assert(vcl->busy > 0); vcl->busy--; /* * We do not garbage collect discarded VCL's here, that happens * in VCL_Poll() which is called from the CLI thread. 
*/ Lck_Unlock(&vcl_mtx); } /*--------------------------------------------------------------------*/ static void vcldir_free(struct vcldir *vdir) { CHECK_OBJ_NOTNULL(vdir, VCLDIR_MAGIC); CHECK_OBJ_NOTNULL(vdir->dir, DIRECTOR_MAGIC); AZ(vdir->refcnt); Lck_Delete(&vdir->dlck); free(vdir->cli_name); FREE_OBJ(vdir->dir); FREE_OBJ(vdir); } static VCL_BACKEND vcldir_surplus(struct vcldir *vdir) { CHECK_OBJ_NOTNULL(vdir, VCLDIR_MAGIC); assert(vdir->refcnt == 1); vdir->refcnt = 0; vcldir_free(vdir); return (NULL); } VCL_BACKEND VRT_AddDirector(VRT_CTX, const struct vdi_methods *m, void *priv, const char *fmt, ...) { struct vsb *vsb; struct vcl *vcl; struct vcldir *vdir; const struct vcltemp *temp; va_list ap; int i; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(m, VDI_METHODS_MAGIC); AN(fmt); vcl = ctx->vcl; CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); // opportunistic, re-checked again under lock if (vcl->temp == VCL_TEMP_COOLING && !DO_DEBUG(DBG_VTC_MODE)) return (NULL); ALLOC_OBJ(vdir, VCLDIR_MAGIC); AN(vdir); ALLOC_OBJ(vdir->dir, DIRECTOR_MAGIC); AN(vdir->dir); vdir->dir->vdir = vdir; vdir->methods = m; vdir->dir->priv = priv; vsb = VSB_new_auto(); AN(vsb); VSB_printf(vsb, "%s.", VCL_Name(vcl)); i = VSB_len(vsb); va_start(ap, fmt); VSB_vprintf(vsb, fmt, ap); va_end(ap); AZ(VSB_finish(vsb)); REPLACE(vdir->cli_name, VSB_data(vsb)); VSB_destroy(&vsb); vdir->dir->vcl_name = vdir->cli_name + i; vdir->vcl = vcl; vdir->admin_health = VDI_AH_AUTO; vdir->health_changed = VTIM_real(); vdir->refcnt++; Lck_New(&vdir->dlck, lck_director); vdir->dir->mtx = &vdir->dlck; /* NB: at this point we look at the VCL temperature after getting * through the trouble of creating the director even though it might * not be legal to do so. Because we change the VCL temperature before * sending COLD events we have to tolerate and undo attempts for the * COOLING case. * * To avoid deadlocks during vcl_BackendEvent, we only wait for vcl_mtx * if the vcl is busy (ref vcl_set_state()) */ while (1) { temp = vcl->temp; if (temp == VCL_TEMP_COOLING) return (vcldir_surplus(vdir)); if (vcl->busy == 0 && vcl->temp->is_warm) { if (! 
Lck_Trylock(&vcl_mtx)) break; usleep(10 * 1000); continue; } Lck_Lock(&vcl_mtx); break; } Lck_AssertHeld(&vcl_mtx); temp = vcl->temp; if (temp != VCL_TEMP_COOLING) VTAILQ_INSERT_TAIL(&vcl->director_list, vdir, list); if (temp->is_warm) VDI_Event(vdir->dir, VCL_EVENT_WARM); Lck_Unlock(&vcl_mtx); if (temp == VCL_TEMP_COOLING) return (vcldir_surplus(vdir)); if (!temp->is_warm && temp != VCL_TEMP_INIT) WRONG("Dynamic Backends can only be added to warm VCLs"); return (vdir->dir); } void VRT_StaticDirector(VCL_BACKEND b) { struct vcldir *vdir; CHECK_OBJ_NOTNULL(b, DIRECTOR_MAGIC); vdir = b->vdir; CHECK_OBJ_NOTNULL(vdir, VCLDIR_MAGIC); assert(vdir->refcnt == 1); AZ(vdir->flags & VDIR_FLG_NOREFCNT); vdir->flags |= VDIR_FLG_NOREFCNT; } static void vcldir_retire(struct vcldir *vdir) { const struct vcltemp *temp; CHECK_OBJ_NOTNULL(vdir, VCLDIR_MAGIC); assert(vdir->refcnt == 0); CHECK_OBJ_NOTNULL(vdir->vcl, VCL_MAGIC); Lck_Lock(&vcl_mtx); temp = vdir->vcl->temp; VTAILQ_REMOVE(&vdir->vcl->director_list, vdir, list); Lck_Unlock(&vcl_mtx); if (temp->is_warm) VDI_Event(vdir->dir, VCL_EVENT_COLD); if (vdir->methods->destroy != NULL) vdir->methods->destroy(vdir->dir); vcldir_free(vdir); } static int vcldir_deref(struct vcldir *vdir) { int busy; CHECK_OBJ_NOTNULL(vdir, VCLDIR_MAGIC); AZ(vdir->flags & VDIR_FLG_NOREFCNT); Lck_Lock(&vdir->dlck); assert(vdir->refcnt > 0); busy = --vdir->refcnt; Lck_Unlock(&vdir->dlck); if (!busy) vcldir_retire(vdir); return (busy); } void VRT_DelDirector(VCL_BACKEND *dirp) { VCL_BACKEND dir; struct vcldir *vdir; TAKE_OBJ_NOTNULL(dir, dirp, DIRECTOR_MAGIC); vdir = dir->vdir; CHECK_OBJ_NOTNULL(vdir, VCLDIR_MAGIC); if (vdir->methods->release != NULL) vdir->methods->release(vdir->dir); if (vdir->flags & VDIR_FLG_NOREFCNT) { vdir->flags &= ~VDIR_FLG_NOREFCNT; AZ(vcldir_deref(vdir)); } else { (void) vcldir_deref(vdir); } } void VRT_Assign_Backend(VCL_BACKEND *dst, VCL_BACKEND src) { struct vcldir *vdir; AN(dst); CHECK_OBJ_ORNULL((*dst), DIRECTOR_MAGIC); CHECK_OBJ_ORNULL(src, DIRECTOR_MAGIC); if (*dst != NULL) { vdir = (*dst)->vdir; CHECK_OBJ_NOTNULL(vdir, VCLDIR_MAGIC); if (!(vdir->flags & VDIR_FLG_NOREFCNT)) (void)vcldir_deref(vdir); } if (src != NULL) { vdir = src->vdir; CHECK_OBJ_NOTNULL(vdir, VCLDIR_MAGIC); if (!(vdir->flags & VDIR_FLG_NOREFCNT)) { Lck_Lock(&vdir->dlck); assert(vdir->refcnt > 0); vdir->refcnt++; Lck_Unlock(&vdir->dlck); } } *dst = src; } void VRT_DisableDirector(VCL_BACKEND d) { struct vcldir *vdir; CHECK_OBJ_NOTNULL(d, DIRECTOR_MAGIC); vdir = d->vdir; CHECK_OBJ_NOTNULL(vdir, VCLDIR_MAGIC); vdir->admin_health = VDI_AH_DELETED; vdir->health_changed = VTIM_real(); } VCL_BACKEND VRT_LookupDirector(VRT_CTX, VCL_STRING name) { struct vcl *vcl; struct vcldir *vdir; VCL_BACKEND dd, d = NULL; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(name); assert(ctx->method & VCL_MET_TASK_H); ASSERT_CLI(); vcl = ctx->vcl; CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); Lck_Lock(&vcl_mtx); VTAILQ_FOREACH(vdir, &vcl->director_list, list) { dd = vdir->dir; if (strcmp(dd->vcl_name, name)) continue; d = dd; break; } Lck_Unlock(&vcl_mtx); return (d); } /*--------------------------------------------------------------------*/ VCL_BACKEND VCL_DefaultDirector(const struct vcl *vcl) { CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); CHECK_OBJ_NOTNULL(vcl->conf, VCL_CONF_MAGIC); return (*vcl->conf->default_director); } const char * VCL_Name(const struct vcl *vcl) { CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); return (vcl->loaded_name); } VCL_PROBE VCL_DefaultProbe(const struct vcl *vcl) { CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); 
CHECK_OBJ_NOTNULL(vcl->conf, VCL_CONF_MAGIC); return (vcl->conf->default_probe); } /*--------------------------------------------------------------------*/ void VRT_CTX_Assert(VRT_CTX) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); if (ctx->msg != NULL) CHECK_OBJ(ctx->msg, VSB_MAGIC); else AN(ctx->vsl); CHECK_OBJ_NOTNULL(ctx->vcl, VCL_MAGIC); WS_Assert(ctx->ws); CHECK_OBJ_ORNULL(ctx->sp, SESS_MAGIC); CHECK_OBJ_ORNULL(ctx->req, REQ_MAGIC); CHECK_OBJ_ORNULL(ctx->http_req, HTTP_MAGIC); CHECK_OBJ_ORNULL(ctx->http_req_top, HTTP_MAGIC); CHECK_OBJ_ORNULL(ctx->http_resp, HTTP_MAGIC); CHECK_OBJ_ORNULL(ctx->bo, BUSYOBJ_MAGIC); CHECK_OBJ_ORNULL(ctx->http_bereq, HTTP_MAGIC); CHECK_OBJ_ORNULL(ctx->http_beresp, HTTP_MAGIC); } struct vclref * VRT_VCL_Prevent_Cold(VRT_CTX, const char *desc) { struct vclref* ref; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(ctx->vcl, VCL_MAGIC); ALLOC_OBJ(ref, VCLREF_MAGIC); AN(ref); ref->vcl = ctx->vcl; REPLACE(ref->desc, desc); VCL_Ref(ctx->vcl); Lck_Lock(&vcl_mtx); VTAILQ_INSERT_TAIL(&ctx->vcl->ref_list, ref, list); Lck_Unlock(&vcl_mtx); return (ref); } void VRT_VCL_Allow_Cold(struct vclref **refp) { struct vcl *vcl; struct vclref *ref; TAKE_OBJ_NOTNULL(ref, refp, VCLREF_MAGIC); vcl = ref->vcl; CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); Lck_Lock(&vcl_mtx); assert(!VTAILQ_EMPTY(&vcl->ref_list)); VTAILQ_REMOVE(&vcl->ref_list, ref, list); Lck_Unlock(&vcl_mtx); VCL_Rel(&vcl); REPLACE(ref->desc, NULL); FREE_OBJ(ref); } struct vclref * VRT_VCL_Prevent_Discard(VRT_CTX, const char *desc) { struct vcl *vcl; struct vclref* ref; ASSERT_CLI(); CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(desc); AN(*desc); vcl = ctx->vcl; CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); assert(vcl->temp->is_warm); ALLOC_OBJ(ref, VCLREF_MAGIC); AN(ref); ref->vcl = vcl; REPLACE(ref->desc, desc); Lck_Lock(&vcl_mtx); VTAILQ_INSERT_TAIL(&vcl->ref_list, ref, list); vcl->nrefs++; Lck_Unlock(&vcl_mtx); return (ref); } void VRT_VCL_Allow_Discard(struct vclref **refp) { struct vcl *vcl; struct vclref *ref; TAKE_OBJ_NOTNULL(ref, refp, VCLREF_MAGIC); vcl = ref->vcl; CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); /* NB: A VCL may be released by a VMOD at any time, but it must happen * after a warmup and before the end of a cooldown. The release may or * may not happen while the same thread holds the temperature lock, so * instead we check that all references are gone in VCL_Nuke. */ Lck_Lock(&vcl_mtx); assert(!VTAILQ_EMPTY(&vcl->ref_list)); VTAILQ_REMOVE(&vcl->ref_list, ref, list); vcl->nrefs--; /* No garbage collection here, for the same reasons as in VCL_Rel. */ Lck_Unlock(&vcl_mtx); REPLACE(ref->desc, NULL); FREE_OBJ(ref); } /*-------------------------------------------------------------------- */ static int req_poll(struct worker *wrk, struct req *req) { struct req *top; /* NB: Since a fail transition leads to vcl_synth, the request may be * short-circuited twice. */ if (req->req_reset) { wrk->vpi->handling = VCL_RET_FAIL; return (-1); } top = req->top->topreq; CHECK_OBJ_NOTNULL(top, REQ_MAGIC); CHECK_OBJ_NOTNULL(top->transport, TRANSPORT_MAGIC); if (!FEATURE(FEATURE_VCL_REQ_RESET)) return (0); if (top->transport->poll == NULL) return (0); if (top->transport->poll(top) >= 0) return (0); VSLb_ts_req(req, "Reset", W_TIM_real(wrk)); wrk->stats->req_reset++; wrk->vpi->handling = VCL_RET_FAIL; req->req_reset = 1; return (-1); } /*-------------------------------------------------------------------- * Method functions to call into VCL programs. * * Either the request or busyobject must be specified, but not both. 
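 * (The one exception is vcl_pipe, where both are passed; vcl_call_method()
 * asserts method == VCL_MET_PIPE in that case.)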
* The workspace argument is where random VCL stuff gets space from. */ static void vcl_call_method(struct worker *wrk, struct req *req, struct busyobj *bo, void *specific, unsigned method, vcl_func_f *func, unsigned track_call) { uintptr_t rws = 0, aws; struct vrt_ctx ctx; struct vbitmap *vbm; void *p; size_t sz; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); INIT_OBJ(&ctx, VRT_CTX_MAGIC); if (bo != NULL) { CHECK_OBJ(bo, BUSYOBJ_MAGIC); CHECK_OBJ_NOTNULL(bo->vcl, VCL_MAGIC); VCL_Bo2Ctx(&ctx, bo); } if (req != NULL) { if (bo != NULL) assert(method == VCL_MET_PIPE); CHECK_OBJ(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->sp, SESS_MAGIC); CHECK_OBJ_NOTNULL(req->vcl, VCL_MAGIC); CHECK_OBJ_NOTNULL(req->top, REQTOP_MAGIC); if (req_poll(wrk, req)) return; VCL_Req2Ctx(&ctx, req); } assert(ctx.now != 0); ctx.specific = specific; ctx.method = method; if (track_call > 0) { rws = WS_Snapshot(wrk->aws); sz = VBITMAP_SZ(track_call); p = WS_Alloc(wrk->aws, sz); // No use to attempt graceful failure, all VCL calls will fail AN(p); vbm = vbit_init(p, sz); ctx.called = vbm; } aws = WS_Snapshot(wrk->aws); wrk->cur_method = method; wrk->seen_methods |= method; AN(ctx.vsl); VSLbs(ctx.vsl, SLT_VCL_call, TOSTRAND(VCL_Method_Name(method))); func(&ctx, VSUB_STATIC, NULL); VSLbs(ctx.vsl, SLT_VCL_return, TOSTRAND(VCL_Return_Name(wrk->vpi->handling))); wrk->cur_method |= 1; // Magic marker if (wrk->vpi->handling == VCL_RET_FAIL) wrk->stats->vcl_fail++; /* * VCL/Vmods are not allowed to make permanent allocations from * wrk->aws, but they can reserve and return from it. */ assert(aws == WS_Snapshot(wrk->aws)); if (rws != 0) WS_Reset(wrk->aws, rws); } #define VCL_MET_MAC(func, upper, typ, bitmap) \ void \ VCL_##func##_method(struct vcl *vcl, struct worker *wrk, \ struct req *req, struct busyobj *bo, void *specific) \ { \ \ CHECK_OBJ_NOTNULL(vcl, VCL_MAGIC); \ CHECK_OBJ_NOTNULL(vcl->conf, VCL_CONF_MAGIC); \ CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); \ vcl_call_method(wrk, req, bo, specific, \ VCL_MET_ ## upper, vcl->conf->func##_func, vcl->conf->nsub);\ AN((1U << wrk->vpi->handling) & bitmap); \ } #include "tbl/vcl_returns.h" /*-------------------------------------------------------------------- */ VCL_STRING VRT_check_call(VRT_CTX, VCL_SUB sub) { VCL_STRING err = NULL; enum vcl_func_fail_e fail; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(sub, VCL_SUB_MAGIC); AN(sub->func); sub->func(ctx, VSUB_CHECK, &fail); switch (fail) { case VSUB_E_OK: break; case VSUB_E_METHOD: err = WS_Printf(ctx->ws, "Dynamic call to \"sub %s{}\"" " not allowed from here", sub->name); if (err == NULL) err = "Dynamic call not allowed and workspace overflow"; break; case VSUB_E_RECURSE: err = WS_Printf(ctx->ws, "Recursive dynamic call to" " \"sub %s{}\"", sub->name); if (err == NULL) err = "Recursive dynamic call and workspace overflow"; break; default: INCOMPL(); } return (err); } VCL_VOID VRT_call(VRT_CTX, VCL_SUB sub) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(sub, VCL_SUB_MAGIC); AZ(VRT_handled(ctx)); AN(sub->func); sub->func(ctx, VSUB_DYNAMIC, NULL); } varnish-7.5.0/bin/varnishd/cache/cache_vrt_vmod.c000066400000000000000000000130141457605730600217710ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Runtime support for compiled VCL programs */ #include "config.h" #include "cache_varnishd.h" #include #include #include #include "vcli_serve.h" #include "vcc_interface.h" #include "vmod_abi.h" /*-------------------------------------------------------------------- * Modules stuff */ struct vmod { unsigned magic; #define VMOD_MAGIC 0xb750219c VTAILQ_ENTRY(vmod) list; int ref; char *nm; unsigned nbr; char *path; char *backup; void *hdl; const void *funcs; int funclen; const char *abi; unsigned vrt_major; unsigned vrt_minor; }; static VTAILQ_HEAD(,vmod) vmods = VTAILQ_HEAD_INITIALIZER(vmods); static unsigned vmod_abi_mismatch(const struct vmod_data *d) { if (d->vrt_major == 0 && d->vrt_minor == 0) return (d->abi == NULL || strcmp(d->abi, VMOD_ABI_Version)); return (d->vrt_major != VRT_MAJOR_VERSION || d->vrt_minor > VRT_MINOR_VERSION); } int VPI_Vmod_Init(VRT_CTX, struct vmod **hdl, unsigned nbr, void *ptr, int len, const char *nm, const char *path, const char *file_id, const char *backup) { struct vmod *v; const struct vmod_data *d; char buf[256]; void *dlhdl; ASSERT_CLI(); CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); AN(ctx->msg); AN(hdl); AZ(*hdl); dlhdl = dlopen(backup, RTLD_NOW | RTLD_LOCAL); if (dlhdl == NULL) { VSB_printf(ctx->msg, "Loading vmod %s from %s (%s):\n", nm, backup, path); VSB_printf(ctx->msg, "dlopen() failed: %s\n", dlerror()); return (1); } VTAILQ_FOREACH(v, &vmods, list) if (v->hdl == dlhdl) break; if (v == NULL) { ALLOC_OBJ(v, VMOD_MAGIC); AN(v); REPLACE(v->backup, backup); v->hdl = dlhdl; bprintf(buf, "Vmod_%s_Data", nm); d = dlsym(v->hdl, buf); if (d == NULL || d->file_id == NULL || strcmp(d->file_id, file_id)) { VSB_printf(ctx->msg, "Loading vmod %s from %s (%s):\n", nm, backup, path); VSB_cat(ctx->msg, "This is no longer the same file seen by" " the VCL-compiler.\n"); (void)dlclose(v->hdl); FREE_OBJ(v); return (1); } if (vmod_abi_mismatch(d) || d->name == NULL || strcmp(d->name, nm) || d->func == NULL || d->func_len <= 0 || d->proto != NULL || d->json == NULL) { VSB_printf(ctx->msg, "Loading vmod %s from %s (%s):\n", nm, backup, path); VSB_cat(ctx->msg, "VMOD data is mangled.\n"); (void)dlclose(v->hdl); FREE_OBJ(v); return (1); } v->nbr = nbr; v->funclen = d->func_len; v->funcs = d->func; v->abi = d->abi; v->vrt_major = d->vrt_major; v->vrt_minor = d->vrt_minor; REPLACE(v->nm, nm); REPLACE(v->path, path); VSC_C_main->vmods++; VTAILQ_INSERT_TAIL(&vmods, v, list); } assert(len == v->funclen); 
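	/* Copy the vmod's function table into the caller-provided buffer
	 * and take another reference on the loaded vmod. */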
memcpy(ptr, v->funcs, v->funclen); v->ref++; *hdl = v; return (0); } void VPI_Vmod_Unload(VRT_CTX, struct vmod **hdl) { struct vmod *v; ASSERT_CLI(); TAKE_OBJ_NOTNULL(v, hdl, VMOD_MAGIC); VCL_TaskLeave(ctx, cli_task_privs); VCL_TaskEnter(cli_task_privs); #ifndef DONT_DLCLOSE_VMODS /* * atexit(3) handlers are not called during dlclose(3). We don't * normally use them, but we do when running GCOV. This option * enables us to do that. */ AZ(dlclose(v->hdl)); #endif if (--v->ref != 0) return; free(v->nm); free(v->path); free(v->backup); VTAILQ_REMOVE(&vmods, v, list); VSC_C_main->vmods--; FREE_OBJ(v); } void VMOD_Panic(struct vsb *vsb) { struct vmod *v; VSB_cat(vsb, "vmods = {\n"); VSB_indent(vsb, 2); VTAILQ_FOREACH(v, &vmods, list) VSB_printf(vsb, "%s = {%p, %s, %u.%u},\n", v->nm, v, v->abi, v->vrt_major, v->vrt_minor); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } /*---------------------------------------------------------------------*/ static void v_matchproto_(cli_func_t) ccf_debug_vmod(struct cli *cli, const char * const *av, void *priv) { struct vmod *v; (void)av; (void)priv; ASSERT_CLI(); VTAILQ_FOREACH(v, &vmods, list) VCLI_Out(cli, "%5d %s (%s)\n", v->ref, v->nm, v->path); } static struct cli_proto vcl_cmds[] = { { CLICMD_DEBUG_VMOD, "d", ccf_debug_vmod }, { NULL } }; void VMOD_Init(void) { CLI_AddFuncs(vcl_cmds); } varnish-7.5.0/bin/varnishd/cache/cache_wrk.c000066400000000000000000000465571457605730600207560ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Worker thread stuff unrelated to the worker thread pools. * * -- * signaling_note: * * note on worker wakeup signaling through the wrk condition variable (cv) * * In the general case, a cv needs to be signaled while holding the * corresponding mutex, otherwise the signal may be posted before the waiting * thread could register itself on the cv and, consequently, the signal may be * missed. * * In our case, any worker thread which we wake up comes from the idle queue, * where it put itself under the mutex, releasing that mutex implicitly via * Lck_CondWaitUntil() (which calls some variant of pthread_cond_wait). 
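 * In other words, the waiter is guaranteed to already be registered on the
 * cv by the time the signaling side can acquire the mutex.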
So we avoid * additional mutex contention knowing that any worker thread on the idle queue * is blocking on the cv. * * Except -- when it isn't, because it woke up for releasing its VCL * Reference. To account for this case, we check if the task function has been * set in the meantime, which in turn requires all of the task preparation to be * done holding the pool mutex. (see also #2719) */ #include "config.h" #include #include #include "cache_varnishd.h" #include "cache_pool.h" #include "vcli_serve.h" #include "vtim.h" #include "hash/hash_slinger.h" static void Pool_Work_Thread(struct pool *pp, struct worker *wrk); static uintmax_t reqpoolfail; /*-------------------------------------------------------------------- * Create and start a back-ground thread which as its own worker and * session data structures; */ struct bgthread { unsigned magic; #define BGTHREAD_MAGIC 0x23b5152b const char *name; bgthread_t *func; void *priv; }; static void * wrk_bgthread(void *arg) { struct bgthread *bt; struct worker wrk; struct worker_priv wpriv[1]; struct VSC_main_wrk ds; void *r; CAST_OBJ_NOTNULL(bt, arg, BGTHREAD_MAGIC); THR_SetName(bt->name); THR_Init(); INIT_OBJ(&wrk, WORKER_MAGIC); INIT_OBJ(wpriv, WORKER_PRIV_MAGIC); wrk.wpriv = wpriv; // bgthreads do not have a vpi member memset(&ds, 0, sizeof ds); wrk.stats = &ds; r = bt->func(&wrk, bt->priv); HSH_Cleanup(&wrk); Pool_Sumstat(&wrk); return (r); } void WRK_BgThread(pthread_t *thr, const char *name, bgthread_t *func, void *priv) { struct bgthread *bt; ALLOC_OBJ(bt, BGTHREAD_MAGIC); AN(bt); bt->name = name; bt->func = func; bt->priv = priv; PTOK(pthread_create(thr, NULL, wrk_bgthread, bt)); } /*--------------------------------------------------------------------*/ static void WRK_Thread(struct pool *qp, size_t stacksize, unsigned thread_workspace) { // child_signal_handler stack overflow check uses struct worker addr struct worker *w, ww; struct VSC_main_wrk ds; unsigned char ws[thread_workspace]; struct worker_priv wpriv[1]; unsigned char vpi[vpi_wrk_len]; AN(qp); AN(stacksize); AN(thread_workspace); THR_SetName("cache-worker"); w = &ww; INIT_OBJ(w, WORKER_MAGIC); INIT_OBJ(wpriv, WORKER_PRIV_MAGIC); w->wpriv = wpriv; w->lastused = NAN; memset(&ds, 0, sizeof ds); w->stats = &ds; THR_SetWorker(w); PTOK(pthread_cond_init(&w->cond, NULL)); WS_Init(w->aws, "wrk", ws, thread_workspace); VPI_wrk_init(w, vpi, sizeof vpi); AN(w->vpi); VSL(SLT_WorkThread, NO_VXID, "%p start", w); Pool_Work_Thread(qp, w); AZ(w->pool); VSL(SLT_WorkThread, NO_VXID, "%p end", w); if (w->wpriv->vcl != NULL) VCL_Rel(&w->wpriv->vcl); PTOK(pthread_cond_destroy(&w->cond)); HSH_Cleanup(w); Pool_Sumstat(w); } /*-------------------------------------------------------------------- * Summing of stats into pool counters */ static unsigned wrk_addstat(const struct worker *wrk, const struct pool_task *tp, unsigned locked) { struct pool *pp; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); pp = wrk->pool; CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); if (locked) Lck_AssertHeld(&pp->mtx); if ((tp == NULL && wrk->stats->summs > 0) || (wrk->stats->summs >= cache_param->wthread_stats_rate)) { if (!locked) Lck_Lock(&pp->mtx); pp->a_stat->summs++; VSC_main_Summ_wrk_wrk(pp->a_stat, wrk->stats); memset(wrk->stats, 0, sizeof *wrk->stats); if (!locked) Lck_Unlock(&pp->mtx); } return (tp != NULL); } void WRK_AddStat(const struct worker *wrk) { (void)wrk_addstat(wrk, wrk->task, 0); wrk->stats->summs++; } /*-------------------------------------------------------------------- * Pool reserve calculation */ static unsigned 
pool_reserve(void) { unsigned lim; if (cache_param->wthread_reserve == 0) { lim = cache_param->wthread_min / 20 + 1; } else { lim = cache_param->wthread_min * 950 / 1000; if (cache_param->wthread_reserve < lim) lim = cache_param->wthread_reserve; } if (lim < TASK_QUEUE_RESERVE) return (TASK_QUEUE_RESERVE); return (lim); } /*--------------------------------------------------------------------*/ static struct worker * pool_getidleworker(struct pool *pp, enum task_prio prio) { struct pool_task *pt = NULL; struct worker *wrk; CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); Lck_AssertHeld(&pp->mtx); if (pp->nidle > (pool_reserve() * prio / TASK_QUEUE_RESERVE)) { pt = VTAILQ_FIRST(&pp->idle_queue); if (pt == NULL) AZ(pp->nidle); } if (pt == NULL) return (NULL); AZ(pt->func); CAST_OBJ_NOTNULL(wrk, pt->priv, WORKER_MAGIC); AN(pp->nidle); VTAILQ_REMOVE(&pp->idle_queue, wrk->task, list); pp->nidle--; return (wrk); } /*-------------------------------------------------------------------- * Special scheduling: If no thread can be found, the current thread * will be prepared for rescheduling instead. * The selected threads workspace is reserved and the argument put there. * Return one if another thread was scheduled, otherwise zero. */ int Pool_Task_Arg(struct worker *wrk, enum task_prio prio, task_func_t *func, const void *arg, size_t arg_len) { struct pool *pp; struct worker *wrk2; int retval; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AN(arg); AN(arg_len); pp = wrk->pool; CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); Lck_Lock(&pp->mtx); wrk2 = pool_getidleworker(pp, prio); if (wrk2 != NULL) retval = 1; else { wrk2 = wrk; retval = 0; } AZ(wrk2->task->func); assert(arg_len <= WS_ReserveSize(wrk2->aws, arg_len)); memcpy(WS_Reservation(wrk2->aws), arg, arg_len); wrk2->task->func = func; wrk2->task->priv = WS_Reservation(wrk2->aws); Lck_Unlock(&pp->mtx); // see signaling_note at the top for explanation if (retval) PTOK(pthread_cond_signal(&wrk2->cond)); return (retval); } /*-------------------------------------------------------------------- * Enter a new task to be done */ int Pool_Task(struct pool *pp, struct pool_task *task, enum task_prio prio) { struct worker *wrk; int retval = 0; CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); AN(task); AN(task->func); assert(prio < TASK_QUEUE__END); if (prio == TASK_QUEUE_REQ && reqpoolfail) { retval = reqpoolfail & 1; reqpoolfail >>= 1; if (retval) { VSL(SLT_Debug, NO_VXID, "Failing due to reqpoolfail (next= 0x%jx)", reqpoolfail); return (retval); } } Lck_Lock(&pp->mtx); /* The common case first: Take an idle thread, do it. */ wrk = pool_getidleworker(pp, prio); if (wrk != NULL) { AZ(wrk->task->func); wrk->task->func = task->func; wrk->task->priv = task->priv; Lck_Unlock(&pp->mtx); // see signaling_note at the top for explanation PTOK(pthread_cond_signal(&wrk->cond)); return (0); } /* Vital work is always queued. Only priority classes that can * fit under the reserve capacity are eligible to queuing. */ if (prio >= TASK_QUEUE_RESERVE) { retval = -1; } else if (!TASK_QUEUE_LIMITED(prio) || pp->lqueue + pp->nthr < cache_param->wthread_max + cache_param->wthread_queue_limit) { pp->stats->sess_queued++; pp->lqueue++; VTAILQ_INSERT_TAIL(&pp->queues[prio], task, list); PTOK(pthread_cond_signal(&pp->herder_cond)); } else { /* NB: This is counter-intuitive but when we drop a REQ * task, it is an HTTP/1 request and we effectively drop * the whole session. It is otherwise an h2 stream with * STR priority in which case we are dropping a request. 
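	 * Hence the two counters below: sess_dropped counts whole sessions
	 * lost, req_dropped counts individual (h2) requests.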
*/ if (prio == TASK_QUEUE_REQ) pp->stats->sess_dropped++; else pp->stats->req_dropped++; retval = -1; } Lck_Unlock(&pp->mtx); return (retval); } /*-------------------------------------------------------------------- * Empty function used as a pointer value for the thread exit condition. */ static void v_matchproto_(task_func_t) pool_kiss_of_death(struct worker *wrk, void *priv) { (void)wrk; (void)priv; } /*-------------------------------------------------------------------- * This is the work function for worker threads in the pool. */ static void Pool_Work_Thread(struct pool *pp, struct worker *wrk) { struct pool_task *tp; struct pool_task tpx, tps; vtim_real tmo, now; unsigned i, reserve; CHECK_OBJ_NOTNULL(pp, POOL_MAGIC); wrk->pool = pp; while (1) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); tp = NULL; WS_Rollback(wrk->aws, 0); AZ(wrk->vsl); Lck_Lock(&pp->mtx); reserve = pool_reserve(); for (i = 0; i < TASK_QUEUE_RESERVE; i++) { if (pp->nidle < (reserve * i / TASK_QUEUE_RESERVE)) break; tp = VTAILQ_FIRST(&pp->queues[i]); if (tp != NULL) { pp->lqueue--; pp->ndequeued--; VTAILQ_REMOVE(&pp->queues[i], tp, list); break; } } if (wrk_addstat(wrk, tp, 1)) { wrk->stats->summs++; AN(tp); } else if (pp->b_stat != NULL && pp->a_stat->summs) { /* Nothing to do, push pool stats into global pool */ tps.func = pool_stat_summ; tps.priv = pp->a_stat; pp->a_stat = pp->b_stat; pp->b_stat = NULL; tp = &tps; } else { /* Nothing to do: To sleep, perchance to dream ... */ if (isnan(wrk->lastused)) wrk->lastused = VTIM_real(); wrk->task->func = NULL; wrk->task->priv = wrk; VTAILQ_INSERT_HEAD(&pp->idle_queue, wrk->task, list); pp->nidle++; now = wrk->lastused; do { // see signaling_note at the top for explanation if (DO_DEBUG(DBG_VCLREL) && pp->b_stat == NULL && pp->a_stat->summs) /* We've released the VCL, but * there are pool stats not pushed * to the global stats and some * thread is busy pushing * stats. Set a 1 second timeout * so that we'll wake up and get a * chance to push stats. */ tmo = now + 1.; else if (wrk->wpriv->vcl == NULL) tmo = INFINITY; else if (DO_DEBUG(DBG_VTC_MODE)) tmo = now + 1.; else tmo = now + 60.; (void)Lck_CondWaitUntil( &wrk->cond, &pp->mtx, tmo); if (wrk->task->func != NULL) { /* We have been handed a new task */ tpx = *wrk->task; tp = &tpx; wrk->stats->summs++; } else if (pp->b_stat != NULL && pp->a_stat->summs) { /* Woken up to release the VCL, * and noticing that there are * pool stats not pushed to the * global stats and no active * thread currently doing * it. Remove ourself from the * idle queue and take on the * task. */ assert(pp->nidle > 0); VTAILQ_REMOVE(&pp->idle_queue, wrk->task, list); pp->nidle--; tps.func = pool_stat_summ; tps.priv = pp->a_stat; pp->a_stat = pp->b_stat; pp->b_stat = NULL; tp = &tps; } else { // Presumably ETIMEDOUT but we do not // assert this because pthread condvars // are not airtight. if (wrk->wpriv->vcl) VCL_Rel(&wrk->wpriv->vcl); now = VTIM_real(); } } while (tp == NULL); } Lck_Unlock(&pp->mtx); if (tp->func == pool_kiss_of_death) break; do { memset(wrk->task, 0, sizeof wrk->task); assert(wrk->pool == pp); AN(tp->func); tp->func(wrk, tp->priv); if (DO_DEBUG(DBG_VCLREL) && wrk->wpriv->vcl != NULL) VCL_Rel(&wrk->wpriv->vcl); tpx = *wrk->task; tp = &tpx; } while (tp->func != NULL); if (WS_Overflowed(wrk->aws)) wrk->stats->ws_thread_overflow++; /* cleanup for next task */ wrk->seen_methods = 0; } wrk->pool = NULL; } /*-------------------------------------------------------------------- * Create another worker thread. 
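 *
 * pool_breed() packs the owning pool and the effective stack size into a
 * struct pool_info and hands it to the new thread, which unpacks it in
 * pool_thread() and enters WRK_Thread().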
*/ struct pool_info { unsigned magic; #define POOL_INFO_MAGIC 0x4e4442d3 size_t stacksize; struct pool *qp; }; static void * pool_thread(void *priv) { struct pool_info *pi; CAST_OBJ_NOTNULL(pi, priv, POOL_INFO_MAGIC); THR_Init(); WRK_Thread(pi->qp, pi->stacksize, cache_param->workspace_thread); FREE_OBJ(pi); return (NULL); } static void pool_breed(struct pool *qp) { pthread_t tp; pthread_attr_t tp_attr; struct pool_info *pi; PTOK(pthread_attr_init(&tp_attr)); PTOK(pthread_attr_setdetachstate(&tp_attr, PTHREAD_CREATE_DETACHED)); /* Set the stacksize for worker threads we create */ if (cache_param->wthread_stacksize != UINT_MAX) PTOK(pthread_attr_setstacksize(&tp_attr, cache_param->wthread_stacksize)); ALLOC_OBJ(pi, POOL_INFO_MAGIC); AN(pi); PTOK(pthread_attr_getstacksize(&tp_attr, &pi->stacksize)); pi->qp = qp; errno = pthread_create(&tp, &tp_attr, pool_thread, pi); if (errno) { FREE_OBJ(pi); VSL(SLT_Debug, NO_VXID, "Create worker thread failed %d %s", errno, VAS_errtxt(errno)); Lck_Lock(&pool_mtx); VSC_C_main->threads_failed++; Lck_Unlock(&pool_mtx); VTIM_sleep(cache_param->wthread_fail_delay); } else { qp->nthr++; Lck_Lock(&pool_mtx); VSC_C_main->threads++; VSC_C_main->threads_created++; Lck_Unlock(&pool_mtx); if (cache_param->wthread_add_delay > 0.0) VTIM_sleep(cache_param->wthread_add_delay); else (void)sched_yield(); } PTOK(pthread_attr_destroy(&tp_attr)); } /*-------------------------------------------------------------------- * Herd a single pool * * This thread wakes up every thread_pool_timeout seconds, whenever a pool * queues and when threads need to be destroyed * * The trick here is to not be too aggressive about creating threads. In * pool_breed(), we sleep whenever we create a thread and a little while longer * whenever we fail to, hopefully missing a lot of cond_signals in the meantime. * * Idle threads are destroyed at a rate determined by wthread_destroy_delay * * XXX: probably need a lot more work. * */ void* pool_herder(void *priv) { struct pool *pp; struct pool_task *pt; double t_idle; struct worker *wrk; double delay; unsigned wthread_min; uintmax_t dq = (1ULL << 31); vtim_mono dqt = 0; int r = 0; CAST_OBJ_NOTNULL(pp, priv, POOL_MAGIC); THR_SetName("pool_herder"); THR_Init(); while (!pp->die || pp->nthr > 0) { /* * If the worker pool is configured too small, we can * end up deadlocking it (see #2418 for details). * * Recovering from this would require a lot of complicated * code, and fundamentally, either people configured their * pools wrong, in which case we want them to notice, or * they are under DoS, in which case recovering gracefully * is unlikely be a major improvement. * * Instead we implement a watchdog and kill the worker if * nothing has been dequeued for that long. 
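		 * (The watchdog only arms while the highest priority queue is
		 * non-empty; dqt records the last time ndequeued moved, and
		 * WRONG() panics the child once the queue has been stuck for
		 * longer than the thread_pool_watchdog parameter.)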
*/ if (VTAILQ_EMPTY(&pp->queues[TASK_QUEUE_HIGHEST_PRIORITY])) { /* Watchdog only applies to no movement on the * highest priority queue (TASK_QUEUE_BO) */ dq = pp->ndequeued + 1; } else if (dq != pp->ndequeued) { dq = pp->ndequeued; dqt = VTIM_mono(); } else if (VTIM_mono() - dqt > cache_param->wthread_watchdog) { VSL(SLT_Error, NO_VXID, "Pool Herder: Queue does not move ql=%u dt=%f", pp->lqueue, VTIM_mono() - dqt); WRONG("Worker Pool Queue does not move" " - see thread_pool_watchdog parameter"); } wthread_min = cache_param->wthread_min; if (pp->die) wthread_min = 0; /* Make more threads if needed and allowed */ if (pp->nthr < wthread_min || (pp->lqueue > 0 && pp->nthr < cache_param->wthread_max)) { pool_breed(pp); continue; } delay = cache_param->wthread_timeout; assert(pp->nthr >= wthread_min); if (pp->nthr > wthread_min) { t_idle = VTIM_real() - cache_param->wthread_timeout; Lck_Lock(&pp->mtx); wrk = NULL; pt = VTAILQ_LAST(&pp->idle_queue, taskhead); if (pt != NULL) { AN(pp->nidle); AZ(pt->func); CAST_OBJ_NOTNULL(wrk, pt->priv, WORKER_MAGIC); if (pp->die || wrk->lastused < t_idle || pp->nthr > cache_param->wthread_max) { /* Give it a kiss on the cheek... */ VTAILQ_REMOVE(&pp->idle_queue, wrk->task, list); pp->nidle--; wrk->task->func = pool_kiss_of_death; PTOK(pthread_cond_signal(&wrk->cond)); } else { delay = wrk->lastused - t_idle; wrk = NULL; } } Lck_Unlock(&pp->mtx); if (wrk != NULL) { pp->nthr--; Lck_Lock(&pool_mtx); VSC_C_main->threads--; VSC_C_main->threads_destroyed++; Lck_Unlock(&pool_mtx); delay = cache_param->wthread_destroy_delay; } else delay = vmax(delay, cache_param->wthread_destroy_delay); } if (pp->die) { if (delay < 2) delay = .01; else delay = 1; VTIM_sleep(delay); continue; } Lck_Lock(&pp->mtx); if (pp->lqueue == 0) { if (DO_DEBUG(DBG_VTC_MODE)) delay = 0.5; r = Lck_CondWaitTimeout( &pp->herder_cond, &pp->mtx, delay); } else if (pp->nthr >= cache_param->wthread_max) { /* XXX: unsafe counters */ if (r != ETIMEDOUT) VSC_C_main->threads_limited++; r = Lck_CondWaitTimeout( &pp->herder_cond, &pp->mtx, 1.0); } Lck_Unlock(&pp->mtx); } return (NULL); } /*-------------------------------------------------------------------- * Debugging aids */ static void v_matchproto_(cli_func_t) debug_reqpoolfail(struct cli *cli, const char * const *av, void *priv) { uintmax_t u = 1; const char *p; (void)priv; (void)cli; reqpoolfail = 0; for (p = av[2]; *p != '\0'; p++) { if (*p == 'F' || *p == 'f') reqpoolfail |= u; u <<= 1; } } static struct cli_proto debug_cmds[] = { { CLICMD_DEBUG_REQPOOLFAIL, "d", debug_reqpoolfail }, { NULL } }; void WRK_Log(enum VSL_tag_e tag, const char *fmt, ...) { struct worker *wrk; va_list ap; AN(fmt); wrk = THR_GetWorker(); CHECK_OBJ_ORNULL(wrk, WORKER_MAGIC); va_start(ap, fmt); if (wrk != NULL && wrk->vsl != NULL) VSLbv(wrk->vsl, tag, fmt, ap); else VSLv(tag, NO_VXID, fmt, ap); va_end(ap); } /*-------------------------------------------------------------------- * */ void WRK_Init(void) { assert(cache_param->wthread_min >= TASK_QUEUE_RESERVE); CLI_AddFuncs(debug_cmds); } varnish-7.5.0/bin/varnishd/cache/cache_ws.c000066400000000000000000000177161457605730600205770ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2021 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * Author: Dridi Boukelmoune * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include "cache_varnishd.h" #define WS_REDZONE_END '\x15' static const uintptr_t snap_overflowed = (uintptr_t)&snap_overflowed; void WS_Assert(const struct ws *ws) { CHECK_OBJ_NOTNULL(ws, WS_MAGIC); DSLb(DBG_WORKSPACE, "WS(%s, %p) = {%p %zu %zu %zu}", ws->id, ws, ws->s, pdiff(ws->s, ws->f), ws->r == NULL ? 0 : pdiff(ws->f, ws->r), pdiff(ws->s, ws->e)); assert(ws->s != NULL); assert(PAOK(ws->s)); assert(ws->e != NULL); assert(PAOK(ws->e)); assert(ws->s < ws->e); assert(ws->f >= ws->s); assert(ws->f <= ws->e); assert(PAOK(ws->f)); if (ws->r) { assert(ws->r >= ws->f); assert(ws->r <= ws->e); } assert(*ws->e == WS_REDZONE_END); } int WS_Allocated(const struct ws *ws, const void *ptr, ssize_t len) { const char *p = ptr; WS_Assert(ws); if (len < 0) len = strlen(p) + 1; assert(!(p > ws->f && p <= ws->e)); return (p >= ws->s && (p + len) <= ws->f); } /* * NB: The id must be max 3 char and lower-case. * (upper-case the first char to indicate overflow) */ void WS_Init(struct ws *ws, const char *id, void *space, unsigned len) { unsigned l; DSLb(DBG_WORKSPACE, "WS_Init(%s, %p, %p, %u)", id, ws, space, len); assert(space != NULL); INIT_OBJ(ws, WS_MAGIC); ws->s = space; assert(PAOK(space)); l = PRNDDN(len - 1); ws->e = ws->s + l; memset(ws->e, WS_REDZONE_END, len - l); ws->f = ws->s; assert(id[0] & 0x20); // cheesy islower() bstrcpy(ws->id, id); WS_Assert(ws); } /* * Reset a WS to a cookie from WS_Snapshot * * for use by any code using cache.h * * does not reset the overflow bit and asserts that, if WS_Snapshot had found * the workspace overflown, the marker is intact */ void WS_Reset(struct ws *ws, uintptr_t pp) { char *p; WS_Assert(ws); AN(pp); if (pp == snap_overflowed) { DSLb(DBG_WORKSPACE, "WS_Reset(%s, %p, overflowed)", ws->id, ws); AN(WS_Overflowed(ws)); return; } p = (char *)pp; DSLb(DBG_WORKSPACE, "WS_Reset(%s, %p, %p)", ws->id, ws, p); assert(ws->r == NULL); assert(p >= ws->s); assert(p <= ws->e); ws->f = p; WS_Assert(ws); } /* * Make a reservation and optionally pipeline a memory region that may or * may not originate from the same workspace. 
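 *
 * Hypothetical sketch of a caller (illustration only; the names are made up):
 *
 *	l = WS_ReqPipeline(req->ws, start, end);  // returns 0 if nothing to keep
 *	// The request workspace now starts with l pipelined bytes inside an
 *	// open reservation, to be released later via WS_Release().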
*/ unsigned WS_ReqPipeline(struct ws *ws, const void *b, const void *e) { unsigned r, l; WS_Assert(ws); if (!strcasecmp(ws->id, "req")) WS_Rollback(ws, 0); else AZ(b); r = WS_ReserveAll(ws); if (b == NULL) { AZ(e); return (0); } AN(e); l = pdiff(b, e); assert(l <= r); memmove(ws->f, b, l); return (l); } void * WS_Alloc(struct ws *ws, unsigned bytes) { char *r; WS_Assert(ws); assert(bytes > 0); bytes = PRNDUP(bytes); assert(ws->r == NULL); if (ws->f + bytes > ws->e) { WS_MarkOverflow(ws); return (NULL); } r = ws->f; ws->f += bytes; DSLb(DBG_WORKSPACE, "WS_Alloc(%s, %p, %u) = %p", ws->id, ws, bytes, r); WS_Assert(ws); return (r); } void * WS_Copy(struct ws *ws, const void *str, int len) { char *r; unsigned bytes; WS_Assert(ws); assert(ws->r == NULL); if (len == -1) len = strlen(str) + 1; assert(len > 0); bytes = PRNDUP((unsigned)len); if (ws->f + bytes > ws->e) { WS_MarkOverflow(ws); return (NULL); } r = ws->f; ws->f += bytes; memcpy(r, str, len); DSLb(DBG_WORKSPACE, "WS_Copy(%s, %p, %d) = %p", ws->id, ws, len, r); WS_Assert(ws); return (r); } uintptr_t WS_Snapshot(struct ws *ws) { WS_Assert(ws); assert(ws->r == NULL); if (WS_Overflowed(ws)) { DSLb(DBG_WORKSPACE, "WS_Snapshot(%s, %p) = overflowed", ws->id, ws); return (snap_overflowed); } DSLb(DBG_WORKSPACE, "WS_Snapshot(%s, %p) = %p", ws->id, ws, ws->f); return ((uintptr_t)ws->f); } /* * WS_Release() must be called in all cases */ unsigned WS_ReserveAll(struct ws *ws) { unsigned b; WS_Assert(ws); assert(ws->r == NULL); ws->r = ws->e; b = pdiff(ws->f, ws->r); WS_Assert(ws); DSLb(DBG_WORKSPACE, "WS_ReserveAll(%s, %p) = %u", ws->id, ws, b); return (b); } /* * WS_Release() must be called for retval > 0 only */ unsigned WS_ReserveSize(struct ws *ws, unsigned bytes) { unsigned l; WS_Assert(ws); assert(ws->r == NULL); assert(bytes > 0); l = pdiff(ws->f, ws->e); if (bytes > l) { WS_MarkOverflow(ws); return (0); } ws->r = ws->f + bytes; DSLb(DBG_WORKSPACE, "WS_ReserveSize(%s, %p, %u/%u) = %u", ws->id, ws, bytes, l, bytes); WS_Assert(ws); return (bytes); } void WS_Release(struct ws *ws, unsigned bytes) { WS_Assert(ws); assert(bytes <= ws->e - ws->f); DSLb(DBG_WORKSPACE, "WS_Release(%s, %p, %u)", ws->id, ws, bytes); assert(ws->r != NULL); assert(ws->f + bytes <= ws->r); ws->f += PRNDUP(bytes); ws->r = NULL; WS_Assert(ws); } void WS_ReleaseP(struct ws *ws, const char *ptr) { WS_Assert(ws); DSLb(DBG_WORKSPACE, "WS_ReleaseP(%s, %p, %p (%zd))", ws->id, ws, ptr, ptr - ws->f); assert(ws->r != NULL); assert(ptr >= ws->f); assert(ptr <= ws->r); ws->f += PRNDUP(ptr - ws->f); ws->r = NULL; WS_Assert(ws); } void * WS_AtOffset(const struct ws *ws, unsigned off, unsigned len) { char *ptr; WS_Assert(ws); ptr = ws->s + off; AN(WS_Allocated(ws, ptr, len)); return (ptr); } unsigned WS_ReservationOffset(const struct ws *ws) { AN(ws->r); return (ws->f - ws->s); } /*--------------------------------------------------------------------*/ unsigned WS_Dump(const struct ws *ws, char where, size_t off, void *buf, size_t len) { char *b, *p; size_t l; WS_Assert(ws); AN(buf); AN(len); switch (where) { case 's': p = ws->s; break; case 'f': p = ws->f; break; case 'r': p = ws->r; break; default: errno = EINVAL; return (0); } if (p == NULL) { errno = EAGAIN; return (0); } p += off; if (p >= ws->e) { errno = EFAULT; return (0); } l = pdiff(p, ws->e); if (len <= l) { memcpy(buf, p, len); return (len); } b = buf; memcpy(b, p, l); memset(b + l, WS_REDZONE_END, len - l); return (l); } /*--------------------------------------------------------------------*/ static inline void 
ws_printptr(struct vsb *vsb, const char *s, const char *p) { if (p >= s) VSB_printf(vsb, ", +%ld", (long) (p - s)); else VSB_printf(vsb, ", %p", p); } void WS_Panic(struct vsb *vsb, const struct ws *ws) { if (PAN_dump_struct(vsb, ws, WS_MAGIC, "ws")) return; if (ws->id[0] != '\0' && (!(ws->id[0] & 0x20))) // cheesy islower() VSB_cat(vsb, "OVERFLOWED "); VSB_printf(vsb, "id = \"%s\",\n", ws->id); VSB_printf(vsb, "{s, f, r, e} = {%p", ws->s); ws_printptr(vsb, ws->s, ws->f); ws_printptr(vsb, ws->s, ws->r); ws_printptr(vsb, ws->s, ws->e); VSB_cat(vsb, "},\n"); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } varnish-7.5.0/bin/varnishd/cache/cache_ws_common.c000066400000000000000000000073731457605730600221450ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2021 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * Author: Dridi Boukelmoune * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include #include "cache_varnishd.h" void WS_Id(const struct ws *ws, char *id) { WS_Assert(ws); AN(id); memcpy(id, ws->id, WS_ID_SIZE); id[0] |= 0x20; // cheesy tolower() } void WS_MarkOverflow(struct ws *ws) { CHECK_OBJ_NOTNULL(ws, WS_MAGIC); ws->id[0] &= ~0x20; // cheesy toupper() } static void ws_ClearOverflow(struct ws *ws) { CHECK_OBJ_NOTNULL(ws, WS_MAGIC); ws->id[0] |= 0x20; // cheesy tolower() } int WS_Overflowed(const struct ws *ws) { CHECK_OBJ_NOTNULL(ws, WS_MAGIC); AN(ws->id[0]); if (ws->id[0] & 0x20) // cheesy islower() return (0); return (1); } /* * Reset the WS to a cookie or its start and clears any overflow * * for varnishd internal use only */ void WS_Rollback(struct ws *ws, uintptr_t pp) { WS_Assert(ws); if (pp == 0) pp = (uintptr_t)ws->s; ws_ClearOverflow(ws); WS_Reset(ws, pp); } /*--------------------------------------------------------------------*/ const char * WS_Printf(struct ws *ws, const char *fmt, ...) { unsigned u, v; va_list ap; char *p; u = WS_ReserveAll(ws); p = ws->f; va_start(ap, fmt); v = vsnprintf(p, u, fmt, ap); va_end(ap); if (v >= u) { WS_Release(ws, 0); WS_MarkOverflow(ws); p = NULL; } else { WS_Release(ws, v + 1); } return (p); } /*--------------------------------------------------------------------- * Build a VSB on a workspace. 
* Usage pattern: * * struct vsb vsb[1]; * char *p; * * WS_VSB_new(vsb, ctx->ws); * VSB_printf(vsb, "blablabla"); * p = WS_VSB_finish(vsb, ctx->ws, NULL); * if (p == NULL) * return (FAILURE); */ void WS_VSB_new(struct vsb *vsb, struct ws *ws) { unsigned u; static char bogus[2]; // Smallest possible vsb AN(vsb); WS_Assert(ws); u = WS_ReserveAll(ws); if (WS_Overflowed(ws) || u < 2) AN(VSB_init(vsb, bogus, sizeof bogus)); else AN(VSB_init(vsb, WS_Reservation(ws), u)); } char * WS_VSB_finish(struct vsb *vsb, struct ws *ws, size_t *szp) { char *p; AN(vsb); WS_Assert(ws); if (!VSB_finish(vsb)) { p = VSB_data(vsb); if (p == ws->f) { WS_Release(ws, VSB_len(vsb) + 1); if (szp != NULL) *szp = VSB_len(vsb); VSB_fini(vsb); return (p); } } WS_MarkOverflow(ws); VSB_fini(vsb); WS_Release(ws, 0); if (szp) *szp = 0; return (NULL); } varnish-7.5.0/bin/varnishd/cache/cache_ws_emu.c000066400000000000000000000274721457605730600214450ustar00rootroot00000000000000/*- * Copyright (c) 2021 Varnish Software AS * All rights reserved. * * Author: Dridi Boukelmoune * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* */ #include "config.h" #ifdef ENABLE_WORKSPACE_EMULATOR #if HAVE_SANITIZER_ASAN_INTERFACE_H # include #endif #include "cache_varnishd.h" #include struct ws_alloc { unsigned magic; #define WS_ALLOC_MAGIC 0x22e7fd05 unsigned off; unsigned len; char *ptr; VTAILQ_ENTRY(ws_alloc) list; }; VTAILQ_HEAD(ws_alloc_head, ws_alloc); struct ws_emu { unsigned magic; #define WS_EMU_MAGIC 0x1c89b6ab unsigned len; struct ws *ws; struct ws_alloc_head head; }; static const uintptr_t snap_overflowed = (uintptr_t)&snap_overflowed; static struct ws_emu * ws_emu(const struct ws *ws) { struct ws_emu *we; CAST_OBJ_NOTNULL(we, (void *)ws->s, WS_EMU_MAGIC); return (we); } void WS_Assert(const struct ws *ws) { struct ws_emu *we; struct ws_alloc *wa, *wa2 = NULL; size_t len; CHECK_OBJ_NOTNULL(ws, WS_MAGIC); assert(ws->s != NULL); assert(PAOK(ws->s)); assert(ws->e != NULL); assert(PAOK(ws->e)); we = ws_emu(ws); len = pdiff(ws->s, ws->e); assert(len == we->len); len = 0; VTAILQ_FOREACH(wa, &we->head, list) { CHECK_OBJ_NOTNULL(wa, WS_ALLOC_MAGIC); wa2 = wa; assert(len == wa->off); if (wa->ptr == ws->f || wa->ptr == NULL) /* reservation */ break; AN(wa->len); len += PRNDUP(wa->len); assert(len <= we->len); } if (wa != NULL) { AZ(VTAILQ_NEXT(wa, list)); if (wa->ptr == NULL) { AZ(wa->len); assert(ws->f == ws->e); assert(ws->r == ws->e); } else { AN(wa->len); assert(ws->f == wa->ptr); assert(ws->r == ws->f + wa->len); } len += PRNDUP(wa->len); assert(len <= we->len); } else { AZ(ws->f); AZ(ws->r); } DSLb(DBG_WORKSPACE, "WS(%p) = (%s, %p %zu %zu %zu)", ws, ws->id, ws->s, wa2 == NULL ? 0 : wa2->off + PRNDUP(wa2->len), ws->r == NULL ? 0 : pdiff(ws->f, ws->r), pdiff(ws->s, ws->e)); } int WS_Allocated(const struct ws *ws, const void *ptr, ssize_t len) { struct ws_emu *we; struct ws_alloc *wa; uintptr_t p, pa; WS_Assert(ws); AN(ptr); if (len < 0) len = strlen(ptr) + 1; p = (uintptr_t)ptr; we = ws_emu(ws); VTAILQ_FOREACH(wa, &we->head, list) { pa = (uintptr_t)wa->ptr; if (p >= (uintptr_t)ws->f && p <= (uintptr_t)ws->r) return (1); /* XXX: clang 12's ubsan triggers a pointer overflow on * the if statement below. Since the purpose is to check * that a pointer+length is within bounds of another * pointer+length it's unclear whether a pointer overflow * is relevant. Worked around for now with uintptr_t. 
*/ if (p >= pa && p + len <= pa + wa->len) return (1); } return (0); } void WS_Init(struct ws *ws, const char *id, void *space, unsigned len) { struct ws_emu *we; DSLb(DBG_WORKSPACE, "WS_Init(%p, \"%s\", %p, %u)", ws, id, space, len); assert(space != NULL); assert(PAOK(space)); assert(len >= sizeof *we); len = PRNDDN(len - 1); INIT_OBJ(ws, WS_MAGIC); ws->s = space; ws->e = ws->s + len; assert(id[0] & 0x20); // cheesy islower() bstrcpy(ws->id, id); we = space; INIT_OBJ(we, WS_EMU_MAGIC); VTAILQ_INIT(&we->head); we->len = len; WS_Assert(ws); } static void ws_alloc_free(struct ws_emu *we, struct ws_alloc **wap) { struct ws_alloc *wa; TAKE_OBJ_NOTNULL(wa, wap, WS_ALLOC_MAGIC); AZ(VTAILQ_NEXT(wa, list)); VTAILQ_REMOVE(&we->head, wa, list); free(wa->ptr); FREE_OBJ(wa); } void WS_Reset(struct ws *ws, uintptr_t pp) { struct ws_emu *we; struct ws_alloc *wa; char *p; WS_Assert(ws); AN(pp); if (pp == snap_overflowed) { DSLb(DBG_WORKSPACE, "WS_Reset(%p, overflowed)", ws); AN(WS_Overflowed(ws)); return; } p = (char *)pp; DSLb(DBG_WORKSPACE, "WS_Reset(%p, %p)", ws, p); AZ(ws->r); we = ws_emu(ws); while ((wa = VTAILQ_LAST(&we->head, ws_alloc_head)) != NULL && wa->ptr != p) ws_alloc_free(we, &wa); if (wa == NULL) assert(p == ws->s); WS_Assert(ws); } unsigned WS_ReqPipeline(struct ws *ws, const void *b, const void *e) { struct ws_emu *we; struct ws_alloc *wa; unsigned l; WS_Assert(ws); AZ(ws->f); AZ(ws->r); if (strcasecmp(ws->id, "req")) AZ(b); if (b == NULL) { AZ(e); if (!strcasecmp(ws->id, "req")) WS_Rollback(ws, 0); (void)WS_ReserveAll(ws); return (0); } we = ws_emu(ws); ALLOC_OBJ(wa, WS_ALLOC_MAGIC); AN(wa); wa->len = we->len; wa->ptr = malloc(wa->len); AN(wa->ptr); AN(e); l = pdiff(b, e); assert(l <= wa->len); memcpy(wa->ptr, b, l); WS_Rollback(ws, 0); ws->f = wa->ptr; ws->r = ws->f + wa->len; VTAILQ_INSERT_TAIL(&we->head, wa, list); WS_Assert(ws); return (l); } static struct ws_alloc * ws_emu_alloc(struct ws *ws, unsigned len) { struct ws_emu *we; struct ws_alloc *wa; size_t off = 0; WS_Assert(ws); AZ(ws->r); we = ws_emu(ws); wa = VTAILQ_LAST(&we->head, ws_alloc_head); CHECK_OBJ_ORNULL(wa, WS_ALLOC_MAGIC); if (wa != NULL) off = wa->off + PRNDUP(wa->len); if (off + len > we->len) { WS_MarkOverflow(ws); return (NULL); } if (len == 0) len = we->len - off; ALLOC_OBJ(wa, WS_ALLOC_MAGIC); AN(wa); wa->off = off; wa->len = len; if (len > 0) { wa->ptr = malloc(len); AN(wa->ptr); } VTAILQ_INSERT_TAIL(&we->head, wa, list); return (wa); } void * WS_Alloc(struct ws *ws, unsigned bytes) { struct ws_alloc *wa; assert(bytes > 0); wa = ws_emu_alloc(ws, bytes); WS_Assert(ws); if (wa != NULL) { AN(wa->ptr); DSLb(DBG_WORKSPACE, "WS_Alloc(%p, %u) = %p", ws, bytes, wa->ptr); return (wa->ptr); } return (NULL); } void * WS_Copy(struct ws *ws, const void *str, int len) { struct ws_alloc *wa; AN(str); if (len == -1) len = strlen(str) + 1; assert(len > 0); wa = ws_emu_alloc(ws, len); WS_Assert(ws); if (wa != NULL) { AN(wa->ptr); memcpy(wa->ptr, str, len); DSLb(DBG_WORKSPACE, "WS_Copy(%p, %d) = %p", ws, len, wa->ptr); return (wa->ptr); } return (NULL); } uintptr_t WS_Snapshot(struct ws *ws) { struct ws_emu *we; struct ws_alloc *wa; void *p; WS_Assert(ws); assert(ws->r == NULL); if (WS_Overflowed(ws)) { DSLb(DBG_WORKSPACE, "WS_Snapshot(%p) = overflowed", ws); return (snap_overflowed); } we = ws_emu(ws); wa = VTAILQ_LAST(&we->head, ws_alloc_head); CHECK_OBJ_ORNULL(wa, WS_ALLOC_MAGIC); p = (wa == NULL ? 
ws->s : wa->ptr); DSLb(DBG_WORKSPACE, "WS_Snapshot(%p) = %p", ws, p); return ((uintptr_t)p); } unsigned WS_ReserveAll(struct ws *ws) { struct ws_alloc *wa; unsigned b; wa = ws_emu_alloc(ws, 0); AN(wa); if (wa->ptr != NULL) { AN(wa->len); ws->f = wa->ptr; ws->r = ws->f + wa->len; } else { ws->f = ws->r = ws->e; } b = pdiff(ws->f, ws->r); DSLb(DBG_WORKSPACE, "WS_ReserveAll(%p) = %u", ws, b); WS_Assert(ws); return (b); } unsigned WS_ReserveSize(struct ws *ws, unsigned bytes) { struct ws_emu *we; struct ws_alloc *wa; assert(bytes > 0); wa = ws_emu_alloc(ws, bytes); if (wa == NULL) return (0); AN(wa->ptr); assert(wa->len == bytes); ws->f = wa->ptr; ws->r = ws->f + bytes; we = ws_emu(ws); DSLb(DBG_WORKSPACE, "WS_ReserveSize(%p, %u/%u) = %u", ws, bytes, we->len - wa->off, bytes); WS_Assert(ws); return (bytes); } static void ws_release(struct ws *ws, unsigned bytes) { struct ws_emu *we; struct ws_alloc *wa; WS_Assert(ws); AN(ws->f); AN(ws->r); we = ws_emu(ws); wa = VTAILQ_LAST(&we->head, ws_alloc_head); AN(wa); assert(bytes <= wa->len); ws->f = ws->r = NULL; if (bytes == 0) { ws_alloc_free(we, &wa); return; } AN(wa->ptr); #ifdef ASAN_POISON_MEMORY_REGION ASAN_POISON_MEMORY_REGION(wa->ptr + bytes, wa->len - bytes); #endif wa->len = bytes; WS_Assert(ws); } void WS_Release(struct ws *ws, unsigned bytes) { ws_release(ws, bytes); DSLb(DBG_WORKSPACE, "WS_Release(%p, %u)", ws, bytes); } void WS_ReleaseP(struct ws *ws, const char *ptr) { unsigned l; WS_Assert(ws); assert(ws->r != NULL); assert(ptr >= ws->f); assert(ptr <= ws->r); l = pdiff(ws->f, ptr); ws_release(ws, l); DSLb(DBG_WORKSPACE, "WS_ReleaseP(%p, %p (%u))", ws, ptr, l); } void * WS_AtOffset(const struct ws *ws, unsigned off, unsigned len) { struct ws_emu *we; struct ws_alloc *wa; WS_Assert(ws); we = ws_emu(ws); VTAILQ_FOREACH(wa, &we->head, list) { if (wa->off == off) { assert(wa->len >= len); return (wa->ptr); } } WRONG("invalid offset"); NEEDLESS(return (NULL)); } unsigned WS_ReservationOffset(const struct ws *ws) { struct ws_emu *we; struct ws_alloc *wa; WS_Assert(ws); AN(ws->f); AN(ws->r); we = ws_emu(ws); wa = VTAILQ_LAST(&we->head, ws_alloc_head); AN(wa); return (wa->off); } unsigned WS_Dump(const struct ws *ws, char where, size_t off, void *buf, size_t len) { struct ws_emu *we; struct ws_alloc *wa; unsigned l; char *b; WS_Assert(ws); AN(buf); AN(len); if (strchr("sfr", where) == NULL) { errno = EINVAL; return (0); } if (where == 'r' && ws->r == NULL) { errno = EAGAIN; return (0); } we = ws_emu(ws); wa = VTAILQ_LAST(&we->head, ws_alloc_head); l = we->len; if (where != 's' && wa != NULL) { l -= wa->off; if (where == 'f') l -= wa->len; } if (off > l) { errno = EFAULT; return (0); } b = buf; if (where == 'f' && ws->r != NULL) { if (l > len) l = len; AN(wa); memcpy(b, wa->ptr, l); b += l; len -= l; } if (where == 's') { VTAILQ_FOREACH(wa, &we->head, list) { if (len == 0) break; if (wa->ptr == NULL) break; l = vmin_t(size_t, wa->len, len); memcpy(b, wa->ptr, l); b += l; len -= l; } } if (len > 0) memset(b, 0xa5, len); return (l); } static void ws_emu_panic(struct vsb *vsb, const struct ws *ws) { const struct ws_emu *we; const struct ws_alloc *wa; we = (void *)ws->s; if (PAN_dump_once(vsb, we, WS_EMU_MAGIC, "ws_emu")) return; VSB_printf(vsb, "len = %u,\n", we->len); VTAILQ_FOREACH(wa, &we->head, list) { if (PAN_dump_once_oneline(vsb, wa, WS_ALLOC_MAGIC, "ws_alloc")) break; VSB_printf(vsb, "off, len, ptr} = {%u, %u, %p}\n", wa->off, wa->len, wa->ptr); } VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } void WS_Panic(struct vsb *vsb, const struct ws 
*ws) { if (PAN_dump_struct(vsb, ws, WS_MAGIC, "ws")) return; if (ws->id[0] != '\0' && (!(ws->id[0] & 0x20))) // cheesy islower() VSB_cat(vsb, "OVERFLOWED "); VSB_printf(vsb, "id = \"%s\",\n", ws->id); VSB_printf(vsb, "{s, e} = {%p", ws->s); if (ws->e >= ws->s) VSB_printf(vsb, ", +%ld", (long) (ws->e - ws->s)); else VSB_printf(vsb, ", %p", ws->e); VSB_cat(vsb, "},\n"); VSB_printf(vsb, "{f, r} = {%p", ws->f); if (ws->r >= ws->f) VSB_printf(vsb, ", +%ld", (long) (ws->r - ws->f)); else VSB_printf(vsb, ", %p", ws->r); VSB_cat(vsb, "},\n"); ws_emu_panic(vsb, ws); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } #endif /* ENABLE_WORKSPACE_EMULATOR */ varnish-7.5.0/bin/varnishd/common/000077500000000000000000000000001457605730600170705ustar00rootroot00000000000000varnish-7.5.0/bin/varnishd/common/common_param.h000066400000000000000000000103001457605730600217030ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * This file contains the heritage passed when mgt forks cache */ #ifdef COMMON_COMMON_PARAM_H #error "Multiple includes of common/common_param.h" #endif #define COMMON_COMMON_PARAM_H #include "vre.h" #define VSM_CLASS_PARAM "Params" enum debug_bits { #define DEBUG_BIT(U, l, d) DBG_##U, #include "tbl/debug_bits.h" DBG_Reserved }; static inline int COM_DO_DEBUG(const volatile uint8_t *p, enum debug_bits x) { return ((p[(unsigned)x>>3] & (0x80U >> ((unsigned)x & 7))) != 0); } enum experimental_bits { #define EXPERIMENTAL_BIT(U, l, d) EXPERIMENT_##U, #include "tbl/experimental_bits.h" EXPERIMENT_Reserved }; static inline int COM_EXPERIMENT(const volatile uint8_t *p, enum experimental_bits x) { return ((p[(unsigned)x>>3] & (0x80U >> ((unsigned)x & 7))) != 0); } enum feature_bits { #define FEATURE_BIT(U, l, d) FEATURE_##U, #include "tbl/feature_bits.h" FEATURE_Reserved }; static inline int COM_FEATURE(const volatile uint8_t *p, enum feature_bits x) { return ((p[(unsigned)x>>3] & (0x80U >> ((unsigned)x & 7))) != 0); } enum vcc_feature_bits { #define VCC_FEATURE_BIT(U, l, d) VCC_FEATURE_##U, #include "tbl/vcc_feature_bits.h" VCC_FEATURE_Reserved }; static inline int COM_VCC_FEATURE(const volatile uint8_t *p, enum vcc_feature_bits x) { return ((p[(unsigned)x>>3] & (0x80U >> ((unsigned)x & 7))) != 0); } struct poolparam { unsigned min_pool; unsigned max_pool; vtim_dur max_age; }; #define PARAM_BITMAP(name, len) typedef uint8_t name[(len + 7)>>3] PARAM_BITMAP(vsl_mask_t, 256); PARAM_BITMAP(debug_t, DBG_Reserved); PARAM_BITMAP(experimental_t, EXPERIMENT_Reserved); PARAM_BITMAP(feature_t, FEATURE_Reserved); PARAM_BITMAP(vcc_feature_t, VCC_FEATURE_Reserved); #undef PARAM_BITMAP struct params { #define ptyp_boolean unsigned #define ptyp_bytes ssize_t #define ptyp_bytes_u unsigned #define ptyp_debug debug_t #define ptyp_double double #define ptyp_duration vtim_dur #define ptyp_experimental experimental_t #define ptyp_feature feature_t #define ptyp_poolparam struct poolparam #define ptyp_thread_pool_max unsigned #define ptyp_thread_pool_min unsigned #define ptyp_timeout vtim_dur #define ptyp_uint unsigned #define ptyp_vcc_feature vcc_feature_t #define ptyp_vsl_buffer unsigned #define ptyp_vsl_mask vsl_mask_t #define ptyp_vsl_reclen unsigned #define PARAM(typ, fld, nm, ...) \ ptyp_##typ fld; #include #undef ptyp_boolean #undef ptyp_bytes #undef ptyp_bytes_u #undef ptyp_debug #undef ptyp_double #undef ptyp_duration #undef ptyp_experimental #undef ptyp_feature #undef ptyp_poolparam #undef ptyp_thread_pool_max #undef ptyp_thread_pool_min #undef ptyp_timeout #undef ptyp_uint #undef ptyp_vsl_buffer #undef ptyp_vsl_mask #undef ptyp_vsl_reclen struct vre_limits vre_limits; }; varnish-7.5.0/bin/varnishd/common/common_vext.c000066400000000000000000000101151457605730600215700ustar00rootroot00000000000000/*- * Copyright (c) 2022 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Loadable extensions */ #include "config.h" #include #include #include #include #include #include #include #include "vdef.h" #include "vas.h" #include "miniobj.h" #include "vav.h" #include "vqueue.h" #include "vrnd.h" #include "vsb.h" #include "heritage.h" struct vext { unsigned magic; #define VEXT_MAGIC 0xd5063ef6 VTAILQ_ENTRY(vext) list; char **argv; int fd; struct vsb *vsb; void *dlptr; }; static VTAILQ_HEAD(,vext) vext_list = VTAILQ_HEAD_INITIALIZER(vext_list); void vext_argument(const char *arg) { struct vext *vp; fprintf(stderr, "EEE <%s>\n", arg); ALLOC_OBJ(vp, VEXT_MAGIC); AN(vp); vp->argv = VAV_Parse(arg, NULL, ARGV_COMMA); AN(vp->argv); if (vp->argv[0] != NULL) ARGV_ERR("\tParse failure in argument: %s\n\t%s\n", arg, vp->argv[0]); VTAILQ_INSERT_TAIL(&vext_list, vp, list); fprintf(stderr, "eee <%s>\n", vp->argv[1]); vp->fd = open(vp->argv[1], O_RDONLY); if (vp->fd < 0) ARGV_ERR("\tCannot open %s\n\t%s\n", vp->argv[1], strerror(errno)); } void vext_iter(vext_iter_f *func, void *priv) { struct vext *vp; VTAILQ_FOREACH(vp, &vext_list, list) func(VSB_data(vp->vsb), priv); } void vext_copyin(struct vsb *vident) { struct vext *vp; const char *p; int i, fdo; unsigned u; char buf[BUFSIZ]; ssize_t sz, szw; VTAILQ_FOREACH(vp, &vext_list, list) { if (vp->vsb == NULL) { vp->vsb = VSB_new_auto(); AN(vp->vsb); } VSB_clear(vp->vsb); p = strrchr(vp->argv[1], '/'); if (p != NULL) p++; else p = vp->argv[0]; VSB_printf(vident, ",-E%s", p); VSB_printf(vp->vsb, "vext_cache/%s,", p); for (i = 0; i < 8; i++) { AZ(VRND_RandomCrypto(&u, sizeof u)); u %= 26; VSB_printf(vp->vsb, "%c", 'a' + (char)u); } VSB_cat(vp->vsb, ".so"); AZ(VSB_finish(vp->vsb)); fprintf(stderr, "ee2 %s\n", VSB_data(vp->vsb)); fdo = open(VSB_data(vp->vsb), O_WRONLY|O_CREAT|O_EXCL, 0755); xxxassert(fdo >= 0); AZ(lseek(vp->fd, 0, SEEK_SET)); do { sz = read(vp->fd, buf, sizeof buf); if (sz > 0) { szw = write(fdo, buf, sz); xxxassert(szw == sz); } } while (sz > 0); closefd(&fdo); closefd(&vp->fd); } } void vext_load(void) { struct vext *vp; VTAILQ_FOREACH(vp, &vext_list, list) { vp->dlptr = dlopen( VSB_data(vp->vsb), RTLD_NOW | RTLD_GLOBAL ); if (vp->dlptr == NULL) { XXXAN(vp->dlptr); } fprintf(stderr, "Loaded -E %s\n", VSB_data(vp->vsb)); } } void vext_cleanup(int do_unlink) { struct vext *vp; VTAILQ_FOREACH(vp, &vext_list, list) { if (vp->vsb != NULL && VSB_len(vp->vsb) > 0) { if (do_unlink) XXXAZ(unlink(VSB_data(vp->vsb))); VSB_clear(vp->vsb); } } } varnish-7.5.0/bin/varnishd/common/common_vsc.c000066400000000000000000000122331457605730600214000ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. 
* * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include #include #include #include #include "vdef.h" #include "vrt.h" #include "miniobj.h" #include "vas.h" #include "vmb.h" #include "vsc_priv.h" #include "vqueue.h" #include "heritage.h" #include "vsmw.h" /*--------------------------------------------------------------------*/ struct vsc_seg { unsigned magic; #define VSC_SEG_MAGIC 0x9b355991 struct vsmw *vsm; // keep master/child sep. const char *nm; VTAILQ_ENTRY(vsc_seg) list; void *seg; /* VSC segments */ struct vsc_head *head; void *ptr; struct vsc_seg *doc; /* DOC segments */ const unsigned char *jp; int refs; }; static VTAILQ_HEAD(,vsc_seg) vsc_seglist = VTAILQ_HEAD_INITIALIZER(vsc_seglist); static void v_matchproto_(vsm_lock_f) vsc_dummy_lock(void) { } vsm_lock_f *vsc_lock = vsc_dummy_lock; vsm_lock_f *vsc_unlock = vsc_dummy_lock; static const size_t vsc_overhead = PRNDUP(sizeof(struct vsc_head)); static struct vsc_seg * vrt_vsc_mksegv(struct vsmw_cluster *vc, const char *category, size_t payload, const char *fmt, va_list va) { struct vsc_seg *vsg; ALLOC_OBJ(vsg, VSC_SEG_MAGIC); AN(vsg); vsg->seg = VSMW_Allocv(heritage.proc_vsmw, vc, category, VRT_VSC_Overhead(payload), fmt, va); AN(vsg->seg); vsg->vsm = heritage.proc_vsmw; vsg->head = (void*)vsg->seg; vsg->head->body_offset = vsc_overhead; vsg->ptr = (char*)vsg->seg + vsc_overhead; return (vsg); } static struct vsc_seg * vrt_vsc_mksegf(const char *category, size_t payload, const char *fmt, ...) 
{ va_list ap; struct vsc_seg *vsg; va_start(ap, fmt); vsg = vrt_vsc_mksegv(NULL, category, payload, fmt, ap); va_end(ap); return (vsg); } size_t VRT_VSC_Overhead(size_t payload) { return (vsc_overhead + PRNDUP(payload)); } void VRT_VSC_Hide(const struct vsc_seg *vsg) { CHECK_OBJ_NOTNULL(vsg, VSC_SEG_MAGIC); assert(vsg->head->ready > 0); vsg->head->ready = 2; } void VRT_VSC_Reveal(const struct vsc_seg *vsg) { CHECK_OBJ_NOTNULL(vsg, VSC_SEG_MAGIC); assert(vsg->head->ready > 0); vsg->head->ready = 1; } void * VRT_VSC_Alloc(struct vsmw_cluster *vc, struct vsc_seg **sg, const char *nm, size_t sd, const unsigned char *jp, size_t sj, const char *fmt, va_list va) { struct vsc_seg *vsg, *dvsg; char buf[1024]; uintptr_t jjp; vsc_lock(); jjp = (uintptr_t)jp; VTAILQ_FOREACH(dvsg, &vsc_seglist, list) { if (dvsg->vsm != heritage.proc_vsmw) continue; if (dvsg->jp == NULL || dvsg->jp == jp) break; } if (dvsg == NULL || dvsg->jp == NULL) { /* Create a new documentation segment */ dvsg = vrt_vsc_mksegf(VSC_DOC_CLASS, sj, "%jx", (uintmax_t)jjp); AN(dvsg); dvsg->jp = jp; dvsg->head->doc_id = jjp; memcpy(dvsg->ptr, jp, sj); VWMB(); dvsg->head->ready = 1; VTAILQ_INSERT_HEAD(&vsc_seglist, dvsg, list); } AN(dvsg); dvsg->refs++; if (*fmt == '\0') bprintf(buf, "%s", nm); else bprintf(buf, "%s.%s", nm, fmt); AN(heritage.proc_vsmw); vsg = vrt_vsc_mksegv(vc, VSC_CLASS, sd, buf, va); AN(vsg); vsg->nm = nm; vsg->doc = dvsg; vsg->head->doc_id = jjp; VTAILQ_INSERT_TAIL(&vsc_seglist, vsg, list); VWMB(); vsg->head->ready = 1; vsc_unlock(); if (sg != NULL) *sg = vsg; return (vsg->ptr); } void VRT_VSC_Destroy(const char *nm, struct vsc_seg *vsg) { struct vsc_seg *dvsg; vsc_lock(); AN(heritage.proc_vsmw); CHECK_OBJ_NOTNULL(vsg, VSC_SEG_MAGIC); CAST_OBJ_NOTNULL(dvsg, vsg->doc, VSC_SEG_MAGIC); AZ(vsg->jp); assert(vsg->vsm == heritage.proc_vsmw); assert(vsg->nm == nm); VSMW_Free(heritage.proc_vsmw, &vsg->seg); VTAILQ_REMOVE(&vsc_seglist, vsg, list); FREE_OBJ(vsg); if (--dvsg->refs == 0) { VSMW_Free(heritage.proc_vsmw, &dvsg->seg); VTAILQ_REMOVE(&vsc_seglist, dvsg, list); FREE_OBJ(dvsg); } vsc_unlock(); } varnish-7.5.0/bin/varnishd/common/common_vsmw.c000066400000000000000000000263121457605730600216040ustar00rootroot00000000000000/*- * Copyright (c) 2010-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * VSM stuff common to manager and child. * */ #include "config.h" #include #include #include #include #include #include #include #include #include #include #include "vdef.h" #include "vas.h" #include "vsb.h" #include "miniobj.h" #include "vqueue.h" #include "vfil.h" #include "vrnd.h" #include "heritage.h" #include "vsmw.h" #ifndef MAP_HASSEMAPHORE # define MAP_HASSEMAPHORE 0 /* XXX Linux */ #endif #ifndef MAP_NOSYNC # define MAP_NOSYNC 0 /* XXX Linux */ #endif static void v_matchproto_(vsm_lock_f) vsmw_dummy_lock(void) { } static int vsmw_haslock; vsm_lock_f *vsmw_lock = vsmw_dummy_lock; vsm_lock_f *vsmw_unlock = vsmw_dummy_lock; #define vsmw_assert_lock() AN(vsmw_haslock) #define vsmw_do_lock() vsmw_do_lock_(__func__, __LINE__) #define vsmw_do_lock_(f, l) \ do { \ vsmw_lock(); \ AZ(vsmw_haslock); \ vsmw_haslock = 1; \ } while(0) #define vsmw_do_unlock() vsmw_do_unlock_(__func__, __LINE__) #define vsmw_do_unlock_(f, l) \ do { \ AN(vsmw_haslock); \ vsmw_haslock = 0; \ vsmw_unlock(); \ } while(0) /*--------------------------------------------------------------------*/ struct vsmw_cluster { unsigned magic; #define VSMW_CLUSTER_MAGIC 0x28b74c00 VTAILQ_ENTRY(vsmw_cluster) list; struct vsmwseg *cseg; char *fn; size_t len; void *ptr; size_t next; int refs; int named; }; struct vsmwseg { unsigned magic; #define VSMWSEG_MAGIC 0x7e4ccaea VTAILQ_ENTRY(vsmwseg) list; struct vsmw_cluster *cluster; char *category; size_t off; size_t len; char *id; void *ptr; }; struct vsmw { unsigned magic; #define VSMW_MAGIC 0xc2ca2cd9 int vdirfd; int mode; char *idx; VTAILQ_HEAD(, vsmw_cluster) clusters; VTAILQ_HEAD(, vsmwseg) segs; struct vsb *vsb; pid_t pid; time_t birth; uint64_t nsegs; uint64_t nsubs; }; /* Allocations in clusters never start at offset zero */ #define VSM_CLUSTER_OFFSET 16 /*--------------------------------------------------------------------*/ static void vsmw_idx_head(const struct vsmw *vsmw, int fd) { char buf[64]; bprintf(buf, "# %jd %jd\n", (intmax_t)vsmw->pid, (intmax_t)vsmw->birth); // XXX handle ENOSPC? 
#2764 assert(write(fd, buf, strlen(buf)) == strlen(buf)); } #define ASSERT_SEG_STR(x) do { \ AN(x); \ AZ(strchr(x, '\n')); \ } while (0); static void vsmw_fmt_index(const struct vsmw *vsmw, const struct vsmwseg *seg, char act) { vsmw_assert_lock(); CHECK_OBJ_NOTNULL(vsmw, VSMW_MAGIC); CHECK_OBJ_NOTNULL(seg, VSMWSEG_MAGIC); AN(seg->cluster); ASSERT_SEG_STR(seg->category); ASSERT_SEG_STR(seg->id); VSB_printf(vsmw->vsb, "%c %s %zu %zu %s %s\n", act, seg->cluster->fn, seg->off, seg->len, seg->category, seg->id); } /*--------------------------------------------------------------------*/ static void vsmw_mkent(const struct vsmw *vsmw, const char *pfx) { int fd; uint64_t rn; AN(pfx); vsmw_assert_lock(); while (1) { VSB_clear(vsmw->vsb); VSB_printf(vsmw->vsb, "_.%s", pfx); AZ(VRND_RandomCrypto(&rn, sizeof rn)); VSB_printf(vsmw->vsb, ".%016jx", (uintmax_t)rn); AZ(VSB_finish(vsmw->vsb)); fd = openat(vsmw->vdirfd, VSB_data(vsmw->vsb), O_RDONLY); if (fd < 0 && errno == ENOENT) return; if (fd >= 0) closefd(&fd); } } /*--------------------------------------------------------------------*/ static void vsmw_append_record(struct vsmw *vsmw, struct vsmwseg *seg, char act) { int fd; vsmw_assert_lock(); CHECK_OBJ_NOTNULL(vsmw, VSMW_MAGIC); CHECK_OBJ_NOTNULL(seg, VSMWSEG_MAGIC); fd = openat(vsmw->vdirfd, vsmw->idx, O_APPEND | O_WRONLY); assert(fd >= 0); VSB_clear(vsmw->vsb); vsmw_fmt_index(vsmw, seg, act); AZ(VSB_finish(vsmw->vsb)); XXXAZ(VSB_tofile(vsmw->vsb, fd)); // XXX handle ENOSPC? #2764 closefd(&fd); } /*--------------------------------------------------------------------*/ static void vsmw_addseg(struct vsmw *vsmw, struct vsmwseg *seg) { vsmw_assert_lock(); VTAILQ_INSERT_TAIL(&vsmw->segs, seg, list); vsmw_append_record(vsmw, seg, '+'); vsmw->nsegs++; } /*--------------------------------------------------------------------*/ static void vsmw_delseg(struct vsmw *vsmw, struct vsmwseg *seg) { char *t = NULL; int fd; struct vsmwseg *s2; vsmw_assert_lock(); CHECK_OBJ_NOTNULL(vsmw, VSMW_MAGIC); CHECK_OBJ_NOTNULL(seg, VSMWSEG_MAGIC); VTAILQ_REMOVE(&vsmw->segs, seg, list); vsmw->nsegs--; if (vsmw->nsubs < 10 || vsmw->nsubs * 2 < vsmw->nsegs) { vsmw_append_record(vsmw, seg, '-'); vsmw->nsubs++; } else { vsmw_mkent(vsmw, vsmw->idx); REPLACE(t, VSB_data(vsmw->vsb)); fd = openat(vsmw->vdirfd, t, O_WRONLY|O_CREAT|O_EXCL, vsmw->mode); assert(fd >= 0); vsmw_idx_head(vsmw, fd); VSB_clear(vsmw->vsb); VTAILQ_FOREACH(s2, &vsmw->segs, list) vsmw_fmt_index(vsmw, s2, '+'); AZ(VSB_finish(vsmw->vsb)); XXXAZ(VSB_tofile(vsmw->vsb, fd)); // XXX handle ENOSPC? 
#2764 closefd(&fd); AZ(renameat(vsmw->vdirfd, t, vsmw->vdirfd, vsmw->idx)); REPLACE(t, NULL); vsmw->nsubs = 0; } REPLACE(seg->category, NULL); REPLACE(seg->id, NULL); FREE_OBJ(seg); } /*--------------------------------------------------------------------*/ static struct vsmw_cluster * vsmw_newcluster(struct vsmw *vsmw, size_t len, const char *pfx) { struct vsmw_cluster *vc; int fd; size_t ps; vsmw_assert_lock(); ALLOC_OBJ(vc, VSMW_CLUSTER_MAGIC); AN(vc); vsmw_mkent(vsmw, pfx); REPLACE(vc->fn, VSB_data(vsmw->vsb)); VTAILQ_INSERT_TAIL(&vsmw->clusters, vc, list); ps = getpagesize(); len = RUP2(len, ps); vc->len = len; fd = openat(vsmw->vdirfd, vc->fn, O_RDWR | O_CREAT | O_EXCL, vsmw->mode); assert(fd >= 0); AZ(VFIL_allocate(fd, (off_t)len, 1)); vc->ptr = (void *)mmap(NULL, len, PROT_READ|PROT_WRITE, MAP_HASSEMAPHORE | MAP_NOSYNC | MAP_SHARED, fd, 0); closefd(&fd); assert(vc->ptr != MAP_FAILED); (void)mlock(vc->ptr, len); return (vc); } struct vsmw_cluster * VSMW_NewCluster(struct vsmw *vsmw, size_t len, const char *pfx) { struct vsmw_cluster *vc; struct vsmwseg *seg; vsmw_do_lock(); vc = vsmw_newcluster(vsmw, len + VSM_CLUSTER_OFFSET, pfx); AN(vc); vc->next += VSM_CLUSTER_OFFSET; ALLOC_OBJ(seg, VSMWSEG_MAGIC); AN(seg); vc->cseg = seg; seg->len = vc->len; seg->cluster = vc; REPLACE(seg->category, ""); REPLACE(seg->id, ""); vc->refs++; vc->named = 1; vsmw_addseg(vsmw, seg); vsmw_do_unlock(); return (vc); } static void vsmw_DestroyCluster_locked(struct vsmw *vsmw, struct vsmw_cluster *vc) { vsmw_assert_lock(); CHECK_OBJ_NOTNULL(vsmw, VSMW_MAGIC); CHECK_OBJ_NOTNULL(vc, VSMW_CLUSTER_MAGIC); AZ(vc->refs); AZ(munmap(vc->ptr, vc->len)); if (vc->named) vsmw_delseg(vsmw, vc->cseg); vc->cseg = 0; VTAILQ_REMOVE(&vsmw->clusters, vc, list); if (unlinkat(vsmw->vdirfd, vc->fn, 0)) assert (errno == ENOENT); REPLACE(vc->fn, NULL); FREE_OBJ(vc); } void VSMW_DestroyCluster(struct vsmw *vsmw, struct vsmw_cluster **vsmcp) { struct vsmw_cluster *vc; TAKE_OBJ_NOTNULL(vc, vsmcp, VSMW_CLUSTER_MAGIC); vsmw_do_lock(); if (--vc->refs == 0) vsmw_DestroyCluster_locked(vsmw, vc); vsmw_do_unlock(); } /*--------------------------------------------------------------------*/ void * VSMW_Allocv(struct vsmw *vsmw, struct vsmw_cluster *vc, const char *category, size_t payload, const char *fmt, va_list va) { struct vsmwseg *seg; vsmw_do_lock(); CHECK_OBJ_NOTNULL(vsmw, VSMW_MAGIC); ALLOC_OBJ(seg, VSMWSEG_MAGIC); AN(seg); REPLACE(seg->category, category); seg->len = PRNDUP(payload); VSB_clear(vsmw->vsb); VSB_vprintf(vsmw->vsb, fmt, va); AZ(VSB_finish(vsmw->vsb)); REPLACE(seg->id, VSB_data(vsmw->vsb)); if (vc == NULL) vc = vsmw_newcluster(vsmw, seg->len, category); AN(vc); vc->refs++; seg->cluster = vc; seg->off = vc->next; vc->next += seg->len; assert(vc->next <= vc->len); seg->ptr = seg->off + (char*)vc->ptr; vsmw_addseg(vsmw, seg); vsmw_do_unlock(); return (seg->ptr); } void * VSMW_Allocf(struct vsmw *vsmw, struct vsmw_cluster *vc, const char *category, size_t len, const char *fmt, ...) 
{ va_list ap; void *p; va_start(ap, fmt); p = VSMW_Allocv(vsmw, vc, category, len, fmt, ap); va_end(ap); return (p); } /*--------------------------------------------------------------------*/ void VSMW_Free(struct vsmw *vsmw, void **pp) { struct vsmwseg *seg; struct vsmw_cluster *cp; vsmw_do_lock(); CHECK_OBJ_NOTNULL(vsmw, VSMW_MAGIC); AN(pp); VTAILQ_FOREACH(seg, &vsmw->segs, list) if (seg->ptr == *pp) break; AN(seg); *pp = NULL; cp = seg->cluster; CHECK_OBJ_NOTNULL(cp, VSMW_CLUSTER_MAGIC); assert(cp->refs > 0); vsmw_delseg(vsmw, seg); if (!--cp->refs) vsmw_DestroyCluster_locked(vsmw, cp); vsmw_do_unlock(); } /*--------------------------------------------------------------------*/ struct vsmw * VSMW_New(int vdirfd, int mode, const char *idxname) { struct vsmw *vsmw; int fd; assert(vdirfd > 0); assert(mode > 0); AN(idxname); vsmw_do_lock(); ALLOC_OBJ(vsmw, VSMW_MAGIC); AN(vsmw); VTAILQ_INIT(&vsmw->segs); VTAILQ_INIT(&vsmw->clusters); vsmw->vsb = VSB_new_auto(); AN(vsmw->vsb); REPLACE(vsmw->idx, idxname); vsmw->mode = mode; vsmw->vdirfd = vdirfd; vsmw->pid = getpid(); vsmw->birth = time(NULL); if (unlinkat(vdirfd, vsmw->idx, 0)) assert (errno == ENOENT); fd = openat(vdirfd, vsmw->idx, O_APPEND | O_WRONLY | O_CREAT, vsmw->mode); assert(fd >= 0); vsmw_idx_head(vsmw, fd); closefd(&fd); vsmw_do_unlock(); return (vsmw); } void VSMW_Destroy(struct vsmw **pp) { struct vsmw *vsmw; struct vsmwseg *seg, *s2; vsmw_do_lock(); TAKE_OBJ_NOTNULL(vsmw, pp, VSMW_MAGIC); VTAILQ_FOREACH_SAFE(seg, &vsmw->segs, list, s2) vsmw_delseg(vsmw, seg); if (unlinkat(vsmw->vdirfd, vsmw->idx, 0)) assert (errno == ENOENT); REPLACE(vsmw->idx, NULL); VSB_destroy(&vsmw->vsb); closefd(&vsmw->vdirfd); FREE_OBJ(vsmw); vsmw_do_unlock(); } varnish-7.5.0/bin/varnishd/common/heritage.h000066400000000000000000000073021457605730600210330ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * This file contains the heritage passed when mgt forks cache */ struct vsmw; struct suckaddr; struct listen_sock; struct transport; struct VCLS; struct uds_perms; struct conn_heritage; struct listen_sock { unsigned magic; #define LISTEN_SOCK_MAGIC 0x999e4b57 VTAILQ_ENTRY(listen_sock) list; VTAILQ_ENTRY(listen_sock) arglist; int sock; int uds; char *endpoint; const char *name; const struct suckaddr *addr; const struct transport *transport; const struct uds_perms *perms; unsigned test_heritage; struct conn_heritage *conn_heritage; }; VTAILQ_HEAD(listen_sock_head, listen_sock); struct heritage { /* Two pipe(2)'s for CLI connection between cache and mgt. */ int cli_fd; /* File descriptor for stdout/stderr */ int std_fd; int vsm_fd; /* Sockets from which to accept connections */ struct listen_sock_head socks; /* Hash method */ const struct hash_slinger *hash; struct params *param; const char *identity; char *panic_str; ssize_t panic_str_len; struct VCLS *cls; const char *ident; long mgt_pid; struct vsmw *proc_vsmw; unsigned min_vcl_version; unsigned max_vcl_version; int argc; char * const * argv; }; extern struct heritage heritage; #define ASSERT_MGT() do { assert(getpid() == heritage.mgt_pid);} while (0) /* Really belongs in mgt.h, but storage_file chokes on both */ void MCH_Fd_Inherit(int fd, const char *what); #define ARGV_ERR(...) \ do { \ fprintf(stderr, "Error: " __VA_ARGS__); \ fprintf(stderr, "(-? gives usage)\n"); \ exit(2); \ } while (0) /* cache/cache_main.c */ void child_main(int, size_t); /* cache/cache_vcl.c */ int VCL_TestLoad(const char *); /* cache/cache_acceptor.c */ struct transport; void XPORT_Init(void); const struct transport *XPORT_Find(const char *name); /* common/common_vsc.c & common/common_vsmw.c */ typedef void vsm_lock_f(void); extern vsm_lock_f *vsc_lock; extern vsm_lock_f *vsc_unlock; extern vsm_lock_f *vsmw_lock; extern vsm_lock_f *vsmw_unlock; /* common/common_vext.c */ void vext_argument(const char *); void vext_copyin(struct vsb *); void vext_load(void); void vext_cleanup(int); typedef void vext_iter_f(const char *, void *); void vext_iter(vext_iter_f *func, void *); varnish-7.5.0/bin/varnishd/common/vsmw.h000066400000000000000000000040071457605730600202360ustar00rootroot00000000000000/*- * Copyright (c) 2017 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #ifdef VSMW_H_INCLUDED #error "Multiple includes of vsmw.h" #endif #define VSMW_H_INCLUDED struct vsmw; struct vsmw_cluster; struct vsmw_cluster *VSMW_NewCluster(struct vsmw *, size_t , const char *); void VSMW_DestroyCluster(struct vsmw *, struct vsmw_cluster **); void *VSMW_Allocv(struct vsmw *, struct vsmw_cluster *, const char *, size_t, const char *, va_list); void *VSMW_Allocf(struct vsmw *, struct vsmw_cluster *, const char *, size_t, const char *, ...); void VSMW_Free(struct vsmw *, void **); struct vsmw *VSMW_New(int, int, const char *); void VSMW_Destroy(struct vsmw **); varnish-7.5.0/bin/varnishd/flint.lnt000066400000000000000000000142751457605730600174440ustar00rootroot00000000000000// Copyright (c) 2006-2020 Varnish Software AS // SPDX-License-Identifier: BSD-2-Clause // See LICENSE file for full text of license // -w4 /////////////////////////////////////////////////////////////////////// // deprecated -esym(765, WS_Reserve) -esym(714, WS_Reserve) -esym(759, WS_Reserve) /////////////////////////////////////////////////////////////////////// -printf(3, VSL) -printf(2, http_PrintfHeader) -printf(2, VSB_printf) -esym(759, VDI_AH_*) // could be moved from header to module -esym(765, VDI_AH_*) // could be made static -esym(755, vct_*) -esym(759, vpf_*) -esym(765, vpf_*) -esym(714, vpf_*) -esym(759, H_*) -esym(765, H_*) -esym(757, VSL_*) -esym(765, VLU_*) -esym(759, VLU_*) -esym(714, VLU_*) -esym(765, VSS_*) -esym(759, VSS_*) -esym(755, VSL_*) -esym(765, VSL_*) -esym(759, VSL_*) -esym(765, CLI_*) -esym(759, CLI_*) -esym(755, CLI_*) -esym(765, http_IsHdr) -esym(759, http_IsHdr) -esym(755, http_IsHdr) // -"esym(793,significant characters in an external identifier)" // XXX: I think this is a flexelint bug: -esym(522, vbit_clr) // Stuff used by compiled VCL -esym(757, VRT_*) -esym(759, VRT_*) -esym(765, VRT_*) -esym(714, VRT_*) -esym(755, VRT_*) -esym(765, vrt_magic_string_end) -esym(759, vrt_magic_string_end) -esym(768, vrt_ref::*) -esym(768, vcf_return::name) -esym(768, VCL_conf::*) -esym(768, vdp::priv1) // FLINT Bug20090910_838 -efunc(838, VRT_purge) // Stuff in VMODs which is used through dl*(3) functions -esym(754, Vmod_*_Func::*) -esym(714, Vmod_*_Data) -esym(765, Vmod_*_Data) -esym(755, PAN_dump_once) // global macro not referenced -esym(755, PAN_dump_once_oneline) // global macro not referenced -esym(714, WS_Dump) // Info 714: sym not referenced -esym(759, WS_Dump) // Info 759: header declaration could be moved to module -esym(765, WS_Dump) // Info 765: external could be static //-sem (pthread_mutex_lock, thread_lock) -sem (pthread_mutex_trylock, thread_lock) -sem (VBE_DropRefLocked, thread_unlock) -emacro(835, HCB_BIT_NODE) // Info 835: A zero has been given as left argument to operator '<<' -emacro(835, VBC_STATE_AVAIL) // Info 835: A zero has been given as left argument to operator '<<' -emacro(835, BANS_FLAG_REQ) // Info 835: A zero has been given as left argument to operator '<<' -emacro(835, HTTPH) // Info 835: A zero has been given as left argument to operator 
'<<' -emacro(835, HTTPH_R_PASS) // Info 835: A zero has been given as left argument to operator '<<' -emacro(835, SMP_SC_LOADED) // Info 835: A zero has been given as left argument to operator '<<' -emacro(835, SMP_SEG_MUSTLOAD) // Info 835: A zero has been given as left argument to operator '<<' -emacro(835, DELAYED_EFFECT) // Info 835: A zero has been given as left argument to operator '<<' -emacro(835, HDF_FILTER) // Info 835: A zero has been given as left argument to operator '<<' -emacro(835, O_LARGEFILE) // Info 835: A zero has been given as left argument to operator '<<' -emacro(845, HTTPH) // Info 845: The left argument to operator '&&' is certain to be 0 -esym(773, PCRE2_DATE) // Expression-like macro '___' not parenthesized ////////////// // Macros defined differently in each VMOD // -esym(767, VPFX) // macro '___' was defined differently in another module // -esym(767, VARGS) // macro '___' was defined differently in another module // -esym(767, VENUM) // macro '___' was defined differently in another module ////////////// -efunc(1791, pdiff) // return last on line ////////////// -efile(451, "symbol_kind.h") // No include guard -efile(451, "config.h") // No include guard ////////////// -sem(vca_thread_acct, thread_mono) -sem(vca_epoll_thread, thread_mono) -sem(vca_kqueue_thread, thread_mono) -sem(vca_poll_thread, thread_mono) -sem(vca_ports_thread, thread_mono) -sem(exp_timer, thread_mono) -sem(wrk_herdtimer_thread, thread_mono) -sem(wrk_herder_thread, thread_mono) ////////////// // 436 = Apparent preprocessor directive in invocation of macro '___' -emacro(436, SLTM) ////////////// +libh netinet/tcp.h -elib(46) ////////////// +libh mgt_event.h -esym(522, mgt_ProcTitle) -sem(VRT_AddDirector, custodial(3)) -sem(VCP_New, custodial(3)) -sem(vsmw_addseg, custodial(2)) -sem(BAN_Free, custodial(1)) -sem(EXP_Inject, custodial(1)) -sem(HSH_Insert, custodial(3)) -sem(WS_Init, custodial(2)) -sem(http_Setup, custodial(2)) -sem(vfp_esi_end, custodial(2)) -sem(vdi_dns_cache_list_add, custodial(3)) -e717 // do ... while(1) ... -e850 // for loop index variable '___' whose type category is '___' // is modified in body of the for loop that began at '___' -esym(765, vcc_ProcAction) // could be made static -esym(759, vcc_ProcAction) // could be moved to module -esym(714, vcc_ProcAction) // not ref. 
-emacro(506, isnan, isfinite) // constant value boolean -emacro(736, isfinite) // loss of precision -emacro(774, HTTPH) // always false -emacro(527, ARGV_ERR) // unreachable -e788 // enum value not used in defaulted switch // cache.h -emacro(506, INCOMPL) // Constant value Boolean -esym(525, __builtin_frame_address) // Not defined -esym(525, __builtin_return_address) // Not defined // cache_vcl.c -esym(528, vcl_call_method) // Not referenced -esym(765, VCL_TEMP_*) -esym(759, VCL_TEMP_*) -e441 // for clause irregularity: loop variable '___' not found in 2nd for expression // cache_vpi.c -esym(552, vpi_wrk_len) // from libvarnish --emacro((835),VBH_NOIDX) --emacro((835),O_CLOEXEC) // Review all below this line /////////////////////////////////////////////// -e713 // 42 Loss of precision (___) (___ to ___) -e840 // Use of nul character in a string literal (see: vcc_if.c) -e663 // Suspicious array to pointer conversion -e778 // Constant expression evaluates to 0 in operation '___' -e736 // Loss of precision (___) (___ bits to ___ bits) -e655 // bitwise compatible enums // cache_session.c -esym(528, ses_set_attr) // Not referenced // cache_vrt_var.c -esym(528, beresp_filter_fixed) // Not referenced -esym(714, Lck_DestroyClass) // Not referenced -esym(759, HTTP_IterHdrPack) // Could be moved to module -esym(759, ObjGetU32) // Could be moved to module -esym(759, Lck_DestroyClass) // Could be moved to module -esym(765, HTTP_IterHdrPack) // Could be made static -esym(765, ObjGetU32) // Could be made static -esym(765, Lck_DestroyClass) // Could be made static -esym(769, obj_attr::OA__MAX) // Not referenced varnish-7.5.0/bin/varnishd/flint.sh000066400000000000000000000011611457605730600172470ustar00rootroot00000000000000#!/bin/sh # # Copyright (c) 2006-2021 Varnish Software AS # SPDX-License-Identifier: BSD-2-Clause # See LICENSE file for full text of license FLOPS=' -I../../lib/libvgz -I../../lib/libvsc -DNOT_IN_A_VMOD -DVARNISH_STATE_DIR="foo" -DVARNISH_VMOD_DIR="foo" -DVARNISH_VCL_DIR="foo" cache/*.c common/*.c hash/*.c http1/*.c http2/*.c mgt/*.c proxy/*.c storage/*.c waiter/*.c ../../lib/libvarnish/flint.lnt ../../lib/libvarnish/*.c ../../lib/libvcc/flint.lnt ../../lib/libvcc/*.c ../../vmod/flint.lnt ../../vmod/vcc_debug_if.c ../../vmod/vmod_debug*.c ../../vmod/VSC_debug*.c ' ../../tools/flint_skel.sh $* varnish-7.5.0/bin/varnishd/fuzzers/000077500000000000000000000000001457605730600173105ustar00rootroot00000000000000varnish-7.5.0/bin/varnishd/fuzzers/esi_parse_fuzzer.c000066400000000000000000000103651457605730600230400ustar00rootroot00000000000000/*- * Copyright (c) 2018 Varnish Software AS * All rights reserved. * * Author: Federico G. Schwindt * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * ESI parser fuzzer. */ #include "config.h" #include #include #include #include "cache/cache_varnishd.h" #include "cache/cache_vgz.h" /* enum vgz_flag */ #include "cache/cache_esi.h" #include "cache/cache_filter.h" /* struct vfp_ctx */ #include "vfil.h" int LLVMFuzzerTestOneInput(const uint8_t *, size_t); struct VSC_main *VSC_C_main; volatile struct params *cache_param; int PAN__DumpStruct(struct vsb *vsb, int block, int track, const void *ptr, const char *smagic, unsigned magic, const char *fmt, ...) { (void)vsb; (void)block; (void)track; (void)ptr; (void)smagic; (void)magic; (void)fmt; return (0); } void VSL(enum VSL_tag_e tag, vxid_t vxid, const char *fmt, ...) { (void)tag; (void)vxid; (void)fmt; } void VSLb(struct vsl_log *vsl, enum VSL_tag_e tag, const char *fmt, ...) { (void)vsl; (void)tag; (void)fmt; } void VSLb_ts(struct vsl_log *l, const char *event, vtim_real first, vtim_real *pprev, vtim_real now) { (void)l; (void)event; (void)first; (void)pprev; (void)now; } void WRK_Log(enum VSL_tag_e tag, const char *fmt, ...) { (void)tag; (void)fmt; } int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) { struct VSC_main __VSC_C_main; struct params __cache_param; struct http req[1]; struct http resp[1]; struct vfp_ctx vc[1]; struct worker wrk[1]; struct ws ws[1]; struct vep_state *vep; struct vsb *vsb; txt hd[HTTP_HDR_URL + 1]; char ws_buf[1024]; if (size < 1) return (0); AN(data); VSC_C_main = &__VSC_C_main; cache_param = &__cache_param; memset(&__cache_param, 0, sizeof(__cache_param)); #define BSET(b, no) (b)[(no) >> 3] |= (0x80 >> ((no) & 7)) if (data[0] & 0x8f) BSET(__cache_param.feature_bits, FEATURE_ESI_IGNORE_HTTPS); if (size > 1 && data[1] & 0x8f) BSET(__cache_param.feature_bits, FEATURE_ESI_DISABLE_XML_CHECK); if (size > 2 && data[2] & 0x8f) BSET(__cache_param.feature_bits, FEATURE_ESI_IGNORE_OTHER_ELEMENTS); if (size > 3 && data[3] & 0x8f) BSET(__cache_param.feature_bits, FEATURE_ESI_REMOVE_BOM); #undef BSET /* Setup ws */ WS_Init(ws, "req", ws_buf, sizeof ws_buf); /* Setup req */ INIT_OBJ(req, HTTP_MAGIC); req->hd = hd; req->hd[HTTP_HDR_URL].b = "/"; req->ws = ws; /* Setup resp */ INIT_OBJ(resp, HTTP_MAGIC); resp->ws = ws; /* Setup wrk */ INIT_OBJ(wrk, WORKER_MAGIC); /* Setup vc */ INIT_OBJ(vc, VFP_CTX_MAGIC); vc->wrk = wrk; vc->resp = resp; vep = VEP_Init(vc, req, NULL, NULL); AN(vep); VEP_Parse(vep, (const char *)data, size); vsb = VEP_Finish(vep); if (vsb != NULL) VSB_destroy(&vsb); WS_Rollback(ws, 0); return (0); } #if defined(TEST_DRIVER) int main(int argc, char **argv) { ssize_t len; char *buf; int i; for (i = 1; i < argc; i++) { len = 0; buf = VFIL_readfile(NULL, argv[i], &len); AN(buf); LLVMFuzzerTestOneInput((uint8_t *)buf, len); free(buf); } } #endif varnish-7.5.0/bin/varnishd/hash/000077500000000000000000000000001457605730600165235ustar00rootroot00000000000000varnish-7.5.0/bin/varnishd/hash/hash_classic.c000066400000000000000000000127501457605730600213200ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 
Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * A classic bucketed hash */ #include "config.h" #include #include #include "cache/cache_varnishd.h" #include "cache/cache_objhead.h" #include "common/heritage.h" #include "hash/hash_slinger.h" static struct VSC_lck *lck_hcl; /*--------------------------------------------------------------------*/ struct hcl_hd { unsigned magic; #define HCL_HEAD_MAGIC 0x0f327016 VTAILQ_HEAD(, objhead) head; struct lock mtx; }; static unsigned hcl_nhash = 16383; static struct hcl_hd *hcl_head; /*-------------------------------------------------------------------- * The ->init method allows the management process to pass arguments */ static void v_matchproto_(hash_init_f) hcl_init(int ac, char * const *av) { int i; unsigned u; if (ac == 0) return; if (ac > 1) ARGV_ERR("(-hclassic) too many arguments\n"); i = sscanf(av[0], "%u", &u); if (i <= 0 || u == 0) return; if (u > 2 && !(u & (u - 1))) { fprintf(stderr, "NOTE:\n" "\tA power of two number of hash buckets is " "marginally less efficient\n" "\twith systematic URLs. Reducing by one" " hash bucket.\n"); u--; } hcl_nhash = u; fprintf(stderr, "Classic hash: %u buckets\n", hcl_nhash); } /*-------------------------------------------------------------------- * The ->start method is called during cache process start and allows * initialization to happen before the first lookup. */ static void v_matchproto_(hash_start_f) hcl_start(void) { unsigned u; lck_hcl = Lck_CreateClass(NULL, "hcl"); hcl_head = calloc(hcl_nhash, sizeof *hcl_head); XXXAN(hcl_head); for (u = 0; u < hcl_nhash; u++) { VTAILQ_INIT(&hcl_head[u].head); Lck_New(&hcl_head[u].mtx, lck_hcl); hcl_head[u].magic = HCL_HEAD_MAGIC; } } /*-------------------------------------------------------------------- * Lookup and possibly insert element. * If nobj != NULL and the lookup does not find key, nobj is inserted. * If nobj == NULL and the lookup does not find key, NULL is returned. * A reference to the returned object is held. * We use a two-pass algorithm to handle inserts as they are quite * rare and collisions even rarer. 
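 * The bucket is picked from the first bytes of the digest (hdigest %
 * hcl_nhash), and each bucket keeps its objheads sorted by memcmp() of
 * the full digest, so the walk can stop at the first entry that
 * compares greater than the key.  The bucket mutex is dropped before
 * the per-objhead mutex is acquired.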
*/ static struct objhead * v_matchproto_(hash_lookup_f) hcl_lookup(struct worker *wrk, const void *digest, struct objhead **noh) { struct objhead *oh; struct hcl_hd *hp; unsigned u1, hdigest; int i; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AN(digest); if (noh != NULL) CHECK_OBJ_NOTNULL(*noh, OBJHEAD_MAGIC); assert(sizeof oh->digest >= sizeof hdigest); memcpy(&hdigest, digest, sizeof hdigest); u1 = hdigest % hcl_nhash; hp = &hcl_head[u1]; Lck_Lock(&hp->mtx); VTAILQ_FOREACH(oh, &hp->head, hoh_list) { CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); i = memcmp(oh->digest, digest, sizeof oh->digest); if (i < 0) continue; if (i > 0) break; oh->refcnt++; Lck_Unlock(&hp->mtx); Lck_Lock(&oh->mtx); return (oh); } if (noh == NULL) { Lck_Unlock(&hp->mtx); return (NULL); } if (oh != NULL) VTAILQ_INSERT_BEFORE(oh, *noh, hoh_list); else VTAILQ_INSERT_TAIL(&hp->head, *noh, hoh_list); oh = *noh; *noh = NULL; memcpy(oh->digest, digest, sizeof oh->digest); oh->hoh_head = hp; Lck_Unlock(&hp->mtx); Lck_Lock(&oh->mtx); return (oh); } /*-------------------------------------------------------------------- * Dereference and if no references are left, free. */ static int v_matchproto_(hash_deref_f) hcl_deref(struct worker *wrk, struct objhead *oh) { struct hcl_hd *hp; int ret; CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); Lck_AssertHeld(&oh->mtx); Lck_Unlock(&oh->mtx); CAST_OBJ_NOTNULL(hp, oh->hoh_head, HCL_HEAD_MAGIC); assert(oh->refcnt > 0); Lck_Lock(&hp->mtx); if (--oh->refcnt == 0) { VTAILQ_REMOVE(&hp->head, oh, hoh_list); ret = 0; } else ret = 1; Lck_Unlock(&hp->mtx); if (!ret) HSH_DeleteObjHead(wrk, oh); return (ret); } /*--------------------------------------------------------------------*/ const struct hash_slinger hcl_slinger = { .magic = SLINGER_MAGIC, .name = "classic", .init = hcl_init, .start = hcl_start, .lookup = hcl_lookup, .deref = hcl_deref, }; varnish-7.5.0/bin/varnishd/hash/hash_critbit.c000066400000000000000000000247621457605730600213450ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * A Crit Bit tree based hash */ // #define PHK #include "config.h" #include #include "cache/cache_varnishd.h" #include "cache/cache_objhead.h" #include "hash/hash_slinger.h" #include "vmb.h" #include "vtim.h" static struct lock hcb_mtx; /*--------------------------------------------------------------------- * Table for finding out how many bits two bytes have in common, * counting from the MSB towards the LSB. * ie: * hcb_bittbl[0x01 ^ 0x22] == 2 * hcb_bittbl[0x10 ^ 0x0b] == 3 * */ static unsigned char hcb_bittbl[256]; static unsigned char hcb_bits(unsigned char x, unsigned char y) { return (hcb_bittbl[x ^ y]); } static void hcb_build_bittbl(void) { unsigned char x; unsigned y; y = 0; for (x = 0; x < 8; x++) for (; y < (1U << x); y++) hcb_bittbl[y] = 8 - x; /* Quick asserts for sanity check */ assert(hcb_bits(0x34, 0x34) == 8); AZ(hcb_bits(0xaa, 0x55)); assert(hcb_bits(0x01, 0x22) == 2); assert(hcb_bits(0x10, 0x0b) == 3); } /*--------------------------------------------------------------------- * For space reasons we overload the two pointers with two different * kinds of of pointers. We cast them to uintptr_t's and abuse the * low two bits to tell them apart, assuming that Varnish will never * run on machines with less than 32bit alignment. * * Asserts will explode if these assumptions are not met. */ struct hcb_y { unsigned magic; #define HCB_Y_MAGIC 0x125c4bd2 unsigned short critbit; unsigned char ptr; unsigned char bitmask; volatile uintptr_t leaf[2]; VSTAILQ_ENTRY(hcb_y) list; }; #define HCB_BIT_NODE (1<<0) #define HCB_BIT_Y (1<<1) struct hcb_root { volatile uintptr_t origo; }; static struct hcb_root hcb_root; static VSTAILQ_HEAD(, hcb_y) cool_y = VSTAILQ_HEAD_INITIALIZER(cool_y); static VSTAILQ_HEAD(, hcb_y) dead_y = VSTAILQ_HEAD_INITIALIZER(dead_y); static VTAILQ_HEAD(, objhead) cool_h = VTAILQ_HEAD_INITIALIZER(cool_h); static VTAILQ_HEAD(, objhead) dead_h = VTAILQ_HEAD_INITIALIZER(dead_h); /*--------------------------------------------------------------------- * Pointer accessor functions */ static int hcb_is_node(uintptr_t u) { return (u & HCB_BIT_NODE); } static int hcb_is_y(uintptr_t u) { return (u & HCB_BIT_Y); } static uintptr_t hcb_r_node(const struct objhead *n) { AZ((uintptr_t)n & (HCB_BIT_NODE | HCB_BIT_Y)); return (HCB_BIT_NODE | (uintptr_t)n); } static struct objhead * hcb_l_node(uintptr_t u) { assert(u & HCB_BIT_NODE); AZ(u & HCB_BIT_Y); return ((struct objhead *)(u & ~HCB_BIT_NODE)); } static uintptr_t hcb_r_y(const struct hcb_y *y) { CHECK_OBJ_NOTNULL(y, HCB_Y_MAGIC); AZ((uintptr_t)y & (HCB_BIT_NODE | HCB_BIT_Y)); return (HCB_BIT_Y | (uintptr_t)y); } static struct hcb_y * hcb_l_y(uintptr_t u) { AZ(u & HCB_BIT_NODE); assert(u & HCB_BIT_Y); return ((struct hcb_y *)(u & ~HCB_BIT_Y)); } /*--------------------------------------------------------------------- * Find the "critical" bit that separates these two digests */ static unsigned hcb_crit_bit(const uint8_t *digest, const struct objhead *oh2, struct hcb_y *y) { unsigned char u, r; CHECK_OBJ_NOTNULL(y, HCB_Y_MAGIC); for (u = 0; u < DIGEST_LEN && digest[u] == oh2->digest[u]; u++) ; assert(u < DIGEST_LEN); r = hcb_bits(digest[u], oh2->digest[u]); y->ptr = u; y->bitmask = 0x80 >> r; y->critbit = u * 8 + r; return (y->critbit); } /*--------------------------------------------------------------------- * Unless we have the lock, we need to be very careful about pointer * references into the tree, we cannot trust things to be the same * in two consecutive memory accesses. 
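 * Lock-less readers (hcb_lookup's first pass, calling in with noh == NULL)
 * rely on this: nodes taken out by hcb_delete() are only parked on the
 * cool_y / cool_h lists and are not freed until the hcb-cleaner thread
 * has let them cool off for cache_param->critbit_cooloff seconds, so a
 * stale pointer seen during an unlocked traversal still points at valid
 * memory.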
*/ static struct objhead * hcb_insert(const struct worker *wrk, struct hcb_root *root, const uint8_t *digest, struct objhead **noh) { volatile uintptr_t *p; uintptr_t pp; struct hcb_y *y, *y2; struct objhead *oh2; unsigned s, s2; p = &root->origo; pp = *p; if (pp == 0) { if (noh == NULL) return (NULL); oh2 = *noh; *noh = NULL; memcpy(oh2->digest, digest, sizeof oh2->digest); *p = hcb_r_node(oh2); return (oh2); } while (hcb_is_y(pp)) { y = hcb_l_y(pp); CHECK_OBJ_NOTNULL(y, HCB_Y_MAGIC); assert(y->ptr < DIGEST_LEN); s = (digest[y->ptr] & y->bitmask) != 0; assert(s < 2); p = &y->leaf[s]; pp = *p; } if (pp == 0) { /* We raced hcb_delete and got a NULL pointer */ assert(noh == NULL); return (NULL); } assert(hcb_is_node(pp)); /* We found a node, does it match ? */ oh2 = hcb_l_node(pp); CHECK_OBJ_NOTNULL(oh2, OBJHEAD_MAGIC); if (!memcmp(oh2->digest, digest, DIGEST_LEN)) return (oh2); if (noh == NULL) return (NULL); /* Insert */ TAKE_OBJ_NOTNULL(y2, &wrk->wpriv->nhashpriv, HCB_Y_MAGIC); (void)hcb_crit_bit(digest, oh2, y2); s2 = (digest[y2->ptr] & y2->bitmask) != 0; assert(s2 < 2); oh2 = *noh; *noh = NULL; memcpy(oh2->digest, digest, sizeof oh2->digest); y2->leaf[s2] = hcb_r_node(oh2); s2 = 1-s2; p = &root->origo; AN(*p); while (hcb_is_y(*p)) { y = hcb_l_y(*p); CHECK_OBJ_NOTNULL(y, HCB_Y_MAGIC); assert(y->critbit != y2->critbit); if (y->critbit > y2->critbit) break; assert(y->ptr < DIGEST_LEN); s = (digest[y->ptr] & y->bitmask) != 0; assert(s < 2); p = &y->leaf[s]; } y2->leaf[s2] = *p; VWMB(); *p = hcb_r_y(y2); return (oh2); } /*--------------------------------------------------------------------*/ static void hcb_delete(struct hcb_root *r, const struct objhead *oh) { struct hcb_y *y; volatile uintptr_t *p; unsigned s; if (r->origo == hcb_r_node(oh)) { r->origo = 0; return; } p = &r->origo; assert(hcb_is_y(*p)); y = NULL; while (1) { assert(hcb_is_y(*p)); y = hcb_l_y(*p); assert(y->ptr < DIGEST_LEN); s = (oh->digest[y->ptr] & y->bitmask) != 0; assert(s < 2); if (y->leaf[s] == hcb_r_node(oh)) { *p = y->leaf[1 - s]; VSTAILQ_INSERT_TAIL(&cool_y, y, list); return; } p = &y->leaf[s]; } } /*--------------------------------------------------------------------*/ static void * v_matchproto_(bgthread_t) hcb_cleaner(struct worker *wrk, void *priv) { struct hcb_y *y, *y2; struct objhead *oh, *oh2; (void)priv; while (1) { VSTAILQ_FOREACH_SAFE(y, &dead_y, list, y2) { CHECK_OBJ_NOTNULL(y, HCB_Y_MAGIC); VSTAILQ_REMOVE_HEAD(&dead_y, list); FREE_OBJ(y); } VTAILQ_FOREACH_SAFE(oh, &dead_h, hoh_list, oh2) { CHECK_OBJ(oh, OBJHEAD_MAGIC); VTAILQ_REMOVE(&dead_h, oh, hoh_list); HSH_DeleteObjHead(wrk, oh); } Lck_Lock(&hcb_mtx); VSTAILQ_CONCAT(&dead_y, &cool_y); VTAILQ_CONCAT(&dead_h, &cool_h, hoh_list); Lck_Unlock(&hcb_mtx); Pool_Sumstat(wrk); VTIM_sleep(cache_param->critbit_cooloff); } NEEDLESS(return (NULL)); } /*--------------------------------------------------------------------*/ static void v_matchproto_(hash_start_f) hcb_start(void) { struct objhead *oh = NULL; pthread_t tp; (void)oh; Lck_New(&hcb_mtx, lck_hcb); WRK_BgThread(&tp, "hcb-cleaner", hcb_cleaner, NULL); memset(&hcb_root, 0, sizeof hcb_root); hcb_build_bittbl(); } static int v_matchproto_(hash_deref_f) hcb_deref(struct worker *wrk, struct objhead *oh) { int r; (void)wrk; CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); Lck_AssertHeld(&oh->mtx); assert(oh->refcnt > 0); r = --oh->refcnt; if (oh->refcnt == 0) { Lck_Lock(&hcb_mtx); hcb_delete(&hcb_root, oh); VTAILQ_INSERT_TAIL(&cool_h, oh, hoh_list); Lck_Unlock(&hcb_mtx); } Lck_Unlock(&oh->mtx); #ifdef PHK fprintf(stderr, 
"hcb_defef %d %d <%s>\n", __LINE__, r, oh->hash); #endif return (r); } static struct objhead * v_matchproto_(hash_lookup_f) hcb_lookup(struct worker *wrk, const void *digest, struct objhead **noh) { struct objhead *oh; struct hcb_y *y; unsigned u; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AN(digest); if (noh != NULL) { CHECK_OBJ_NOTNULL(*noh, OBJHEAD_MAGIC); assert((*noh)->refcnt == 1); } /* First try in read-only mode without holding a lock */ wrk->stats->hcb_nolock++; oh = hcb_insert(wrk, &hcb_root, digest, NULL); if (oh != NULL) { Lck_Lock(&oh->mtx); /* * A refcount of zero indicates that the tree changed * under us, so fall through and try with the lock held. */ u = oh->refcnt; if (u > 0) { oh->refcnt++; return (oh); } Lck_Unlock(&oh->mtx); } while (1) { /* No luck, try with lock held, so we can modify tree */ CAST_OBJ_NOTNULL(y, wrk->wpriv->nhashpriv, HCB_Y_MAGIC); Lck_Lock(&hcb_mtx); VSC_C_main->hcb_lock++; oh = hcb_insert(wrk, &hcb_root, digest, noh); Lck_Unlock(&hcb_mtx); if (oh == NULL) return (NULL); Lck_Lock(&oh->mtx); CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); if (noh != NULL && *noh == NULL) { assert(oh->refcnt > 0); VSC_C_main->hcb_insert++; return (oh); } /* * A refcount of zero indicates that the tree changed * under us, so fall through and try with the lock held. */ u = oh->refcnt; if (u > 0) { oh->refcnt++; return (oh); } Lck_Unlock(&oh->mtx); } } static void v_matchproto_(hash_prep_f) hcb_prep(struct worker *wrk) { struct hcb_y *y; if (wrk->wpriv->nhashpriv == NULL) { ALLOC_OBJ(y, HCB_Y_MAGIC); AN(y); wrk->wpriv->nhashpriv = y; } } const struct hash_slinger hcb_slinger = { .magic = SLINGER_MAGIC, .name = "critbit", .start = hcb_start, .lookup = hcb_lookup, .prep = hcb_prep, .deref = hcb_deref, }; varnish-7.5.0/bin/varnishd/hash/hash_simple_list.c000066400000000000000000000077471457605730600222350ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * This is the reference hash(/lookup) implementation */ #include "config.h" #include "cache/cache_varnishd.h" #include "cache/cache_objhead.h" #include "hash/hash_slinger.h" static struct VSC_lck *lck_hsl; /*--------------------------------------------------------------------*/ static VTAILQ_HEAD(, objhead) hsl_head = VTAILQ_HEAD_INITIALIZER(hsl_head); static struct lock hsl_mtx; /*-------------------------------------------------------------------- * The ->init method is called during process start and allows * initialization to happen before the first lookup. */ static void v_matchproto_(hash_start_f) hsl_start(void) { lck_hsl = Lck_CreateClass(NULL, "hsl"); Lck_New(&hsl_mtx, lck_hsl); } /*-------------------------------------------------------------------- * Lookup and possibly insert element. * If nobj != NULL and the lookup does not find key, nobj is inserted. * If nobj == NULL and the lookup does not find key, NULL is returned. * A reference to the returned object is held. */ static struct objhead * v_matchproto_(hash_lookup_f) hsl_lookup(struct worker *wrk, const void *digest, struct objhead **noh) { struct objhead *oh; int i; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AN(digest); if (noh != NULL) CHECK_OBJ_NOTNULL(*noh, OBJHEAD_MAGIC); Lck_Lock(&hsl_mtx); VTAILQ_FOREACH(oh, &hsl_head, hoh_list) { i = memcmp(oh->digest, digest, sizeof oh->digest); if (i < 0) continue; if (i > 0) break; oh->refcnt++; Lck_Unlock(&hsl_mtx); Lck_Lock(&oh->mtx); return (oh); } if (noh == NULL) return (NULL); if (oh != NULL) VTAILQ_INSERT_BEFORE(oh, *noh, hoh_list); else VTAILQ_INSERT_TAIL(&hsl_head, *noh, hoh_list); oh = *noh; *noh = NULL; memcpy(oh->digest, digest, sizeof oh->digest); Lck_Unlock(&hsl_mtx); Lck_Lock(&oh->mtx); return (oh); } /*-------------------------------------------------------------------- * Dereference and if no references are left, free. */ static int v_matchproto_(hash_deref_f) hsl_deref(struct worker *wrk, struct objhead *oh) { int ret; CHECK_OBJ_NOTNULL(oh, OBJHEAD_MAGIC); Lck_AssertHeld(&oh->mtx); Lck_Unlock(&oh->mtx); Lck_Lock(&hsl_mtx); if (--oh->refcnt == 0) { VTAILQ_REMOVE(&hsl_head, oh, hoh_list); ret = 0; } else ret = 1; Lck_Unlock(&hsl_mtx); if (!ret) HSH_DeleteObjHead(wrk, oh); return (ret); } /*--------------------------------------------------------------------*/ const struct hash_slinger hsl_slinger = { .magic = SLINGER_MAGIC, .name = "simple", .start = hsl_start, .lookup = hsl_lookup, .deref = hsl_deref, }; varnish-7.5.0/bin/varnishd/hash/hash_slinger.h000066400000000000000000000044251457605730600213470ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ struct worker; struct objhead; typedef void hash_init_f(int ac, char * const *av); typedef void hash_start_f(void); typedef void hash_prep_f(struct worker *); typedef struct objhead *hash_lookup_f(struct worker *, const void *digest, struct objhead **); typedef int hash_deref_f(struct worker *, struct objhead *); struct hash_slinger { unsigned magic; #define SLINGER_MAGIC 0x1b720cba const char *name; hash_init_f *init; hash_start_f *start; hash_prep_f *prep; hash_lookup_f *lookup; hash_deref_f *deref; }; /* mgt_hash.c */ void HSH_config(const char *); /* cache_hash.c */ void HSH_Init(const struct hash_slinger *); void HSH_Cleanup(const struct worker *); extern const struct hash_slinger hsl_slinger; extern const struct hash_slinger hcl_slinger; extern const struct hash_slinger hcb_slinger; varnish-7.5.0/bin/varnishd/hash/mgt_hash.c000066400000000000000000000051031457605730600204600ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* */ #include "config.h" #include #include #include #include "mgt/mgt.h" #include "common/heritage.h" #include "hash/hash_slinger.h" #include "vav.h" static const struct choice hsh_choice[] = { { "classic", &hcl_slinger }, { "simple", &hsl_slinger }, { "simple_list", &hsl_slinger }, /* backwards compat */ { "critbit", &hcb_slinger }, { NULL, NULL } }; /*--------------------------------------------------------------------*/ void HSH_config(const char *h_arg) { char **av; int ac; const struct hash_slinger *hp; ASSERT_MGT(); av = VAV_Parse(h_arg, NULL, ARGV_COMMA); AN(av); if (av[0] != NULL) ARGV_ERR("%s\n", av[0]); if (av[1] == NULL) ARGV_ERR("-h argument is empty\n"); for (ac = 0; av[ac + 2] != NULL; ac++) continue; hp = MGT_Pick(hsh_choice, av[1], "hash"); CHECK_OBJ_NOTNULL(hp, SLINGER_MAGIC); VSB_printf(vident, ",-h%s", av[1]); heritage.hash = hp; if (hp->init != NULL) hp->init(ac, av + 2); else if (ac > 0) ARGV_ERR("Hash method \"%s\" takes no arguments\n", hp->name); /* NB: Don't free av, the hasher is allowed to keep it. */ } varnish-7.5.0/bin/varnishd/hpack/000077500000000000000000000000001457605730600166665ustar00rootroot00000000000000varnish-7.5.0/bin/varnishd/hpack/vhp.h000066400000000000000000000067361457605730600176500ustar00rootroot00000000000000/*- * Copyright (c) 2016 Varnish Software AS * All rights reserved. * * Author: Martin Blix Grydeland * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* */ #include /* VHT - Varnish HPACK Table */ #define VHT_ENTRY_SIZE 32U struct vht_entry { unsigned magic; #define VHT_ENTRY_MAGIC 0xc06dd892 unsigned offset; unsigned namelen; unsigned valuelen; }; struct vht_table { unsigned magic; #define VHT_TABLE_MAGIC 0x6bbdc683 unsigned n; unsigned size; unsigned maxsize; /* n * 32 + size <= maxsize */ unsigned protomax; unsigned bufsize; char *buf; }; void VHT_NewEntry(struct vht_table *); int VHT_NewEntry_Indexed(struct vht_table *, unsigned); void VHT_AppendName(struct vht_table *, const char *, ssize_t); void VHT_AppendValue(struct vht_table *, const char *, ssize_t); int VHT_SetMaxTableSize(struct vht_table *, size_t); int VHT_SetProtoMax(struct vht_table *, size_t); const char *VHT_LookupName(const struct vht_table *, unsigned, size_t *); const char *VHT_LookupValue(const struct vht_table *, unsigned, size_t *); int VHT_Init(struct vht_table *, size_t); void VHT_Fini(struct vht_table *); /* VHD - Varnish HPACK Decoder */ enum vhd_ret_e { #define VHD_RET(NAME, VAL, DESC) \ VHD_##NAME = VAL, #include "tbl/vhd_return.h" }; struct vhd_int { uint8_t magic; #define VHD_INT_MAGIC 0x05 uint8_t pfx; uint8_t m; unsigned v; }; struct vhd_raw { uint8_t magic; #define VHD_RAW_MAGIC 0xa0 unsigned l; }; struct vhd_huffman { uint8_t magic; #define VHD_HUFFMAN_MAGIC 0x56 uint8_t blen; uint16_t bits; uint16_t pos; unsigned len; }; struct vhd_lookup { uint8_t magic; #define VHD_LOOKUP_MAGIC 0x65 unsigned l; }; struct vhd_decode { unsigned magic; #define VHD_DECODE_MAGIC 0x9cbc72b2 unsigned index; uint16_t state; int8_t error; uint8_t first; union { struct vhd_int integer[1]; struct vhd_lookup lookup[1]; struct vhd_raw raw[1]; struct vhd_huffman huffman[1]; }; }; void VHD_Init(struct vhd_decode *); enum vhd_ret_e VHD_Decode(struct vhd_decode *, struct vht_table *, const uint8_t *in, size_t inlen, size_t *p_inused, char *out, size_t outlen, size_t *p_outused); const char *VHD_Error(enum vhd_ret_e); varnish-7.5.0/bin/varnishd/hpack/vhp_decode.c000066400000000000000000000643701457605730600211440ustar00rootroot00000000000000/*- * Copyright (c) 2016 Varnish Software AS * All rights reserved. * * Author: Martin Blix Grydeland * Author: Dridi Boukelmoune * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* */ #include "config.h" #include #include #include #include #include "vdef.h" #include "vas.h" #include "miniobj.h" #include "hpack/vhp.h" #include "vhp_hufdec.h" struct vhd_ctx { struct vhd_decode *d; struct vht_table *tbl; const uint8_t *in; const uint8_t *in_e; char *out; char *out_e; }; typedef enum vhd_ret_e vhd_state_f(struct vhd_ctx *ctx, unsigned first); /* Function flags */ #define VHD_INCREMENTAL (1U << 0) /* Functions */ enum vhd_func_e { #define VHD_FSM_FUNC(NAME, func) \ VHD_F_##NAME, #include "tbl/vhd_fsm_funcs.h" VHD_F__MAX, }; #define VHD_FSM_FUNC(NAME, func) \ static vhd_state_f func; #include "tbl/vhd_fsm_funcs.h" /* States */ enum vhd_state_e { VHD_S__MIN = -1, #define VHD_FSM(STATE, FUNC, arg1, arg2) \ VHD_S_##STATE, #include "tbl/vhd_fsm.h" VHD_S__MAX, }; static const struct vhd_state { const char *name; enum vhd_func_e func; unsigned arg1; unsigned arg2; } vhd_states[VHD_S__MAX] = { #define VHD_FSM(STATE, FUNC, arg1, arg2) \ [VHD_S_##STATE] = { #STATE, VHD_F_##FUNC, arg1, arg2 }, #include "tbl/vhd_fsm.h" }; /* Utility functions */ static void vhd_set_state(struct vhd_decode *d, enum vhd_state_e state) { AN(d); assert(state > VHD_S__MIN && state < VHD_S__MAX); d->state = state; d->first = 1; } static void vhd_next_state(struct vhd_decode *d) { AN(d); assert(d->state + 1 < VHD_S__MAX); vhd_set_state(d, d->state + 1); } /* State functions */ static enum vhd_ret_e v_matchproto_(vhd_state_f) vhd_skip(struct vhd_ctx *ctx, unsigned first) { AN(ctx); AN(first); vhd_next_state(ctx->d); return (VHD_AGAIN); } static enum vhd_ret_e v_matchproto_(vhd_state_f) vhd_goto(struct vhd_ctx *ctx, unsigned first) { const struct vhd_state *s; AN(ctx); AN(first); assert(ctx->d->state < VHD_S__MAX); s = &vhd_states[ctx->d->state]; assert(s->arg1 < VHD_S__MAX); vhd_set_state(ctx->d, s->arg1); return (VHD_AGAIN); } static enum vhd_ret_e v_matchproto_(vhd_state_f) vhd_idle(struct vhd_ctx *ctx, unsigned first) { uint8_t c; AN(ctx); (void)first; while (ctx->in < ctx->in_e) { c = *ctx->in; if ((c & 0x80) == 0x80) vhd_set_state(ctx->d, VHD_S_HP61_START); else if ((c & 0xc0) == 0x40) vhd_set_state(ctx->d, VHD_S_HP621_START); else if ((c & 0xf0) == 0x00) vhd_set_state(ctx->d, VHD_S_HP622_START); else if ((c & 0xf0) == 0x10) vhd_set_state(ctx->d, VHD_S_HP623_START); else if ((c & 0xe0) == 0x20) vhd_set_state(ctx->d, VHD_S_HP63_START); else return (VHD_ERR_ARG); return (VHD_AGAIN); } return (VHD_OK); } static enum vhd_ret_e v_matchproto_(vhd_state_f) vhd_integer(struct vhd_ctx *ctx, unsigned first) { const struct vhd_state *s; struct vhd_int *i; uint8_t c; unsigned mask; assert(UINT_MAX >= UINT32_MAX); AN(ctx); assert(ctx->d->state < VHD_S__MAX); s = &vhd_states[ctx->d->state]; i = ctx->d->integer; if (first) { INIT_OBJ(i, VHD_INT_MAGIC); i->pfx = s->arg1; assert(i->pfx >= 4 && i->pfx <= 7); } CHECK_OBJ_NOTNULL(i, VHD_INT_MAGIC); while (ctx->in < ctx->in_e) { c = *ctx->in; ctx->in++; if (i->pfx) { mask = (1U << i->pfx) - 1; i->pfx = 0; i->v = c & mask; if (i->v < mask) { vhd_next_state(ctx->d); return (VHD_AGAIN); } } else { if ((i->m == 28 && (c & 0x78)) || i->m > 28) return (VHD_ERR_INT); i->v += (c & 0x7f) * ((uint32_t)1 << i->m); i->m += 7; if (!(c & 0x80)) { vhd_next_state(ctx->d); return (VHD_AGAIN); } } } return (VHD_MORE); } static enum vhd_ret_e v_matchproto_(vhd_state_f) vhd_set_max(struct vhd_ctx *ctx, unsigned first) { AN(ctx); AN(first); CHECK_OBJ_NOTNULL(ctx->d->integer, VHD_INT_MAGIC); if (ctx->tbl == NULL) return (VHD_ERR_UPD); if (VHT_SetMaxTableSize(ctx->tbl, ctx->d->integer->v)) 
return (VHD_ERR_UPD); vhd_next_state(ctx->d); return (VHD_AGAIN); } static enum vhd_ret_e v_matchproto_(vhd_state_f) vhd_set_idx(struct vhd_ctx *ctx, unsigned first) { AN(ctx); AN(first); CHECK_OBJ_NOTNULL(ctx->d->integer, VHD_INT_MAGIC); ctx->d->index = ctx->d->integer->v; vhd_next_state(ctx->d); return (VHD_AGAIN); } static enum vhd_ret_e v_matchproto_(vhd_state_f) vhd_lookup(struct vhd_ctx *ctx, unsigned first) { const struct vhd_state *s; struct vhd_lookup *lu; const char *p; size_t l; AN(ctx); assert(ctx->d->state < VHD_S__MAX); s = &vhd_states[ctx->d->state]; lu = ctx->d->lookup; if (first) INIT_OBJ(lu, VHD_LOOKUP_MAGIC); CHECK_OBJ_NOTNULL(lu, VHD_LOOKUP_MAGIC); switch (s->arg1) { case VHD_NAME: case VHD_NAME_SEC: p = VHT_LookupName(ctx->tbl, ctx->d->index, &l); break; case VHD_VALUE: case VHD_VALUE_SEC: p = VHT_LookupValue(ctx->tbl, ctx->d->index, &l); break; default: WRONG("vhd_lookup wrong arg1"); break; } if (first && p == NULL) return (VHD_ERR_IDX); AN(p); assert(l <= UINT_MAX); if (first) lu->l = l; assert(lu->l <= l); p += l - lu->l; l = vmin_t(size_t, lu->l, ctx->out_e - ctx->out); memcpy(ctx->out, p, l); ctx->out += l; lu->l -= l; if (lu->l == 0) { vhd_next_state(ctx->d); return (s->arg1); } assert(ctx->out == ctx->out_e); return (VHD_BUF); } static enum vhd_ret_e v_matchproto_(vhd_state_f) vhd_new(struct vhd_ctx *ctx, unsigned first) { AN(ctx); AN(first); if (ctx->tbl != NULL) VHT_NewEntry(ctx->tbl); vhd_next_state(ctx->d); return (VHD_AGAIN); } static enum vhd_ret_e v_matchproto_(vhd_state_f) vhd_new_idx(struct vhd_ctx *ctx, unsigned first) { AN(ctx); AN(first); if (ctx->tbl != NULL) { if (VHT_NewEntry_Indexed(ctx->tbl, ctx->d->index)) return (VHD_ERR_IDX); } vhd_next_state(ctx->d); return (VHD_AGAIN); } static enum vhd_ret_e v_matchproto_(vhd_state_f) vhd_branch_zidx(struct vhd_ctx *ctx, unsigned first) { const struct vhd_state *s; AN(ctx); (void)first; assert(ctx->d->state < VHD_S__MAX); s = &vhd_states[ctx->d->state]; assert(s->arg1 < VHD_S__MAX); if (ctx->d->index == 0) vhd_set_state(ctx->d, s->arg1); else vhd_next_state(ctx->d); return (VHD_AGAIN); } static enum vhd_ret_e v_matchproto_(vhd_state_f) vhd_branch_bit0(struct vhd_ctx *ctx, unsigned first) { const struct vhd_state *s; AN(ctx); (void)first; assert(ctx->d->state < VHD_S__MAX); s = &vhd_states[ctx->d->state]; assert(s->arg1 < VHD_S__MAX); if (ctx->in == ctx->in_e) return (VHD_MORE); if (*ctx->in & 0x80) vhd_set_state(ctx->d, s->arg1); else vhd_next_state(ctx->d); return (VHD_AGAIN); } static enum vhd_ret_e v_matchproto_(vhd_state_f) vhd_raw(struct vhd_ctx *ctx, unsigned first) { const struct vhd_state *s; struct vhd_raw *raw; size_t l2; AN(ctx); assert(ctx->d->state < VHD_S__MAX); s = &vhd_states[ctx->d->state]; raw = ctx->d->raw; if (first) { CHECK_OBJ_NOTNULL(ctx->d->integer, VHD_INT_MAGIC); l2 = ctx->d->integer->v; INIT_OBJ(raw, VHD_RAW_MAGIC); raw->l = l2; } CHECK_OBJ_NOTNULL(raw, VHD_RAW_MAGIC); while (raw->l > 0) { l2 = raw->l; if (l2 > (ctx->in_e - ctx->in)) l2 = ctx->in_e - ctx->in; if (l2 == 0) return (VHD_MORE); if (l2 > (ctx->out_e - ctx->out)) l2 = ctx->out_e - ctx->out; if (l2 == 0) return (VHD_BUF); memcpy(ctx->out, ctx->in, l2); ctx->in += l2; if (ctx->tbl != NULL && (s->arg2 & VHD_INCREMENTAL)) { switch (s->arg1) { case VHD_NAME: VHT_AppendName(ctx->tbl, ctx->out, l2); break; case VHD_VALUE: VHT_AppendValue(ctx->tbl, ctx->out, l2); break; default: WRONG("vhd_raw wrong arg1"); break; } } ctx->out += l2; raw->l -= l2; } vhd_next_state(ctx->d); return (s->arg1); } static enum vhd_ret_e 
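/*
 * Decoding sketch (descriptive only; the authoritative logic is the function
 * below): vhd_huffman() refills the bit accumulator one input byte at a time
 * and uses the accumulated bits to index the generated hufdec[] table.  A slot
 * with .jump set points, relatively, to a sub-table for codes longer than the
 * current mask; otherwise .chr is emitted and the walk restarts at the root.
 * Once the declared length is consumed, the string is complete when the 0-7
 * leftover bits at the root are all ones (the EOS padding of RFC 7541); a
 * zero-length slot, or a code running past the available bits, yields
 * VHD_ERR_HUF.
 */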
v_matchproto_(vhd_state_f) vhd_huffman(struct vhd_ctx *ctx, unsigned first) { const struct vhd_state *s; struct vhd_huffman *huf; enum vhd_ret_e r; unsigned u, l; AN(ctx); assert(ctx->d->state < VHD_S__MAX); s = &vhd_states[ctx->d->state]; huf = ctx->d->huffman; if (first) { CHECK_OBJ_NOTNULL(ctx->d->integer, VHD_INT_MAGIC); l = ctx->d->integer->v; INIT_OBJ(huf, VHD_HUFFMAN_MAGIC); huf->len = l; } CHECK_OBJ_NOTNULL(huf, VHD_HUFFMAN_MAGIC); r = VHD_OK; l = 0; while (1) { assert(huf->pos < HUFDEC_LEN); assert(hufdec[huf->pos].mask > 0); assert(hufdec[huf->pos].mask <= 8); if (huf->len > 0 && huf->blen < hufdec[huf->pos].mask) { /* Refill from input */ if (ctx->in == ctx->in_e) { r = VHD_MORE; break; } huf->bits = (huf->bits << 8) | *ctx->in; huf->blen += 8; huf->len--; ctx->in++; } if (huf->len == 0 && huf->pos == 0 && huf->blen <= 7 && huf->bits == (1U << huf->blen) - 1U) { /* End of stream */ r = s->arg1; vhd_next_state(ctx->d); break; } if (ctx->out + l == ctx->out_e) { r = VHD_BUF; break; } if (huf->blen >= hufdec[huf->pos].mask) u = huf->bits >> (huf->blen - hufdec[huf->pos].mask); else u = huf->bits << (hufdec[huf->pos].mask - huf->blen); huf->pos += u; assert(huf->pos < HUFDEC_LEN); if (hufdec[huf->pos].len == 0 || hufdec[huf->pos].len > huf->blen) { /* Invalid or incomplete code */ r = VHD_ERR_HUF; break; } huf->blen -= hufdec[huf->pos].len; huf->bits &= (1U << huf->blen) - 1U; if (hufdec[huf->pos].jump) { huf->pos += hufdec[huf->pos].jump; assert(huf->pos < HUFDEC_LEN); } else { ctx->out[l++] = hufdec[huf->pos].chr; huf->pos = 0; } } if (l > 0 && ctx->tbl != NULL && (s->arg2 & VHD_INCREMENTAL)) { switch (s->arg1) { case VHD_NAME: VHT_AppendName(ctx->tbl, ctx->out, l); break; case VHD_VALUE: VHT_AppendValue(ctx->tbl, ctx->out, l); break; default: WRONG("vhd_raw wrong arg1"); break; } } ctx->out += l; assert(r != VHD_OK); return (r); } /* Public interface */ const char * VHD_Error(enum vhd_ret_e r) { switch (r) { #define VHD_RET(NAME, VAL, DESC) \ case VHD_##NAME: \ return ("VHD_" #NAME " (" DESC ")"); #include "tbl/vhd_return.h" default: return ("VHD_UNKNOWN"); } } enum vhd_ret_e VHD_Decode(struct vhd_decode *d, struct vht_table *tbl, const uint8_t *in, size_t inlen, size_t *p_inused, char *out, size_t outlen, size_t *p_outused) { const struct vhd_state *s; struct vhd_ctx ctx[1]; enum vhd_ret_e ret; unsigned first; CHECK_OBJ_NOTNULL(d, VHD_DECODE_MAGIC); CHECK_OBJ_ORNULL(tbl, VHT_TABLE_MAGIC); AN(in); AN(p_inused); AN(out); AN(p_outused); if (d->error < 0) return (d->error); assert(*p_inused <= inlen); assert(*p_outused <= outlen); ctx->d = d; ctx->tbl = tbl; ctx->in = in + *p_inused; ctx->in_e = in + inlen; ctx->out = out + *p_outused; ctx->out_e = out + outlen; do { first = d->first; d->first = 0; assert(d->state < VHD_S__MAX); s = &vhd_states[d->state]; switch (s->func) { #define VHD_FSM_FUNC(NAME, func) \ case VHD_F_##NAME: \ ret = func(ctx, first); \ break; #include "tbl/vhd_fsm_funcs.h" default: WRONG("Undefined vhd function"); break; } } while (ret == VHD_AGAIN); if (ret < 0) d->error = ret; assert(in + *p_inused <= ctx->in); *p_inused += ctx->in - (in + *p_inused); assert(out + *p_outused <= ctx->out); *p_outused += ctx->out - (out + *p_outused); return (ret); } void VHD_Init(struct vhd_decode *d) { AN(d); assert(VHD_S__MAX <= UINT16_MAX); assert(HUFDEC_LEN <= UINT16_MAX); INIT_OBJ(d, VHD_DECODE_MAGIC); d->state = VHD_S_IDLE; d->first = 1; } /* Test driver */ #ifdef DECODE_TEST_DRIVER #include #include static int verbose = 0; static size_t hexbuf(uint8_t *buf, size_t buflen, 
const char *h) { size_t l; uint8_t u; AN(h); AN(buf); l = 0; for (; *h != '\0'; h++) { if (l == buflen * 2) WRONG("Too small buffer"); if (isspace(*h)) continue; if (*h >= '0' && *h <= '9') u = *h - '0'; else if (*h >= 'a' && *h <= 'f') u = 0xa + *h - 'a'; else if (*h >= 'A' && *h <= 'F') u = 0xa + *h - 'A'; else WRONG("Bad input character"); assert(u <= 0xf); if (l % 2 == 0) { u <<= 4; buf[l / 2] = u; } else { buf[l / 2] |= u; } l++; } AZ(l % 2); return (l / 2); } static int match(const char *b, size_t l, ...) { va_list ap; const char *e; const char *m; int r = 0; va_start(ap, l); e = b + l; while (1) { m = va_arg(ap, const char *); if (m == NULL) break; l = strlen(m); if (e - b <= l || b[l] != '\0' || strncmp(b, m, l)) { printf("%.*s != %s\n", (int)(e - b), b, m); r = -1; break; } else if (verbose) { printf("%s == %s\n", b, m); } b += l + 1; } va_end(ap); return (r); } #define M_1IN (1U << 0) #define M_1OUT (1U << 1) static enum vhd_ret_e decode(struct vhd_decode *d, struct vht_table *tbl, uint8_t *in, size_t in_l, char *out, size_t out_l, unsigned m) { size_t in_u, out_u; enum vhd_ret_e r; CHECK_OBJ_NOTNULL(d, VHD_DECODE_MAGIC); AN(in); AN(out); in_u = 0; out_u = 0; while (1) { r = VHD_Decode(d, tbl, in, (m & M_1IN ? (in_l > in_u ? in_u + 1 : in_u) : in_l), &in_u, out, (m & M_1OUT ? (out_l > out_u ? out_u + 1 : out_u) : out_l), &out_u); assert(in_u <= in_l); assert(out_u <= out_l); if (r < VHD_OK) return (r); switch (r) { case VHD_OK: return (r); case VHD_MORE: if (in_u == in_l) return (r); break; case VHD_BUF: if (out_u == out_l) return (r); break; case VHD_NAME: case VHD_NAME_SEC: assert(out_l - out_u > 0); out[out_u++] = '\0'; if (verbose) printf("Name%s: '%s'\n", (r == VHD_NAME_SEC ? " (sec)" : ""), out); out += out_u; out_l -= out_u; out_u = 0; break; case VHD_VALUE: case VHD_VALUE_SEC: assert(out_l - out_u > 0); out[out_u++] = '\0'; if (verbose) printf("Value%s: '%s'\n", (r == VHD_VALUE_SEC ? " (sec)" : ""), out); out += out_u; out_l -= out_u; out_u = 0; break; default: WRONG("Wrong return code"); break; } } NEEDLESS(return (VHD_OK)); } #define CHECK_RET(r, e) \ do { \ if (verbose || r != e) { \ printf("%s %s %s\n", \ VHD_Error(r), \ (r == e ? "==" : "!="), \ VHD_Error(e)); \ } \ assert(r == e); \ } while (0) #define CHECK_INT(d, u) \ do { \ CHECK_OBJ_NOTNULL(d->integer, VHD_INT_MAGIC); \ if (verbose || d->integer->v != u) { \ printf("%u %s %u\n", d->integer->v, \ (d->integer->v == u ? 
"==" : "!="), \ u); \ } \ assert(d->integer->v == u); \ } while (0) static void test_integer(unsigned mode) { struct vhd_decode d[1]; uint8_t in[128]; size_t in_l; char out[128]; enum vhd_ret_e r; /* Test single byte decoding */ VHD_Init(d); vhd_set_state(d, VHD_S_TEST_INT5); in_l = hexbuf(in, sizeof in, "1e"); r = decode(d, NULL, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); CHECK_INT(d, 30); /* Test multibyte decoding */ VHD_Init(d); vhd_set_state(d, VHD_S_TEST_INT5); in_l = hexbuf(in, sizeof in, "ff 9a 0a"); r = decode(d, NULL, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); CHECK_INT(d, 1337); /* Test max size we allow */ VHD_Init(d); vhd_set_state(d, VHD_S_TEST_INT5); in_l = hexbuf(in, sizeof in, "1f ff ff ff ff 07"); r = decode(d, NULL, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); CHECK_INT(d, 0x8000001E); /* Test overflow */ VHD_Init(d); vhd_set_state(d, VHD_S_TEST_INT5); in_l = hexbuf(in, sizeof in, "1f ff ff ff ff 08"); r = decode(d, NULL, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_ERR_INT); } static void test_raw(unsigned mode) { struct vhd_decode d[1]; uint8_t in[128]; size_t in_l; char out[128]; enum vhd_ret_e r; /* Test raw encoding */ VHD_Init(d); vhd_set_state(d, VHD_S_TEST_LITERAL); in_l = hexbuf(in, sizeof in, "0a63 7573 746f 6d2d 6b65 790d 6375 7374 6f6d 2d68 6561 6465 72"); r = decode(d, NULL, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, "custom-key", "custom-header", NULL)); /* Test too short input */ VHD_Init(d); vhd_set_state(d, VHD_S_TEST_LITERAL); in_l = hexbuf(in, sizeof in, "02"); r = decode(d, NULL, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_MORE); } static void test_huffman(unsigned mode) { struct vhd_decode d[1]; uint8_t in[256]; size_t in_l; char out[256]; enum vhd_ret_e r; /* Decode a huffman encoded value */ VHD_Init(d); in_l = hexbuf(in, sizeof in, "0141 8cf1 e3c2 e5f2 3a6b a0ab 90f4 ff"); vhd_set_state(d, VHD_S_TEST_LITERAL); r = decode(d, NULL, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, "A", "www.example.com", NULL)); /* Decode an incomplete input buffer */ VHD_Init(d); in_l = hexbuf(in, sizeof in, "0141 81"); vhd_set_state(d, VHD_S_TEST_LITERAL); r = decode(d, NULL, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_MORE); /* Decode an incomplete huffman code */ VHD_Init(d); in_l = hexbuf(in, sizeof in, "0141 81 fe"); vhd_set_state(d, VHD_S_TEST_LITERAL); r = decode(d, NULL, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_ERR_HUF); /* Decode an invalid huffman code */ VHD_Init(d); in_l = hexbuf(in, sizeof in, "0141 84 ff ff ff ff"); vhd_set_state(d, VHD_S_TEST_LITERAL); r = decode(d, NULL, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_ERR_HUF); } static void test_c2(unsigned mode) { struct vhd_decode d[1]; uint8_t in[256]; size_t in_l; char out[256]; enum vhd_ret_e r; /* See RFC 7541 Appendix C.2 */ VHD_Init(d); /* C.2.1 */ in_l = hexbuf(in, sizeof in, "400a 6375 7374 6f6d 2d6b 6579 0d63 7573" "746f 6d2d 6865 6164 6572"); r = decode(d, NULL, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, "custom-key", "custom-header", NULL)); /* C.2.2 */ in_l = hexbuf(in, sizeof in, "040c 2f73 616d 706c 652f 7061 7468"); r = decode(d, NULL, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, ":path", "/sample/path", NULL)); /* C.2.3 */ in_l = hexbuf(in, sizeof in, "1008 7061 7373 776f 7264 0673 6563 7265" "74"); r = decode(d, NULL, in, in_l, out, sizeof out, mode); CHECK_RET(r, 
VHD_OK); AZ(match(out, sizeof out, "password", "secret", NULL)); /* C.2.4 */ in_l = hexbuf(in, sizeof in, "82"); r = decode(d, NULL, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, ":method", "GET", NULL)); } static void test_c3(unsigned mode) { struct vht_table t[1]; struct vhd_decode d[1]; uint8_t in[256]; size_t in_l; char out[256]; enum vhd_ret_e r; /* See RFC 7541 Appendix C.3 */ AZ(VHT_Init(t, 4096)); VHD_Init(d); /* C.3.1 */ in_l = hexbuf(in, sizeof in, "8286 8441 0f77 7777 2e65 7861 6d70 6c65" "2e63 6f6d"); r = decode(d, t, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, ":method", "GET", ":scheme", "http", ":path", "/", ":authority", "www.example.com", NULL)); /* C.3.2 */ in_l = hexbuf(in, sizeof in, "8286 84be 5808 6e6f 2d63 6163 6865"); r = decode(d, t, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, ":method", "GET", ":scheme", "http", ":path", "/", ":authority", "www.example.com", "cache-control", "no-cache", NULL)); /* C.3.3 */ in_l = hexbuf(in, sizeof in, "8287 85bf 400a 6375 7374 6f6d 2d6b 6579" "0c63 7573 746f 6d2d 7661 6c75 65"); r = decode(d, t, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, ":method", "GET", ":scheme", "https", ":path", "/index.html", ":authority", "www.example.com", "custom-key", "custom-value", NULL)); VHT_Fini(t); } static void test_c4(unsigned mode) { struct vht_table t[1]; struct vhd_decode d[1]; uint8_t in[256]; size_t in_l; char out[256]; enum vhd_ret_e r; /* See RFC 7541 Appendix C.4 */ AZ(VHT_Init(t, 4096)); VHD_Init(d); /* C.4.1 */ in_l = hexbuf(in, sizeof in, "8286 8441 8cf1 e3c2 e5f2 3a6b a0ab 90f4 ff"); r = decode(d, t, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, ":method", "GET", ":scheme", "http", ":path", "/", ":authority", "www.example.com", NULL)); /* C.4.2 */ in_l = hexbuf(in, sizeof in, "8286 84be 5886 a8eb 1064 9cbf"); r = decode(d, t, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, ":method", "GET", ":scheme", "http", ":path", "/", ":authority", "www.example.com", "cache-control", "no-cache", NULL)); /* C.4.3 */ in_l = hexbuf(in, sizeof in, "8287 85bf 4088 25a8 49e9 5ba9 7d7f 8925" "a849 e95b b8e8 b4bf"); r = decode(d, t, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, ":method", "GET", ":scheme", "https", ":path", "/index.html", ":authority", "www.example.com", "custom-key", "custom-value", NULL)); VHT_Fini(t); } static void test_c5(unsigned mode) { struct vht_table t[1]; struct vhd_decode d[1]; uint8_t in[256]; size_t in_l; char out[256]; enum vhd_ret_e r; /* See RFC 7541 Appendix C.5 */ AZ(VHT_Init(t, 256)); VHD_Init(d); /* C.5.1 */ in_l = hexbuf(in, sizeof in, "4803 3330 3258 0770 7269 7661 7465 611d" "4d6f 6e2c 2032 3120 4f63 7420 3230 3133" "2032 303a 3133 3a32 3120 474d 546e 1768" "7474 7073 3a2f 2f77 7777 2e65 7861 6d70" "6c65 2e63 6f6d"); r = decode(d, t, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, ":status", "302", "cache-control", "private", "date", "Mon, 21 Oct 2013 20:13:21 GMT", "location", "https://www.example.com", NULL)); /* C.5.2 */ in_l = hexbuf(in, sizeof in, "4803 3330 37c1 c0bf"); r = decode(d, t, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, ":status", "307", "cache-control", "private", "date", "Mon, 21 Oct 2013 20:13:21 GMT", "location", "https://www.example.com", NULL)); /* C.5.3 */ in_l = 
hexbuf(in, sizeof in, "88c1 611d 4d6f 6e2c 2032 3120 4f63 7420" "3230 3133 2032 303a 3133 3a32 3220 474d" "54c0 5a04 677a 6970 7738 666f 6f3d 4153" "444a 4b48 514b 425a 584f 5157 454f 5049" "5541 5851 5745 4f49 553b 206d 6178 2d61" "6765 3d33 3630 303b 2076 6572 7369 6f6e" "3d31"); r = decode(d, t, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, ":status", "200", "cache-control", "private", "date", "Mon, 21 Oct 2013 20:13:22 GMT", "location", "https://www.example.com", "content-encoding", "gzip", "set-cookie", "foo=ASDJKHQKBZXOQWEOPIUAXQWEOIU; max-age=3600; version=1", NULL)); VHT_Fini(t); } static void test_c6(unsigned mode) { struct vht_table t[1]; struct vhd_decode d[1]; uint8_t in[256]; size_t in_l; char out[256]; enum vhd_ret_e r; /* See RFC 7541 Appendix C.6 */ AZ(VHT_Init(t, 256)); VHD_Init(d); /* C.6.1 */ in_l = hexbuf(in, sizeof in, "4882 6402 5885 aec3 771a 4b61 96d0 7abe" "9410 54d4 44a8 2005 9504 0b81 66e0 82a6" "2d1b ff6e 919d 29ad 1718 63c7 8f0b 97c8" "e9ae 82ae 43d3"); r = decode(d, t, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, ":status", "302", "cache-control", "private", "date", "Mon, 21 Oct 2013 20:13:21 GMT", "location", "https://www.example.com", NULL)); /* C.6.2 */ in_l = hexbuf(in, sizeof in, "4883 640e ffc1 c0bf"); r = decode(d, t, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, ":status", "307", "cache-control", "private", "date", "Mon, 21 Oct 2013 20:13:21 GMT", "location", "https://www.example.com", NULL)); /* C.6.3 */ in_l = hexbuf(in, sizeof in, "88c1 6196 d07a be94 1054 d444 a820 0595" "040b 8166 e084 a62d 1bff c05a 839b d9ab" "77ad 94e7 821d d7f2 e6c7 b335 dfdf cd5b" "3960 d5af 2708 7f36 72c1 ab27 0fb5 291f" "9587 3160 65c0 03ed 4ee5 b106 3d50 07"); r = decode(d, t, in, in_l, out, sizeof out, mode); CHECK_RET(r, VHD_OK); AZ(match(out, sizeof out, ":status", "200", "cache-control", "private", "date", "Mon, 21 Oct 2013 20:13:22 GMT", "location", "https://www.example.com", "content-encoding", "gzip", "set-cookie", "foo=ASDJKHQKBZXOQWEOPIUAXQWEOIU; max-age=3600; version=1", NULL)); VHT_Fini(t); } #define do_test(name) \ do { \ printf("Doing test: %s\n", #name); \ name(0); \ printf("Doing test: %s 1IN\n", #name); \ name(M_1IN); \ printf("Doing test: %s 1OUT\n", #name); \ name(M_1OUT); \ printf("Doing test: %s 1IN|1OUT\n", #name); \ name(M_1IN|M_1OUT); \ printf("Test finished: %s\n\n", #name); \ } while (0) int main(int argc, char **argv) { if (argc == 2 && !strcmp(argv[1], "-v")) verbose = 1; else if (argc != 1) { fprintf(stderr, "Usage: %s [-v]\n", argv[0]); return (1); } if (verbose) { printf("sizeof (struct vhd_int)=%zu\n", sizeof (struct vhd_int)); printf("sizeof (struct vhd_lookup)=%zu\n", sizeof (struct vhd_lookup)); printf("sizeof (struct vhd_raw)=%zu\n", sizeof (struct vhd_raw)); printf("sizeof (struct vhd_huffman)=%zu\n", sizeof (struct vhd_huffman)); printf("sizeof (struct vhd_decode)=%zu\n", sizeof (struct vhd_decode)); } do_test(test_integer); do_test(test_raw); do_test(test_huffman); do_test(test_c2); do_test(test_c3); do_test(test_c4); do_test(test_c5); do_test(test_c6); return (0); } #endif /* DECODE_TEST_DRIVER */ varnish-7.5.0/bin/varnishd/hpack/vhp_gen_hufdec.c000066400000000000000000000142711457605730600220030ustar00rootroot00000000000000/*- * Copyright (c) 2016 Varnish Software AS * All rights reserved. 
* * Martin Blix Grydeland * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include #include #include #include #include #include #include "vdef.h" #include "vas.h" static unsigned minlen = UINT_MAX; static unsigned maxlen = 0; static unsigned idx = 0; static const struct { uint32_t code; unsigned blen; char chr; } huf[] = { #define HPH(c, h, l) { h, l, (char)c }, #include "tbl/vhp_huffman.h" }; #define HUF_LEN (sizeof huf / sizeof huf[0]) struct tbl; struct cod { uint32_t bits; unsigned len; uint8_t chr; struct tbl *next; }; struct tbl { unsigned mask; uint32_t code; unsigned masked; unsigned n; unsigned idx; unsigned lvl; unsigned p_idx; struct cod e[]; }; static struct tbl * tbl_new(unsigned mask) { unsigned n; size_t size; struct tbl *tbl; assert(mask > 0); assert(mask <= 8); n = 1U << mask; size = sizeof (struct tbl) + n * sizeof (struct cod); tbl = calloc(1, size); AN(tbl); memset(tbl, 0, size); tbl->mask = mask; tbl->n = n; tbl->idx = idx; idx += n; return (tbl); } static void tbl_free(struct tbl* table) { for (unsigned i = 0; i < table->n; i++) { if (table->e[i].next != NULL) tbl_free(table->e[i].next); } free(table); } static void tbl_add(struct tbl *tbl, uint32_t code, unsigned codelen, uint32_t bits, unsigned len, char chr) { uint32_t b; unsigned u; AN(tbl); assert(codelen > 0); assert(codelen <= maxlen); assert(len > 0); assert(tbl->mask > 0); if (len > tbl->mask) { /* Does not belong in this table */ b = bits >> (len - tbl->mask); bits &= (1U << (len - tbl->mask)) - 1; if (tbl->e[b].next == NULL) { tbl->e[b].len = tbl->mask; tbl->e[b].next = tbl_new(len - tbl->mask); AN(tbl->e[b].next); tbl->e[b].next->masked = tbl->masked + tbl->mask; tbl->e[b].next->code = code; tbl->e[b].next->lvl = tbl->lvl + 1; tbl->e[b].next->p_idx = tbl->idx + b; } AN(tbl->e[b].next); tbl_add(tbl->e[b].next, code, codelen, bits, len - tbl->mask, chr); return; } bits = bits << (tbl->mask - len); for (u = 0; u < (1U << (tbl->mask - len)); u++) { b = bits | u; assert(b < tbl->n); AZ(tbl->e[b].len); AZ(tbl->e[b].next); tbl->e[b].len = len; tbl->e[b].chr = chr; } } static void print_lsb(uint32_t c, int l) { assert(l <= 32); while (l > 0) { if (c & (1U << (l - 1))) printf("1"); else printf("0"); l--; } } static void tbl_print(const struct tbl *tbl) { unsigned u; 
printf("/* Table: lvl=%u p_idx=%u n=%u mask=%u masked=%u */\n", tbl->lvl, tbl->p_idx, tbl->n, tbl->mask, tbl->masked); for (u = 0; u < tbl->n; u++) { printf("/* %3u: ", tbl->idx + u); printf("%*s", maxlen - tbl->mask - tbl->masked, ""); printf("%*s", tbl->mask - tbl->e[u].len, ""); if (tbl->masked > 0) { printf("("); print_lsb(tbl->code >> tbl->mask, tbl->masked); printf(") "); } else printf(" "); if (tbl->e[u].len < tbl->mask) { print_lsb(u >> (tbl->mask - tbl->e[u].len), tbl->e[u].len); printf(" ("); print_lsb(u, tbl->mask - tbl->e[u].len); printf(")"); } else { assert(tbl->e[u].len == tbl->mask); print_lsb(u, tbl->e[u].len); printf(" "); } printf("%*s", 3 - (tbl->mask - tbl->e[u].len), ""); printf(" */ "); if (tbl->e[u].next) { /* Jump to next table */ assert(tbl->e[u].next->idx - (tbl->idx + u) <= UINT8_MAX); printf("{ .len = %u, .jump = %u },", tbl->e[u].len, tbl->e[u].next->idx - (tbl->idx + u)); printf(" /* Next: %u */", tbl->e[u].next->idx); } else if (tbl->e[u].len) { printf("{ "); printf(".len = %u", tbl->e[u].len); printf(", .chr = (char)0x%02x", tbl->e[u].chr); if (isgraph(tbl->e[u].chr)) printf(" /* '%c' */", tbl->e[u].chr); if (u == 0) /* First in table, set mask */ printf(", .mask = %u", tbl->mask); printf(" },"); } else printf("{ .len = 0 }, /* invalid */"); printf("\n"); } for (u = 0; u < tbl->n; u++) if (tbl->e[u].next) tbl_print(tbl->e[u].next); } int main(int argc, const char **argv) { struct tbl *top; unsigned u; (void)argc; (void)argv; for (u = 0; u < HUF_LEN; u++) { maxlen = vmax(maxlen, huf[u].blen); minlen = vmin(minlen, huf[u].blen); } top = tbl_new(8); AN(top); for (u = 0; u < HUF_LEN; u++) tbl_add(top, huf[u].code, huf[u].blen, huf[u].code, huf[u].blen, huf[u].chr); printf("/*\n"); printf(" * NB: This file is machine generated, DO NOT EDIT!\n"); printf(" *\n"); printf(" */\n\n"); printf("#define HUFDEC_LEN %u\n", idx); printf("#define HUFDEC_MIN %u\n", minlen); printf("#define HUFDEC_MAX %u\n\n", maxlen); printf("static const struct {\n"); printf("\tuint8_t\tmask;\n"); printf("\tuint8_t\tlen;\n"); printf("\tuint8_t\tjump;\n"); printf("\tchar\tchr;\n"); printf("} hufdec[HUFDEC_LEN] = {\n"); tbl_print(top); printf("};\n"); tbl_free(top); return (0); } varnish-7.5.0/bin/varnishd/hpack/vhp_table.c000066400000000000000000000464121457605730600210050ustar00rootroot00000000000000/*- * Copyright (c) 2016 Varnish Software AS * All rights reserved. * * Author: Martin Blix Grydeland * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Layout: * * buf [ * * * ... * * * (padding bytes for pointer alignment) * * * * ... * * ] * */ #include "config.h" #include #include #include #include #include #include #include "vdef.h" #include "miniobj.h" #include "vas.h" #include "hpack/vhp.h" #define VHT_STATIC_MAX 61 struct vht_static { const char *name; unsigned namelen; const char *value; unsigned valuelen; }; static const struct vht_static static_table[] = { #define HPS(NUM, NAME, VAL) \ { NAME, sizeof NAME - 1, VAL, sizeof VAL - 1 }, #include "tbl/vhp_static.h" }; #define TBLSIZE(tbl) ((tbl)->size + (tbl)->n * VHT_ENTRY_SIZE) #define ENTRIES(buf, bufsize, n) \ (((struct vht_entry *)((uintptr_t)(buf) + bufsize)) - (n)) #define TBLENTRIES(tbl) ENTRIES((tbl)->buf, (tbl)->bufsize, (tbl)->n) #define TBLENTRY(tbl, i) (&TBLENTRIES(tbl)[(i)]) #define ENTRYLEN(e) ((e)->namelen + (e)->valuelen) #define ENTRYSIZE(e) (ENTRYLEN(e) + VHT_ENTRY_SIZE) /****************************************************************************/ /* Internal interface */ static void vht_newentry(struct vht_table *tbl) { struct vht_entry *e; assert(tbl->maxsize - TBLSIZE(tbl) >= VHT_ENTRY_SIZE); tbl->n++; e = TBLENTRY(tbl, 0); INIT_OBJ(e, VHT_ENTRY_MAGIC); e->offset = tbl->size; } /* Trim elements from the end until the table size is less than max. */ static void vht_trim(struct vht_table *tbl, ssize_t max) { unsigned u, v; int i; struct vht_entry *e; CHECK_OBJ_NOTNULL(tbl, VHT_TABLE_MAGIC); if (max < 0) max = 0; if (TBLSIZE(tbl) <= max) return; u = v = 0; for (i = tbl->n - 1; i >= 0; i--) { e = TBLENTRY(tbl, i); CHECK_OBJ_NOTNULL(e, VHT_ENTRY_MAGIC); if (TBLSIZE(tbl) - (u + v * VHT_ENTRY_SIZE) > max) { /* Trim entry */ assert(e->offset == u); u += ENTRYLEN(e); v++; FINI_OBJ(e); } else { /* Fixup offset */ assert(e->offset >= u); e->offset -= u; } } assert(v <= tbl->n); memmove(tbl->buf, tbl->buf + u, tbl->size - u); memmove(TBLENTRY(tbl, v), TBLENTRY(tbl, 0), (tbl->n - v) * sizeof *e); tbl->n -= v; tbl->size -= u; } /* Append len bytes from buf to entry 0 name. Asserts if no space. */ static void vht_appendname(struct vht_table *tbl, const char *buf, size_t len) { struct vht_entry *e; CHECK_OBJ_NOTNULL(tbl, VHT_TABLE_MAGIC); e = TBLENTRY(tbl, 0); CHECK_OBJ_NOTNULL(e, VHT_ENTRY_MAGIC); AZ(e->valuelen); /* Name needs to be set before value */ assert(TBLSIZE(tbl) + len <= tbl->maxsize); assert(e->offset + e->namelen == tbl->size); memcpy(tbl->buf + tbl->size, buf, len); e->namelen += len; tbl->size += len; } /* Append len bytes from buf to entry 0 value. Asserts if no space. 
*/ static void vht_appendvalue(struct vht_table *tbl, const char *buf, size_t len) { struct vht_entry *e; CHECK_OBJ_NOTNULL(tbl, VHT_TABLE_MAGIC); e = TBLENTRY(tbl, 0); CHECK_OBJ_NOTNULL(e, VHT_ENTRY_MAGIC); assert(TBLSIZE(tbl) + len <= tbl->maxsize); assert(e->offset + e->namelen + e->valuelen == tbl->size); memcpy(tbl->buf + tbl->size, buf, len); e->valuelen += len; tbl->size += len; } /****************************************************************************/ /* Public interface */ void VHT_NewEntry(struct vht_table *tbl) { CHECK_OBJ_NOTNULL(tbl, VHT_TABLE_MAGIC); assert(tbl->maxsize <= tbl->protomax); vht_trim(tbl, tbl->maxsize - VHT_ENTRY_SIZE); if (tbl->maxsize - TBLSIZE(tbl) < VHT_ENTRY_SIZE) { /* Maxsize less than one entry */ assert(tbl->maxsize < VHT_ENTRY_SIZE); return; } vht_newentry(tbl); } int VHT_NewEntry_Indexed(struct vht_table *tbl, unsigned idx) { struct vht_entry *e, *e2; unsigned l, l2, lentry, lname, u; uint8_t tmp[48]; /* Referenced name insertion. This has to be done carefully because the referenced name may be evicted as the result of the insertion (RFC 7541 section 4.4). */ assert(sizeof tmp >= VHT_ENTRY_SIZE); CHECK_OBJ_NOTNULL(tbl, VHT_TABLE_MAGIC); assert(tbl->maxsize <= tbl->protomax); if (idx == 0) return (-1); if (idx <= VHT_STATIC_MAX) { /* Static table reference */ VHT_NewEntry(tbl); VHT_AppendName(tbl, static_table[idx - 1].name, static_table[idx - 1].namelen); return (0); } idx -= VHT_STATIC_MAX + 1; if (idx >= tbl->n) return (-1); /* No such index */ assert(tbl->maxsize >= VHT_ENTRY_SIZE); e = TBLENTRY(tbl, idx); CHECK_OBJ_NOTNULL(e, VHT_ENTRY_MAGIC); /* Count how many elements we can safely evict to make space without evicting the referenced entry. */ l = 0; u = 0; while (tbl->n - 1 - u > idx && tbl->maxsize - TBLSIZE(tbl) + l < VHT_ENTRY_SIZE + e->namelen) { e2 = TBLENTRY(tbl, tbl->n - 1 - u); CHECK_OBJ_NOTNULL(e2, VHT_ENTRY_MAGIC); l += ENTRYSIZE(e2); u++; } vht_trim(tbl, TBLSIZE(tbl) - l); e += u; assert(e == TBLENTRY(tbl, idx)); if (tbl->maxsize - TBLSIZE(tbl) >= VHT_ENTRY_SIZE + e->namelen) { /* New entry with name fits */ vht_newentry(tbl); idx++; assert(e == TBLENTRY(tbl, idx)); CHECK_OBJ_NOTNULL(e, VHT_ENTRY_MAGIC); vht_appendname(tbl, tbl->buf + e->offset, e->namelen); return (0); } /* The tricky case: The referenced name will be evicted as a result of the insertion. Move the referenced element data to the end of the buffer through a local buffer. */ /* Remove the referenced element from the entry list */ assert(idx == tbl->n - 1); assert(e->offset == 0); lname = e->namelen; lentry = ENTRYLEN(e); FINI_OBJ(e); memmove(TBLENTRY(tbl, 1), TBLENTRY(tbl, 0), (tbl->n - 1) * sizeof *e); tbl->n--; /* Shift the referenced element last using a temporary buffer. */ l = 0; while (l < lentry) { l2 = lentry - l; if (l2 > sizeof tmp) l2 = sizeof tmp; memcpy(tmp, tbl->buf, l2); memmove(tbl->buf, tbl->buf + l2, tbl->size - l2); memcpy(tbl->buf + tbl->size - l2, tmp, l2); l += l2; } assert(l == lentry); tbl->size -= lentry; /* Fix up the existing element offsets */ for (u = 0; u < tbl->n; u++) { e = TBLENTRY(tbl, u); CHECK_OBJ_NOTNULL(e, VHT_ENTRY_MAGIC); assert(e->offset >= lentry); e->offset -= lentry; assert(e->offset + ENTRYLEN(e) <= tbl->size); } /* Insert the new entry with the name now present at the end of the buffer. 
*/ assert(tbl->maxsize - TBLSIZE(tbl) >= VHT_ENTRY_SIZE + lname); tbl->n++; e = TBLENTRY(tbl, 0); INIT_OBJ(e, VHT_ENTRY_MAGIC); e->offset = tbl->size; e->namelen = lname; tbl->size += lname; return (0); } void VHT_AppendName(struct vht_table *tbl, const char *buf, ssize_t len) { CHECK_OBJ_NOTNULL(tbl, VHT_TABLE_MAGIC); assert(tbl->maxsize <= tbl->protomax); if (len == 0) return; AN(buf); if (len < 0) len = strlen(buf); vht_trim(tbl, tbl->maxsize - len); if (tbl->n == 0) /* Max size exceeded */ return; vht_appendname(tbl, buf, len); } void VHT_AppendValue(struct vht_table *tbl, const char *buf, ssize_t len) { CHECK_OBJ_NOTNULL(tbl, VHT_TABLE_MAGIC); assert(tbl->maxsize <= tbl->protomax); if (len == 0) return; AN(buf); if (len < 0) len = strlen(buf); vht_trim(tbl, tbl->maxsize - len); if (tbl->n == 0) /* Max size exceeded */ return; vht_appendvalue(tbl, buf, len); } int VHT_SetMaxTableSize(struct vht_table *tbl, size_t maxsize) { CHECK_OBJ_NOTNULL(tbl, VHT_TABLE_MAGIC); assert(tbl->maxsize <= tbl->protomax); if (maxsize > tbl->protomax) return (-1); vht_trim(tbl, maxsize); assert(TBLSIZE(tbl) <= maxsize); tbl->maxsize = maxsize; return (0); } int VHT_SetProtoMax(struct vht_table *tbl, size_t protomax) { size_t bufsize; char *buf; CHECK_OBJ_NOTNULL(tbl, VHT_TABLE_MAGIC); assert(protomax <= UINT_MAX); assert(tbl->maxsize <= tbl->protomax); if (protomax == tbl->protomax) return (0); if (tbl->maxsize > protomax) tbl->maxsize = protomax; vht_trim(tbl, tbl->maxsize); assert(TBLSIZE(tbl) <= tbl->maxsize); bufsize = PRNDUP(protomax); if (bufsize == tbl->bufsize) { tbl->protomax = protomax; return (0); } buf = malloc(bufsize); if (buf == NULL) return (-1); if (tbl->buf != NULL) { memcpy(buf, tbl->buf, tbl->size); memcpy(ENTRIES(buf, bufsize, tbl->n), TBLENTRIES(tbl), sizeof (struct vht_entry) * tbl->n); free(tbl->buf); } tbl->buf = buf; tbl->bufsize = bufsize; tbl->protomax = protomax; return (0); } const char * VHT_LookupName(const struct vht_table *tbl, unsigned idx, size_t *plen) { struct vht_entry *e; AN(plen); *plen = 0; if (idx == 0) { return (NULL); } if (idx <= VHT_STATIC_MAX) { *plen = static_table[idx - 1].namelen; return (static_table[idx - 1].name); } if (tbl == NULL) return (NULL); CHECK_OBJ_NOTNULL(tbl, VHT_TABLE_MAGIC); idx -= VHT_STATIC_MAX + 1; if (idx >= tbl->n) return (NULL); e = TBLENTRY(tbl, idx); CHECK_OBJ_NOTNULL(e, VHT_ENTRY_MAGIC); assert(e->offset + e->namelen <= tbl->size); *plen = e->namelen; return (tbl->buf + e->offset); } const char * VHT_LookupValue(const struct vht_table *tbl, unsigned idx, size_t *plen) { struct vht_entry *e; AN(plen); *plen = 0; if (idx == 0) { return (NULL); } if (idx <= VHT_STATIC_MAX) { *plen = static_table[idx - 1].valuelen; return (static_table[idx - 1].value); } if (tbl == NULL) return (NULL); CHECK_OBJ_NOTNULL(tbl, VHT_TABLE_MAGIC); idx -= VHT_STATIC_MAX + 1; if (idx >= tbl->n) return (NULL); e = TBLENTRY(tbl, idx); CHECK_OBJ_NOTNULL(e, VHT_ENTRY_MAGIC); assert(e->offset + e->namelen + e->valuelen <= tbl->size); *plen = e->valuelen; return (tbl->buf + e->offset + e->namelen); } int VHT_Init(struct vht_table *tbl, size_t protomax) { int r; assert(sizeof (struct vht_entry) <= VHT_ENTRY_SIZE); AN(tbl); if (protomax > UINT_MAX) return (-1); INIT_OBJ(tbl, VHT_TABLE_MAGIC); r = VHT_SetProtoMax(tbl, protomax); if (r) { FINI_OBJ(tbl); return (r); } tbl->maxsize = tbl->protomax; return (0); } void VHT_Fini(struct vht_table *tbl) { CHECK_OBJ_NOTNULL(tbl, VHT_TABLE_MAGIC); free(tbl->buf); memset(tbl, 0, sizeof *tbl); } 
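/*
 * Usage sketch (illustration only, mirroring the test driver below; the
 * function name and the header name/value strings are made up for the
 * example): how a caller drives the public VHT interface defined above.
 *
 *	static void
 *	vht_example(void)
 *	{
 *		struct vht_table t[1];
 *		const char *p;
 *		size_t l;
 *
 *		AZ(VHT_Init(t, 4096));			// protocol maximum of 4096 bytes
 *		VHT_NewEntry(t);			// start the newest dynamic entry
 *		VHT_AppendName(t, "x-example", -1);	// -1: take strlen() of the argument
 *		VHT_AppendValue(t, "yes", -1);
 *
 *		// Dynamic entries are addressed after the 61 static ones,
 *		// newest first, so index 62 is the entry just inserted.
 *		p = VHT_LookupName(t, VHT_STATIC_MAX + 1, &l);
 *		AN(p);
 *		assert(l == strlen("x-example"));
 *
 *		VHT_Fini(t);
 *	}
 */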
/****************************************************************************/ /* Internal interface */ #ifdef TABLE_TEST_DRIVER #define VHT_DYNAMIC (VHT_STATIC_MAX + 1) static int verbose = 0; static int vht_matchtable(struct vht_table *tbl, ...) { va_list ap; unsigned u; int r; const char *a, *b; const struct vht_entry *e; CHECK_OBJ_NOTNULL(tbl, VHT_TABLE_MAGIC); va_start(ap, tbl); r = 0; for (u = 0; u < tbl->n; u++) { a = NULL; b = NULL; if (!r) { a = va_arg(ap, const char *); if (a == NULL) { printf("Too many elements in table\n"); r = -1; } else { b = va_arg(ap, const char *); AN(b); } } e = TBLENTRY(tbl, u); CHECK_OBJ_NOTNULL(e, VHT_ENTRY_MAGIC); if (a) { AN(b); if (e->namelen != strlen(a) || strncmp(a, tbl->buf + e->offset, e->namelen)) r = -1; if (e->valuelen != strlen(b) || strncmp(b, tbl->buf + e->offset + e->namelen, e->valuelen)) r = -1; } if (verbose || r) printf("%2u: @%03u (\"%.*s\", \"%.*s\")", u, e->offset, (int)e->namelen, tbl->buf + e->offset, (int)e->valuelen, tbl->buf + e->offset +e->namelen); if (a && (verbose || r)) { AN(b); printf(" %s (\"%s\", \"%s\")", (r ? "!=" : "=="), a, b); } if (verbose || r) printf("\n"); } if (!r) { a = va_arg(ap, const char *); if (a != NULL) { printf("Missing elements in table\n"); r = -1; } } va_end(ap); if (verbose || r) printf("n=%d, size=%u, tblsz=%u, max=%u, pmax=%u, bufsz=%u\n", tbl->n, tbl->size, TBLSIZE(tbl), tbl->maxsize, tbl->protomax, tbl->bufsize); return (r); } static void test_1(void) { /* Static table */ const char *p; size_t l; if (verbose) printf("Test 1:\n"); /* 1: ':authority' -> '' */ p = VHT_LookupName(NULL, 1, &l); assert(l == strlen(":authority")); AZ(strncmp(p, ":authority", strlen(":authority"))); p = VHT_LookupValue(NULL, 1, &l); AN(p); AZ(l); /* 5: ':path' -> '/index.html' */ p = VHT_LookupValue(NULL, 5, &l); assert(l == strlen("/index.html")); AZ(strncmp(p, "/index.html", strlen("/index.html"))); /* 61: 'www-authenticate' -> '' */ p = VHT_LookupName(NULL, 61, &l); assert(l == strlen("www-authenticate")); AZ(strncmp(p, "www-authenticate", strlen("www-authenticate"))); p = VHT_LookupValue(NULL, 61, &l); AN(p); AZ(l); /* Test zero index */ AZ(VHT_LookupName(NULL, 0, &l)); AZ(l); AZ(VHT_LookupValue(NULL, 0, &l)); AZ(l); printf("Test 1 finished successfully\n"); if (verbose) printf("\n"); } static void test_2(void) { /* Test filling and overflow */ struct vht_table tbl[1]; if (verbose) printf("Test 2:\n"); AZ(VHT_Init(tbl, VHT_ENTRY_SIZE + 10)); VHT_NewEntry(tbl); VHT_AppendName(tbl, "12345", -1); VHT_AppendValue(tbl, "abcde", -1); assert(TBLSIZE(tbl) == VHT_ENTRY_SIZE + 10); /* 0: '12345' -> 'abcde' */ AZ(vht_matchtable(tbl, "12345", "abcde", NULL)); VHT_AppendValue(tbl, "f", -1); AZ(vht_matchtable(tbl, NULL)); VHT_NewEntry(tbl); AZ(vht_matchtable(tbl, "", "", NULL)); VHT_Fini(tbl); AZ(tbl->buf); printf("Test 2 finished successfully\n"); if (verbose) printf("\n"); } static void test_3(void) { /* Test change in proto max size and dynamic max size */ struct vht_table tbl[1]; if (verbose) printf("Test 3:\n"); AZ(VHT_Init(tbl, 4096)); VHT_NewEntry(tbl); VHT_AppendName(tbl, "a", -1); VHT_AppendValue(tbl, "12345", -1); VHT_NewEntry(tbl); VHT_AppendName(tbl, "b", -1); VHT_AppendValue(tbl, "67890", -1); VHT_NewEntry(tbl); VHT_AppendName(tbl, "c", -1); VHT_AppendValue(tbl, "abcde", -1); AZ(vht_matchtable(tbl, "c", "abcde", "b", "67890", "a", "12345", NULL)); /* Buffer reallocation */ AZ(VHT_SetProtoMax(tbl, VHT_ENTRY_SIZE * 2 + 6 * 2)); AZ(vht_matchtable(tbl, "c", "abcde", "b", "67890", NULL)); /* Increase table size beyond 
protomax */ assert(VHT_SetMaxTableSize(tbl, VHT_ENTRY_SIZE * 2 + 6 * 2 + 1) == -1); /* Decrease by one */ AZ(VHT_SetMaxTableSize(tbl, VHT_ENTRY_SIZE * 2 + 6 * 2 - 1)); AZ(vht_matchtable(tbl, "c", "abcde", NULL)); /* Increase by one back to protomax */ AZ(VHT_SetMaxTableSize(tbl, VHT_ENTRY_SIZE * 2 + 6 * 2)); AZ(vht_matchtable(tbl, "c", "abcde", NULL)); /* Add entry */ VHT_NewEntry(tbl); VHT_AppendName(tbl, "d", -1); VHT_AppendValue(tbl, "ABCDE", -1); AZ(vht_matchtable(tbl, "d", "ABCDE", "c", "abcde", NULL)); /* Set to zero */ AZ(VHT_SetMaxTableSize(tbl, 0)); AZ(vht_matchtable(tbl, NULL)); VHT_NewEntry(tbl); AZ(vht_matchtable(tbl, NULL)); /* Set protomax to zero */ AZ(VHT_SetProtoMax(tbl, 0)); AZ(vht_matchtable(tbl, NULL)); VHT_NewEntry(tbl); AZ(vht_matchtable(tbl, NULL)); VHT_Fini(tbl); printf("Test 3 finished successfully\n"); if (verbose) printf("\n"); } static void test_4(void) { /* Referenced name new entry */ struct vht_table tbl[1]; static const char longname[] = "1234567890" "1234567890" "1234567890" "1234567890" "1234567890" "1"; /* 51 bytes + VHT_ENTRY_SIZE == 83 */ if (verbose) printf("Test 4:\n"); AZ(VHT_Init(tbl, VHT_ENTRY_SIZE * 2 + 10 * 2)); /* 84 bytes */ /* New entry indexed from static table */ AZ(VHT_NewEntry_Indexed(tbl, 4)); VHT_AppendValue(tbl, "12345", -1); AZ(vht_matchtable(tbl, ":path", "12345", NULL)); /* New entry indexed from dynamic table */ AZ(VHT_NewEntry_Indexed(tbl, VHT_DYNAMIC + 0)); VHT_AppendValue(tbl, "abcde", -1); AZ(vht_matchtable(tbl, ":path", "abcde", ":path", "12345", NULL)); AZ(tbl->maxsize - TBLSIZE(tbl)); /* No space left */ /* New entry indexed from dynamic table, no overlap eviction */ AZ(VHT_NewEntry_Indexed(tbl, VHT_DYNAMIC + 0)); VHT_AppendValue(tbl, "ABCDE", -1); AZ(vht_matchtable(tbl, ":path", "ABCDE", ":path", "abcde", NULL)); /* New entry indexed from dynamic table, overlap eviction */ AZ(VHT_NewEntry_Indexed(tbl, VHT_DYNAMIC + 1)); AZ(vht_matchtable(tbl, ":path", "", ":path", "ABCDE", NULL)); /* New entry indexed from dynamic table, overlap eviction with overlap larger than the copy buffer size */ VHT_NewEntry(tbl); VHT_AppendName(tbl, longname, strlen(longname)); AZ(vht_matchtable(tbl, longname, "", NULL)); AZ(VHT_NewEntry_Indexed(tbl, VHT_DYNAMIC + 0)); VHT_AppendValue(tbl, "2", -1); AZ(vht_matchtable(tbl, longname, "2", NULL)); VHT_Fini(tbl); printf("Test 4 finished successfully\n"); if (verbose) printf("\n"); } static void test_5(void) { struct vht_table tbl[1]; char buf_a[3]; char buf_b[2]; int i; if (verbose) printf("Test 5:\n"); assert(sizeof buf_a > 0); for (i = 0; i < sizeof buf_a - 1; i++) buf_a[i] = 'a'; buf_a[i++] = '\0'; assert(sizeof buf_b > 0); for (i = 0; i < sizeof buf_b - 1; i++) buf_b[i] = 'b'; buf_b[i++] = '\0'; AZ(VHT_Init(tbl, 3 * ((sizeof buf_a - 1)+(sizeof buf_b - 1)+VHT_ENTRY_SIZE))); VHT_NewEntry(tbl); VHT_AppendName(tbl, buf_a, sizeof buf_a - 1); VHT_AppendValue(tbl, buf_b, sizeof buf_b - 1); AZ(vht_matchtable(tbl, buf_a, buf_b, NULL)); VHT_NewEntry(tbl); VHT_AppendName(tbl, buf_a, sizeof buf_a - 1); VHT_AppendValue(tbl, buf_b, sizeof buf_b - 1); AZ(vht_matchtable(tbl, buf_a, buf_b, buf_a, buf_b, NULL)); VHT_NewEntry(tbl); VHT_AppendName(tbl, buf_a, sizeof buf_a - 1); VHT_AppendValue(tbl, buf_b, sizeof buf_b - 1); AZ(vht_matchtable(tbl, buf_a, buf_b, buf_a, buf_b, buf_a, buf_b, NULL)); AZ(VHT_NewEntry_Indexed(tbl, VHT_DYNAMIC + 2)); VHT_AppendValue(tbl, buf_b, sizeof buf_b - 1); AZ(vht_matchtable(tbl, buf_a, buf_b, buf_a, buf_b, buf_a, buf_b, NULL)); VHT_Fini(tbl); printf("Test 5 finished successfully\n"); if 
(verbose) printf("\n"); } int main(int argc, char **argv) { if (argc == 2 && !strcmp(argv[1], "-v")) verbose = 1; else if (argc != 1) { fprintf(stderr, "Usage: %s [-v]\n", argv[0]); return (1); } if (verbose) { printf("sizeof (struct vht_table) == %zu\n", sizeof (struct vht_table)); printf("sizeof (struct vht_entry) == %zu\n", sizeof (struct vht_entry)); printf("\n"); } test_1(); test_2(); test_3(); test_4(); test_5(); return (0); } #endif /* TABLE_TEST_DRIVER */ varnish-7.5.0/bin/varnishd/http1/000077500000000000000000000000001457605730600166405ustar00rootroot00000000000000varnish-7.5.0/bin/varnishd/http1/cache_http1.h000066400000000000000000000052131457605730600211750ustar00rootroot00000000000000/*- * Copyright (c) 2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ struct VSC_vbe; /* cache_http1_fetch.c [V1F] */ int V1F_SendReq(struct worker *, struct busyobj *, uint64_t *ctr_hdrbytes, uint64_t *ctr_bodybytes); int V1F_FetchRespHdr(struct busyobj *); int V1F_Setup_Fetch(struct vfp_ctx *vfc, struct http_conn *htc); /* cache_http1_fsm.c [HTTP1] */ extern const int HTTP1_Req[3]; extern const int HTTP1_Resp[3]; /* cache_http1_deliver.c */ void V1D_Deliver(struct req *, struct boc *, int sendbody); /* cache_http1_pipe.c */ struct v1p_acct { uint64_t req; uint64_t bereq; uint64_t in; uint64_t out; }; int V1P_Enter(void); void V1P_Leave(void); stream_close_t V1P_Process(const struct req *, int fd, struct v1p_acct *, vtim_real deadline); void V1P_Charge(struct req *, const struct v1p_acct *, struct VSC_vbe *); /* cache_http1_line.c */ void V1L_Chunked(const struct worker *w); void V1L_EndChunk(const struct worker *w); void V1L_Open(struct worker *, struct ws *, int *fd, struct vsl_log *, vtim_real deadline, unsigned niov); stream_close_t V1L_Flush(const struct worker *w); stream_close_t V1L_Close(struct worker *w, uint64_t *cnt); size_t V1L_Write(const struct worker *w, const void *ptr, ssize_t len); extern const struct vdp * const VDP_v1l; varnish-7.5.0/bin/varnishd/http1/cache_http1_deliver.c000066400000000000000000000107341457605730600227060ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. 
* * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include "cache/cache_varnishd.h" #include "cache/cache_filter.h" #include "cache_http1.h" #include "vtcp.h" /*--------------------------------------------------------------------*/ static void v1d_error(struct req *req, struct boc *boc, const char *msg) { static const char r_500[] = "HTTP/1.1 500 Internal Server Error\r\n" "Server: Varnish\r\n" "Connection: close\r\n\r\n"; AZ(req->wrk->v1l); VSLbs(req->vsl, SLT_Error, TOSTRAND(msg)); VSLb(req->vsl, SLT_RespProtocol, "HTTP/1.1"); VSLb(req->vsl, SLT_RespStatus, "500"); VSLb(req->vsl, SLT_RespReason, "Internal Server Error"); req->wrk->stats->client_resp_500++; VTCP_Assert(write(req->sp->fd, r_500, sizeof r_500 - 1)); req->doclose = SC_TX_EOF; req->acct.resp_bodybytes += VDP_Close(req->vdc, req->objcore, boc); } /*-------------------------------------------------------------------- */ void v_matchproto_(vtr_deliver_f) V1D_Deliver(struct req *req, struct boc *boc, int sendbody) { struct vrt_ctx ctx[1]; int err = 0, chunked = 0; stream_close_t sc; uint64_t hdrbytes, bytes; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_ORNULL(boc, BOC_MAGIC); CHECK_OBJ_NOTNULL(req->objcore, OBJCORE_MAGIC); if (req->doclose == SC_NULL && http_HdrIs(req->resp, H_Connection, "close")) { req->doclose = SC_RESP_CLOSE; } else if (req->doclose != SC_NULL) { if (!http_HdrIs(req->resp, H_Connection, "close")) { http_Unset(req->resp, H_Connection); http_SetHeader(req->resp, "Connection: close"); } } else if (!http_GetHdr(req->resp, H_Connection, NULL)) http_SetHeader(req->resp, "Connection: keep-alive"); if (sendbody) { if (!http_GetHdr(req->resp, H_Content_Length, NULL)) { if (req->http->protover == 11) { chunked = 1; http_SetHeader(req->resp, "Transfer-Encoding: chunked"); } else { req->doclose = SC_TX_EOF; } } INIT_OBJ(ctx, VRT_CTX_MAGIC); VCL_Req2Ctx(ctx, req); if (VDP_Push(ctx, req->vdc, req->ws, VDP_v1l, NULL)) { v1d_error(req, boc, "Failure to push v1d processor"); return; } } if (WS_Overflowed(req->ws)) { v1d_error(req, boc, "workspace_client overflow"); return; } if (WS_Overflowed(req->sp->ws)) { v1d_error(req, boc, "workspace_session overflow"); return; } V1L_Open(req->wrk, req->wrk->aws, &req->sp->fd, req->vsl, req->t_prev + 
SESS_TMO(req->sp, send_timeout), cache_param->http1_iovs); if (WS_Overflowed(req->wrk->aws)) { v1d_error(req, boc, "workspace_thread overflow"); return; } hdrbytes = HTTP1_Write(req->wrk, req->resp, HTTP1_Resp); if (sendbody) { if (DO_DEBUG(DBG_FLUSH_HEAD)) (void)V1L_Flush(req->wrk); if (chunked) V1L_Chunked(req->wrk); err = VDP_DeliverObj(req->vdc, req->objcore); if (!err && chunked) V1L_EndChunk(req->wrk); } sc = V1L_Close(req->wrk, &bytes); AZ(req->wrk->v1l); req->acct.resp_hdrbytes += hdrbytes; req->acct.resp_bodybytes += VDP_Close(req->vdc, req->objcore, boc); if (sc == SC_NULL && err && req->sp->fd >= 0) sc = SC_REM_CLOSE; if (sc != SC_NULL) Req_Fail(req, sc); } varnish-7.5.0/bin/varnishd/http1/cache_http1_fetch.c000066400000000000000000000211041457605730600223360ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include "config.h" #include "cache/cache_varnishd.h" #include "cache/cache_filter.h" #include #include #include "vtcp.h" #include "vtim.h" #include "cache_http1.h" /*-------------------------------------------------------------------- * Pass the request body to the backend */ static int v_matchproto_(objiterate_f) vbf_iter_req_body(void *priv, unsigned flush, const void *ptr, ssize_t l) { struct busyobj *bo; CAST_OBJ_NOTNULL(bo, priv, BUSYOBJ_MAGIC); if (l > 0) { (void)V1L_Write(bo->wrk, ptr, l); if (flush && V1L_Flush(bo->wrk) != SC_NULL) return (-1); } return (0); } /*-------------------------------------------------------------------- * Send request to backend, including any (cached) req.body * * Return value: * 0 success * 1 failure */ int V1F_SendReq(struct worker *wrk, struct busyobj *bo, uint64_t *ctr_hdrbytes, uint64_t *ctr_bodybytes) { struct http *hp; stream_close_t sc; ssize_t i; uint64_t bytes, hdrbytes; struct http_conn *htc; int do_chunked = 0; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); CHECK_OBJ_NOTNULL(bo->htc, HTTP_CONN_MAGIC); CHECK_OBJ_ORNULL(bo->req, REQ_MAGIC); AN(ctr_hdrbytes); AN(ctr_bodybytes); htc = bo->htc; assert(*htc->rfd > 0); hp = bo->bereq; if (bo->req != NULL && !bo->req->req_body_status->length_known) { http_PrintfHeader(hp, "Transfer-Encoding: chunked"); do_chunked = 1; } VTCP_blocking(*htc->rfd); /* XXX: we should timeout instead */ /* XXX: need a send_timeout for the backend side */ V1L_Open(wrk, wrk->aws, htc->rfd, bo->vsl, nan(""), 0); hdrbytes = HTTP1_Write(wrk, hp, HTTP1_Req); /* Deal with any message-body the request might (still) have */ i = 0; if (bo->bereq_body != NULL) { AZ(bo->req); AZ(do_chunked); (void)ObjIterate(bo->wrk, bo->bereq_body, bo, vbf_iter_req_body, 0); } else if (bo->req != NULL && bo->req->req_body_status != BS_NONE) { if (do_chunked) V1L_Chunked(wrk); i = VRB_Iterate(wrk, bo->vsl, bo->req, vbf_iter_req_body, bo); if (bo->req->req_body_status != BS_CACHED) bo->no_retry = "req.body not cached"; if (bo->req->req_body_status == BS_ERROR) { /* * XXX: (#2332) We should test to see if the backend * XXX: sent us some headers explaining why. * XXX: This is hard because of the mistaken API split * XXX: between cache_backend.c and V1F, and therefore * XXX: Parked in this comment, pending renovation of * XXX: the VDI/backend-protocol API to allow non-H1 * XXX: backends. 
*/ assert(i < 0); VSLb(bo->vsl, SLT_FetchError, "req.body read error: %d (%s)", errno, VAS_errtxt(errno)); bo->req->doclose = SC_RX_BODY; } if (do_chunked) V1L_EndChunk(wrk); } sc = V1L_Close(wrk, &bytes); CHECK_OBJ_NOTNULL(sc, STREAM_CLOSE_MAGIC); /* Bytes accounting */ if (bytes < hdrbytes) *ctr_hdrbytes += bytes; else { *ctr_hdrbytes += hdrbytes; *ctr_bodybytes += bytes - hdrbytes; } if (sc == SC_NULL && i < 0) sc = SC_TX_ERROR; CHECK_OBJ_NOTNULL(sc, STREAM_CLOSE_MAGIC); if (sc != SC_NULL) { VSLb(bo->vsl, SLT_FetchError, "backend write error: %d (%s) (%s)", errno, VAS_errtxt(errno), sc->desc); VSLb_ts_busyobj(bo, "Bereq", W_TIM_real(wrk)); htc->doclose = sc; return (-1); } CHECK_OBJ_NOTNULL(sc, STREAM_CLOSE_MAGIC); VSLb_ts_busyobj(bo, "Bereq", W_TIM_real(wrk)); return (0); } int V1F_FetchRespHdr(struct busyobj *bo) { struct http *hp; int i; double t; struct http_conn *htc; enum htc_status_e hs; const char *name, *desc; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); CHECK_OBJ_NOTNULL(bo->htc, HTTP_CONN_MAGIC); CHECK_OBJ_ORNULL(bo->req, REQ_MAGIC); htc = bo->htc; assert(*htc->rfd > 0); VSC_C_main->backend_req++; /* Receive response */ HTC_RxInit(htc, bo->ws); CHECK_OBJ_NOTNULL(htc, HTTP_CONN_MAGIC); CHECK_OBJ_NOTNULL(bo->htc, HTTP_CONN_MAGIC); t = VTIM_real() + htc->first_byte_timeout; hs = HTC_RxStuff(htc, HTTP1_Complete, NULL, NULL, t, NAN, htc->between_bytes_timeout, cache_param->http_resp_size); if (hs != HTC_S_COMPLETE) { bo->acct.beresp_hdrbytes += htc->rxbuf_e - htc->rxbuf_b; switch (hs) { case HTC_S_JUNK: VSLb(bo->vsl, SLT_FetchError, "Received junk"); htc->doclose = SC_RX_JUNK; break; case HTC_S_CLOSE: VSLb(bo->vsl, SLT_FetchError, "backend closed"); htc->doclose = SC_RESP_CLOSE; break; case HTC_S_TIMEOUT: VSLb(bo->vsl, SLT_FetchError, "timeout"); htc->doclose = SC_RX_TIMEOUT; break; case HTC_S_OVERFLOW: VSLb(bo->vsl, SLT_FetchError, "overflow"); htc->doclose = SC_RX_OVERFLOW; break; case HTC_S_IDLE: VSLb(bo->vsl, SLT_FetchError, "first byte timeout"); htc->doclose = SC_RX_TIMEOUT; break; default: HTC_Status(hs, &name, &desc); VSLb(bo->vsl, SLT_FetchError, "HTC %s (%s)", name, desc); htc->doclose = SC_RX_BAD; break; } return (htc->rxbuf_e == htc->rxbuf_b ? 1 : -1); } VTCP_set_read_timeout(*htc->rfd, htc->between_bytes_timeout); hp = bo->beresp; i = HTTP1_DissectResponse(htc, hp, bo->bereq); bo->acct.beresp_hdrbytes += htc->rxbuf_e - htc->rxbuf_b; if (i) { VSLb(bo->vsl, SLT_FetchError, "http format error"); htc->doclose = SC_RX_JUNK; return (-1); } htc->doclose = http_DoConnection(hp, SC_RESP_CLOSE); /* * Figure out how the fetch is supposed to happen, before the * headers are adultered by VCL */ if (http_method_eq(http_GetMethod(bo->bereq), HEAD)) { /* * A HEAD request can never have a body in the reply, * no matter what the headers might say. * [RFC7231 4.3.2 p25] */ bo->wrk->stats->fetch_head++; bo->htc->body_status = BS_NONE; } else if (http_GetStatus(bo->beresp) <= 199) { /* * 1xx responses never have a body. * [RFC7230 3.3.2 p31] * ... but we should never see them. */ bo->wrk->stats->fetch_1xx++; bo->htc->body_status = BS_ERROR; } else if (http_IsStatus(bo->beresp, 204)) { /* * 204 is "No Content", obviously don't expect a body. 
* [RFC7230 3.3.1 p29 and 3.3.2 p31] */ bo->wrk->stats->fetch_204++; if ((http_GetHdr(bo->beresp, H_Content_Length, NULL) && bo->htc->content_length != 0) || http_GetHdr(bo->beresp, H_Transfer_Encoding, NULL)) bo->htc->body_status = BS_ERROR; else bo->htc->body_status = BS_NONE; } else if (http_IsStatus(bo->beresp, 304)) { /* * 304 is "Not Modified" it has no body. * [RFC7230 3.3 p28] */ bo->wrk->stats->fetch_304++; bo->htc->body_status = BS_NONE; } else if (bo->htc->body_status == BS_CHUNKED) { bo->wrk->stats->fetch_chunked++; } else if (bo->htc->body_status == BS_LENGTH) { assert(bo->htc->content_length > 0); bo->wrk->stats->fetch_length++; } else if (bo->htc->body_status == BS_EOF) { bo->wrk->stats->fetch_eof++; } else if (bo->htc->body_status == BS_ERROR) { bo->wrk->stats->fetch_bad++; } else if (bo->htc->body_status == BS_NONE) { bo->wrk->stats->fetch_none++; } else { WRONG("wrong bodystatus"); } assert(bo->vfc->resp == bo->beresp); if (bo->htc->body_status != BS_NONE && bo->htc->body_status != BS_ERROR) if (V1F_Setup_Fetch(bo->vfc, bo->htc)) { VSLb(bo->vsl, SLT_FetchError, "overflow"); htc->doclose = SC_RX_OVERFLOW; return (-1); } return (0); } varnish-7.5.0/bin/varnishd/http1/cache_http1_fsm.c000066400000000000000000000265001457605730600220370ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * This file contains the two central state machine for pushing HTTP1 * sessions through their states. 
* */ #include "config.h" #include #include #include "cache/cache_varnishd.h" #include "cache/cache_objhead.h" #include "cache/cache_transport.h" #include "cache_http1.h" #include "vtcp.h" static const char H1NEWREQ[] = "HTTP1::NewReq"; static const char H1PROC[] = "HTTP1::Proc"; static const char H1CLEANUP[] = "HTTP1::Cleanup"; static void HTTP1_Session(struct worker *, struct req *); static void http1_setstate(const struct sess *sp, const char *s) { uintptr_t p; p = (uintptr_t)s; AZ(SES_Set_proto_priv(sp, &p)); } static const char * http1_getstate(const struct sess *sp) { uintptr_t *p; AZ(SES_Get_proto_priv(sp, &p)); return ((const char *)*p); } /*-------------------------------------------------------------------- * Call protocol for this request */ static void v_matchproto_(task_func_t) http1_req(struct worker *wrk, void *arg) { struct req *req; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CAST_OBJ_NOTNULL(req, arg, REQ_MAGIC); THR_SetRequest(req); req->transport = &HTTP1_transport; assert(!WS_IsReserved(wrk->aws)); HTTP1_Session(wrk, req); AZ(wrk->v1l); WS_Assert(wrk->aws); THR_SetRequest(NULL); } /*-------------------------------------------------------------------- * Call protocol for this session (new or from waiter) * * When sessions are rescheduled from the waiter, a struct pool_task * is put on the reserved session workspace (for reasons of memory * conservation). This reservation is released as the first thing. * The acceptor and any other code which schedules this function * must obey this calling convention with a dummy reservation. */ static void v_matchproto_(task_func_t) http1_new_session(struct worker *wrk, void *arg) { struct sess *sp; struct req *req; uintptr_t *u; ssize_t sz; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CAST_OBJ_NOTNULL(req, arg, REQ_MAGIC); sp = req->sp; CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); HTC_RxInit(req->htc, req->ws); if (!SES_Reserve_proto_priv(sp, &u, &sz)) { /* Out of session workspace. Free the req, close the sess, * and do not set a new task func, which will exit the * worker thread. 
*/ VSL(SLT_Error, req->sp->vxid, "insufficient workspace (proto_priv)"); WS_Release(req->ws, 0); Req_Release(req); SES_Delete(sp, SC_RX_JUNK, NAN); return; } assert(sz == sizeof u); http1_setstate(sp, H1NEWREQ); wrk->task->func = http1_req; wrk->task->priv = req; } static void v_matchproto_(task_func_t) http1_unwait(struct worker *wrk, void *arg) { struct sess *sp; struct req *req; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CAST_OBJ_NOTNULL(sp, arg, SESS_MAGIC); WS_Release(sp->ws, 0); req = Req_New(sp); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); req->htc->rfd = &sp->fd; HTC_RxInit(req->htc, req->ws); http1_setstate(sp, H1NEWREQ); wrk->task->func = http1_req; wrk->task->priv = req; } static void v_matchproto_(vtr_req_body_t) http1_req_body(struct req *req) { CHECK_OBJ_NOTNULL(req, REQ_MAGIC); if (V1F_Setup_Fetch(req->vfc, req->htc) != 0) req->req_body_status = BS_ERROR; } static void http1_sess_panic(struct vsb *vsb, const struct sess *sp) { VSB_printf(vsb, "state = %s\n", http1_getstate(sp)); } static void http1_req_panic(struct vsb *vsb, const struct req *req) { VSB_printf(vsb, "state = %s\n", http1_getstate(req->sp)); } static void v_matchproto_(vtr_req_fail_f) http1_req_fail(struct req *req, stream_close_t reason) { assert(reason != SC_NULL); assert(req->sp->fd != 0); if (req->sp->fd > 0) SES_Close(req->sp, reason); } static int v_matchproto_(vtr_minimal_response_f) http1_minimal_response(struct req *req, uint16_t status) { ssize_t wl, l; char buf[80]; const char *reason; assert(status >= 100); assert(status < 1000); reason = http_Status2Reason(status, NULL); bprintf(buf, "HTTP/1.1 %03d %s\r\n\r\n", status, reason); l = strlen(buf); VSLb(req->vsl, SLT_RespProtocol, "HTTP/1.1"); VSLb(req->vsl, SLT_RespStatus, "%03d", status); VSLbs(req->vsl, SLT_RespReason, TOSTRAND(reason)); if (status >= 400) req->err_code = status; wl = write(req->sp->fd, buf, l); if (wl > 0) req->acct.resp_hdrbytes += wl; if (wl != l) { if (wl < 0) VTCP_Assert(1); if (req->doclose == SC_NULL) req->doclose = SC_REM_CLOSE; return (-1); } return (0); } struct transport HTTP1_transport = { .name = "HTTP/1", .proto_ident = "HTTP", .magic = TRANSPORT_MAGIC, .deliver = V1D_Deliver, .minimal_response = http1_minimal_response, .new_session = http1_new_session, .req_body = http1_req_body, .req_fail = http1_req_fail, .req_panic = http1_req_panic, .sess_panic = http1_sess_panic, .unwait = http1_unwait, }; /*---------------------------------------------------------------------- */ static inline void http1_abort(struct req *req, uint16_t status) { assert(req->doclose != SC_NULL); assert(status >= 400); (void)http1_minimal_response(req, status); } static int http1_dissect(struct worker *wrk, struct req *req) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->transport, TRANSPORT_MAGIC); /* Allocate a new vxid now that we know we'll need it. 
*/ assert(IS_NO_VXID(req->vsl->wid)); req->vsl->wid = VXID_Get(wrk, VSL_CLIENTMARKER); VSLb(req->vsl, SLT_Begin, "req %ju rxreq", VXID(req->sp->vxid)); VSL(SLT_Link, req->sp->vxid, "req %ju rxreq", VXID(req->vsl->wid)); AZ(isnan(req->t_first)); /* First byte timestamp set by http1_wait */ AZ(isnan(req->t_req)); /* Complete req rcvd set by http1_wait */ req->t_prev = req->t_first; VSLb_ts_req(req, "Start", req->t_first); VSLb_ts_req(req, "Req", req->t_req); HTTP_Setup(req->http, req->ws, req->vsl, SLT_ReqMethod); req->err_code = HTTP1_DissectRequest(req->htc, req->http); /* If we could not even parse the request, just close */ if (req->err_code != 0) { VSLb(req->vsl, SLT_HttpGarbage, "%.*s", (int)(req->htc->rxbuf_e - req->htc->rxbuf_b), req->htc->rxbuf_b); wrk->stats->client_req_400++; (void)Req_LogStart(wrk, req); req->doclose = SC_RX_JUNK; http1_abort(req, 400); return (-1); } AZ(req->req_body_status); req->req_body_status = req->htc->body_status; return (0); } /*---------------------------------------------------------------------- */ static void HTTP1_Session(struct worker *wrk, struct req *req) { enum htc_status_e hs; struct sess *sp; const char *st; int i; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); sp = req->sp; CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); /* * Whenever we come in from the acceptor or waiter, we need to set * blocking mode. It would be simpler to do this in the acceptor * or waiter, but we'd rather do the syscall in the worker thread. */ if (http1_getstate(sp) == H1NEWREQ) VTCP_blocking(sp->fd); req->transport = &HTTP1_transport; while (1) { st = http1_getstate(sp); if (st == H1NEWREQ) { CHECK_OBJ_NOTNULL(req->transport, TRANSPORT_MAGIC); assert(isnan(req->t_prev)); assert(isnan(req->t_req)); AZ(req->vcl); AZ(req->esi_level); AN(WS_Reservation(req->htc->ws)); hs = HTC_RxStuff(req->htc, HTTP1_Complete, &req->t_first, &req->t_req, sp->t_idle + SESS_TMO(sp, timeout_linger), sp->t_idle + SESS_TMO(sp, timeout_idle), NAN, cache_param->http_req_size); assert(!WS_IsReserved(req->htc->ws)); if (hs < HTC_S_EMPTY) { req->acct.req_hdrbytes += req->htc->rxbuf_e - req->htc->rxbuf_b; Req_AcctLogCharge(wrk->stats, req); Req_Release(req); SES_DeleteHS(sp, hs, NAN); return; } if (hs == HTC_S_IDLE) { wrk->stats->sess_herd++; Req_Release(req); SES_Wait(sp, &HTTP1_transport); return; } if (hs != HTC_S_COMPLETE) WRONG("htc_status (nonbad)"); if (H2_prism_complete(req->htc) == HTC_S_COMPLETE) { if (!FEATURE(FEATURE_HTTP2)) { SES_Close(req->sp, SC_REQ_HTTP20); assert(!WS_IsReserved(req->ws)); assert(!WS_IsReserved(wrk->aws)); http1_setstate(sp, H1CLEANUP); continue; } http1_setstate(sp, NULL); H2_PU_Sess(wrk, sp, req); return; } i = http1_dissect(wrk, req); req->acct.req_hdrbytes += req->htc->rxbuf_e - req->htc->rxbuf_b; if (i) { assert(req->doclose != SC_NULL); SES_Close(req->sp, req->doclose); assert(!WS_IsReserved(req->ws)); assert(!WS_IsReserved(wrk->aws)); http1_setstate(sp, H1CLEANUP); continue; } if (http_HdrIs(req->http, H_Upgrade, "h2c")) { if (!FEATURE(FEATURE_HTTP2)) { VSLb(req->vsl, SLT_Debug, "H2 upgrade attempt"); } else if (req->htc->body_status != BS_NONE) { VSLb(req->vsl, SLT_Debug, "H2 upgrade attempt has body"); } else { http1_setstate(sp, NULL); req->err_code = 2; H2_OU_Sess(wrk, sp, req); return; } } assert(req->req_step == R_STP_TRANSPORT); VCL_TaskEnter(req->privs); VCL_TaskEnter(req->top->privs); http1_setstate(sp, H1PROC); } else if (st == H1PROC) { req->task->func = http1_req; req->task->priv = req; CNT_Embark(wrk, req); if (CNT_Request(req) == 
REQ_FSM_DISEMBARK) return; wrk->stats->client_req++; AZ(req->top->vcl0); req->task->func = NULL; req->task->priv = NULL; assert(!WS_IsReserved(req->ws)); assert(!WS_IsReserved(wrk->aws)); http1_setstate(sp, H1CLEANUP); } else if (st == H1CLEANUP) { assert(!WS_IsReserved(wrk->aws)); assert(!WS_IsReserved(req->ws)); if (sp->fd >= 0 && req->doclose != SC_NULL) SES_Close(sp, req->doclose); if (sp->fd < 0) { wrk->stats->sess_closed++; Req_Cleanup(sp, wrk, req); Req_Release(req); SES_Delete(sp, SC_NULL, NAN); return; } Req_Cleanup(sp, wrk, req); HTC_RxInit(req->htc, req->ws); if (req->htc->rxbuf_e != req->htc->rxbuf_b) wrk->stats->sess_readahead++; if (FEATURE(FEATURE_BUSY_STATS_RATE)) WRK_AddStat(wrk); http1_setstate(sp, H1NEWREQ); } else { WRONG("Wrong H1 session state"); } } } varnish-7.5.0/bin/varnishd/http1/cache_http1_line.c000066400000000000000000000216311457605730600222010ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Write data to fd * We try to use writev() if possible in order to minimize number of * syscalls made and packets sent. It also just might allow the worker * thread to complete the request without holding stuff locked. * * XXX: chunked header (generated in Flush) and Tail (EndChunk) * are not accounted by means of the size_t returned. 
Obvious ideas: * - add size_t return value to Flush and EndChunk * - base accounting on (struct v1l).cnt */ #include "config.h" #include #include "cache/cache_varnishd.h" #include "cache/cache_filter.h" #include #include "cache_http1.h" #include "vtim.h" /*--------------------------------------------------------------------*/ struct v1l { unsigned magic; #define V1L_MAGIC 0x2f2142e5 int *wfd; stream_close_t werr; /* valid after V1L_Flush() */ struct iovec *iov; unsigned siov; unsigned niov; ssize_t liov; ssize_t cliov; unsigned ciov; /* Chunked header marker */ vtim_real deadline; struct vsl_log *vsl; ssize_t cnt; /* Flushed byte count */ struct ws *ws; uintptr_t ws_snap; }; /*-------------------------------------------------------------------- * for niov == 0, reserve the ws for max number of iovs * otherwise, up to niov */ void V1L_Open(struct worker *wrk, struct ws *ws, int *fd, struct vsl_log *vsl, vtim_real deadline, unsigned niov) { struct v1l *v1l; unsigned u; uintptr_t ws_snap; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AZ(wrk->v1l); if (WS_Overflowed(ws)) return; if (niov != 0) assert(niov >= 3); ws_snap = WS_Snapshot(ws); v1l = WS_Alloc(ws, sizeof *v1l); if (v1l == NULL) return; INIT_OBJ(v1l, V1L_MAGIC); v1l->ws = ws; v1l->ws_snap = ws_snap; u = WS_ReserveLumps(ws, sizeof(struct iovec)); if (u < 3) { /* Must have at least 3 in case of chunked encoding */ WS_Release(ws, 0); WS_MarkOverflow(ws); return; } if (u > IOV_MAX) u = IOV_MAX; if (niov != 0 && u > niov) u = niov; v1l->iov = WS_Reservation(ws); v1l->siov = u; v1l->ciov = u; v1l->wfd = fd; v1l->deadline = deadline; v1l->vsl = vsl; v1l->werr = SC_NULL; AZ(wrk->v1l); wrk->v1l = v1l; WS_Release(ws, u * sizeof(struct iovec)); } stream_close_t V1L_Close(struct worker *wrk, uint64_t *cnt) { struct v1l *v1l; struct ws *ws; uintptr_t ws_snap; stream_close_t sc; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AN(cnt); sc = V1L_Flush(wrk); TAKE_OBJ_NOTNULL(v1l, &wrk->v1l, V1L_MAGIC); *cnt = v1l->cnt; ws = v1l->ws; ws_snap = v1l->ws_snap; ZERO_OBJ(v1l, sizeof *v1l); WS_Rollback(ws, ws_snap); return (sc); } static void v1l_prune(struct v1l *v1l, size_t bytes) { ssize_t used = 0; ssize_t j, used_here; for (j = 0; j < v1l->niov; j++) { if (used + v1l->iov[j].iov_len > bytes) { /* Cutoff is in this iov */ used_here = bytes - used; v1l->iov[j].iov_len -= used_here; v1l->iov[j].iov_base = (char*)v1l->iov[j].iov_base + used_here; memmove(v1l->iov, &v1l->iov[j], (v1l->niov - j) * sizeof(struct iovec)); v1l->niov -= j; v1l->liov -= bytes; return; } used += v1l->iov[j].iov_len; } AZ(v1l->liov); } stream_close_t V1L_Flush(const struct worker *wrk) { ssize_t i; int err; struct v1l *v1l; char cbuf[32]; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); v1l = wrk->v1l; CHECK_OBJ_NOTNULL(v1l, V1L_MAGIC); CHECK_OBJ_NOTNULL(v1l->werr, STREAM_CLOSE_MAGIC); AN(v1l->wfd); assert(v1l->niov <= v1l->siov); if (*v1l->wfd >= 0 && v1l->liov > 0 && v1l->werr == SC_NULL) { if (v1l->ciov < v1l->siov && v1l->cliov > 0) { /* Add chunk head & tail */ bprintf(cbuf, "00%zx\r\n", v1l->cliov); i = strlen(cbuf); v1l->iov[v1l->ciov].iov_base = cbuf; v1l->iov[v1l->ciov].iov_len = i; v1l->liov += i; /* This is OK, because siov was --'ed */ v1l->iov[v1l->niov].iov_base = cbuf + i - 2; v1l->iov[v1l->niov++].iov_len = 2; v1l->liov += 2; } else if (v1l->ciov < v1l->siov) { v1l->iov[v1l->ciov].iov_base = cbuf; v1l->iov[v1l->ciov].iov_len = 0; } i = 0; err = 0; do { if (VTIM_real() > v1l->deadline) { VSLb(v1l->vsl, SLT_Debug, "Hit total send timeout, " "wrote = %zd/%zd; not retrying", i, v1l->liov); i = -1; 
break; } i = writev(*v1l->wfd, v1l->iov, v1l->niov); if (i > 0) v1l->cnt += i; if (i == v1l->liov) break; /* we hit a timeout, and some data may have been sent: * Remove sent data from start of I/O vector, then retry * * XXX: Add a "minimum sent data per timeout counter to * prevent slowloris attacks */ err = errno; if (err == EWOULDBLOCK) { VSLb(v1l->vsl, SLT_Debug, "Hit idle send timeout, " "wrote = %zd/%zd; retrying", i, v1l->liov); } if (i > 0) v1l_prune(v1l, i); } while (i > 0 || err == EWOULDBLOCK); if (i <= 0) { VSLb(v1l->vsl, SLT_Debug, "Write error, retval = %zd, len = %zd, errno = %s", i, v1l->liov, VAS_errtxt(err)); assert(v1l->werr == SC_NULL); if (err == EPIPE) v1l->werr = SC_REM_CLOSE; else v1l->werr = SC_TX_ERROR; errno = err; } } v1l->liov = 0; v1l->cliov = 0; v1l->niov = 0; if (v1l->ciov < v1l->siov) v1l->ciov = v1l->niov++; CHECK_OBJ_NOTNULL(v1l->werr, STREAM_CLOSE_MAGIC); return (v1l->werr); } size_t V1L_Write(const struct worker *wrk, const void *ptr, ssize_t len) { struct v1l *v1l; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); v1l = wrk->v1l; CHECK_OBJ_NOTNULL(v1l, V1L_MAGIC); AN(v1l->wfd); if (len == 0 || *v1l->wfd < 0) return (0); if (len == -1) len = strlen(ptr); assert(v1l->niov < v1l->siov); v1l->iov[v1l->niov].iov_base = TRUST_ME(ptr); v1l->iov[v1l->niov].iov_len = len; v1l->liov += len; v1l->niov++; v1l->cliov += len; if (v1l->niov >= v1l->siov) { (void)V1L_Flush(wrk); VSC_C_main->http1_iovs_flush++; } return (len); } void V1L_Chunked(const struct worker *wrk) { struct v1l *v1l; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); v1l = wrk->v1l; CHECK_OBJ_NOTNULL(v1l, V1L_MAGIC); assert(v1l->ciov == v1l->siov); assert(v1l->siov >= 3); /* * If there is no space for chunked header, a chunk of data and * a chunk tail, we might as well flush right away. */ if (v1l->niov + 3 >= v1l->siov) { (void)V1L_Flush(wrk); VSC_C_main->http1_iovs_flush++; } v1l->siov--; v1l->ciov = v1l->niov++; v1l->cliov = 0; assert(v1l->ciov < v1l->siov); assert(v1l->niov < v1l->siov); } /* * XXX: It is not worth the complexity to attempt to get the * XXX: end of chunk into the V1L_Flush(), because most of the time * XXX: if not always, that is a no-op anyway, because the calling * XXX: code already called V1L_Flush() to release local storage. */ void V1L_EndChunk(const struct worker *wrk) { struct v1l *v1l; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); v1l = wrk->v1l; CHECK_OBJ_NOTNULL(v1l, V1L_MAGIC); assert(v1l->ciov < v1l->siov); (void)V1L_Flush(wrk); v1l->siov++; v1l->ciov = v1l->siov; v1l->niov = 0; v1l->cliov = 0; (void)V1L_Write(wrk, "0\r\n\r\n", -1); } /*-------------------------------------------------------------------- * VDP using V1L */ static int v_matchproto_(vdp_bytes_f) v1l_bytes(struct vdp_ctx *vdc, enum vdp_action act, void **priv, const void *ptr, ssize_t len) { ssize_t wl = 0; CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); (void)priv; AZ(vdc->nxt); /* always at the bottom of the pile */ if (len > 0) wl = V1L_Write(vdc->wrk, ptr, len); if (act > VDP_NULL && V1L_Flush(vdc->wrk) != SC_NULL) return (-1); if (len != wl) return (-1); return (0); } const struct vdp * const VDP_v1l = &(struct vdp){ .name = "V1B", .bytes = v1l_bytes, }; varnish-7.5.0/bin/varnishd/http1/cache_http1_pipe.c000066400000000000000000000110041457605730600222000ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. 
* * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * XXX: charge bytes to srcaddr */ #include "config.h" #include "cache/cache_varnishd.h" #include #include #include "cache_http1.h" #include "vtcp.h" #include "vtim.h" #include "VSC_vbe.h" static struct lock pipestat_mtx; static int rdf(int fd0, int fd1, uint64_t *pcnt) { int i, j; char buf[BUFSIZ], *p; i = read(fd0, buf, sizeof buf); VTCP_Assert(i); if (i <= 0) return (1); for (p = buf; i > 0; i -= j, p += j) { j = write(fd1, p, i); VTCP_Assert(j); if (j <= 0) return (1); *pcnt += j; if (i != j) (void)usleep(100000); /* XXX hack */ } return (0); } int V1P_Enter(void) { int retval = 0; Lck_Lock(&pipestat_mtx); if (cache_param->pipe_sess_max == 0 || VSC_C_main->n_pipe < cache_param->pipe_sess_max) VSC_C_main->n_pipe++; else retval = -1; Lck_Unlock(&pipestat_mtx); return (retval); } void V1P_Leave(void) { Lck_Lock(&pipestat_mtx); assert(VSC_C_main->n_pipe > 0); VSC_C_main->n_pipe--; Lck_Unlock(&pipestat_mtx); } void V1P_Charge(struct req *req, const struct v1p_acct *a, struct VSC_vbe *b) { AN(b); VSLb(req->vsl, SLT_PipeAcct, "%ju %ju %ju %ju", (uintmax_t)a->req, (uintmax_t)a->bereq, (uintmax_t)a->in, (uintmax_t)a->out); Lck_Lock(&pipestat_mtx); VSC_C_main->s_pipe_hdrbytes += a->req; VSC_C_main->s_pipe_in += a->in; VSC_C_main->s_pipe_out += a->out; b->pipe_hdrbytes += a->bereq; b->pipe_out += a->in; b->pipe_in += a->out; Lck_Unlock(&pipestat_mtx); } stream_close_t V1P_Process(const struct req *req, int fd, struct v1p_acct *v1a, vtim_real deadline) { struct pollfd fds[2]; vtim_dur tmo, tmo_task; stream_close_t sc; int i, j; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->sp, SESS_MAGIC); assert(fd > 0); if (req->htc->pipeline_b != NULL) { j = write(fd, req->htc->pipeline_b, req->htc->pipeline_e - req->htc->pipeline_b); VTCP_Assert(j); if (j < 0) return (SC_OVERLOAD); req->htc->pipeline_b = NULL; req->htc->pipeline_e = NULL; v1a->in += j; } memset(fds, 0, sizeof fds); fds[0].fd = fd; fds[0].events = POLLIN; fds[1].fd = req->sp->fd; fds[1].events = POLLIN; sc = SC_TX_PIPE; while (fds[0].fd > -1 || fds[1].fd > -1) { fds[0].revents = 0; fds[1].revents = 0; tmo = cache_param->pipe_timeout; if (tmo == 0.) tmo = INFINITY; if (deadline > 0.) 
{ tmo_task = deadline - VTIM_real(); tmo = vmin(tmo, tmo_task); } i = poll(fds, 2, VTIM_poll_tmo(tmo)); if (i == 0) sc = SC_RX_TIMEOUT; if (i < 1) break; if (fds[0].revents && rdf(fd, req->sp->fd, &v1a->out)) { if (fds[1].fd == -1) break; (void)shutdown(fd, SHUT_RD); (void)shutdown(req->sp->fd, SHUT_WR); fds[0].events = 0; fds[0].fd = -1; } if (fds[1].revents && rdf(req->sp->fd, fd, &v1a->in)) { if (fds[0].fd == -1) break; (void)shutdown(req->sp->fd, SHUT_RD); (void)shutdown(fd, SHUT_WR); fds[1].events = 0; fds[1].fd = -1; } } return (sc); } /*--------------------------------------------------------------------*/ void V1P_Init(void) { Lck_New(&pipestat_mtx, lck_pipestat); } varnish-7.5.0/bin/varnishd/http1/cache_http1_proto.c000066400000000000000000000301641457605730600224160ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * HTTP protocol requests * * The trouble with the "until magic sequence" design of HTTP protocol messages * is that either you have to read a single character at a time, which is * inefficient, or you risk reading too much, and pre-read some of the object, * or even the next pipelined request, which follows the one you want. * * HTC reads a HTTP protocol header into a workspace, subject to limits, * and stops when we see the magic marker (double [CR]NL), and if we overshoot, * it keeps track of the "pipelined" data. * * We use this both for client and backend connections. 
*/ #include "config.h" #include "cache/cache_varnishd.h" #include "cache/cache_transport.h" #include "cache_http1.h" #include "vct.h" const int HTTP1_Req[3] = { HTTP_HDR_METHOD, HTTP_HDR_URL, HTTP_HDR_PROTO }; const int HTTP1_Resp[3] = { HTTP_HDR_PROTO, HTTP_HDR_STATUS, HTTP_HDR_REASON }; /*-------------------------------------------------------------------- * Check if we have a complete HTTP request or response yet */ enum htc_status_e v_matchproto_(htc_complete_f) HTTP1_Complete(struct http_conn *htc) { char *p; enum htc_status_e retval; CHECK_OBJ_NOTNULL(htc, HTTP_CONN_MAGIC); AN(WS_Reservation(htc->ws)); assert(pdiff(htc->rxbuf_b, htc->rxbuf_e) <= WS_ReservationSize(htc->ws)); /* Skip any leading white space */ for (p = htc->rxbuf_b ; p < htc->rxbuf_e && vct_islws(*p); p++) continue; if (p == htc->rxbuf_e) return (HTC_S_EMPTY); /* Do not return a partial H2 connection preface */ retval = H2_prism_complete(htc); if (retval != HTC_S_JUNK) return (retval); /* * Here we just look for NL[CR]NL to see that reception * is completed. More stringent validation happens later. */ while (1) { p = memchr(p, '\n', htc->rxbuf_e - p); if (p == NULL) return (HTC_S_MORE); if (++p == htc->rxbuf_e) return (HTC_S_MORE); if (*p == '\r' && ++p == htc->rxbuf_e) return (HTC_S_MORE); if (*p == '\n') break; } return (HTC_S_COMPLETE); } /*-------------------------------------------------------------------- * Dissect the headers of the HTTP protocol message. * Detect conditionals (headers which start with '^[Ii][Ff]-') */ static uint16_t http1_dissect_hdrs(struct http *hp, char *p, struct http_conn *htc, unsigned maxhdr) { char *q, *r, *s; int i; assert(p > htc->rxbuf_b); assert(p <= htc->rxbuf_e); hp->nhd = HTTP_HDR_FIRST; r = NULL; /* For FlexeLint */ for (; p < htc->rxbuf_e; p = r) { /* Find end of next header */ q = r = p; if (vct_iscrlf(p, htc->rxbuf_e)) break; while (r < htc->rxbuf_e) { if (vct_ishdrval(*r)) { r++; continue; } i = vct_iscrlf(r, htc->rxbuf_e); if (i == 0) { VSLb(hp->vsl, SLT_BogoHeader, "Header has ctrl char 0x%02x", *r); return (400); } q = r; r += i; assert(r <= htc->rxbuf_e); if (r == htc->rxbuf_e) break; if (vct_iscrlf(r, htc->rxbuf_e)) break; /* If line does not continue: got it. */ if (!vct_issp(*r)) break; /* Clear line continuation LWS to spaces */ while (q < r) *q++ = ' '; while (q < htc->rxbuf_e && vct_issp(*q)) *q++ = ' '; } /* Empty header = end of headers */ if (p == q) break; if (q - p > maxhdr) { VSLb(hp->vsl, SLT_BogoHeader, "Header too long: %.*s", (int)(q - p > 20 ? 20 : q - p), p); return (400); } if (vct_islws(*p)) { VSLb(hp->vsl, SLT_BogoHeader, "1st header has white space: %.*s", (int)(q - p > 20 ? 20 : q - p), p); return (400); } if (*p == ':') { VSLb(hp->vsl, SLT_BogoHeader, "Missing header name: %.*s", (int)(q - p > 20 ? 20 : q - p), p); return (400); } while (q > p && vct_issp(q[-1])) q--; *q = '\0'; for (s = p; *s != ':' && s < q; s++) { if (!vct_istchar(*s)) { VSLb(hp->vsl, SLT_BogoHeader, "Illegal char 0x%02x in header name", *s); return (400); } } if (*s != ':') { VSLb(hp->vsl, SLT_BogoHeader, "Header without ':' %.*s", (int)(q - p > 20 ? 20 : q - p), p); return (400); } if (hp->nhd < hp->shd) { hp->hdf[hp->nhd] = 0; hp->hd[hp->nhd].b = p; hp->hd[hp->nhd].e = q; hp->nhd++; } else { VSLb(hp->vsl, SLT_BogoHeader, "Too many headers: %.*s", (int)(q - p > 20 ? 
20 : q - p), p); return (400); } } i = vct_iscrlf(p, htc->rxbuf_e); assert(i > 0); /* HTTP1_Complete guarantees this */ p += i; HTC_RxPipeline(htc, p); htc->rxbuf_e = p; return (0); } /*-------------------------------------------------------------------- * Deal with first line of HTTP protocol message. */ static uint16_t http1_splitline(struct http *hp, struct http_conn *htc, const int *hf, unsigned maxhdr) { char *p, *q; int i; assert(hf == HTTP1_Req || hf == HTTP1_Resp); CHECK_OBJ_NOTNULL(htc, HTTP_CONN_MAGIC); CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); assert(htc->rxbuf_e >= htc->rxbuf_b); AZ(hp->hd[hf[0]].b); AZ(hp->hd[hf[1]].b); AZ(hp->hd[hf[2]].b); /* Skip leading LWS */ for (p = htc->rxbuf_b ; vct_islws(*p); p++) continue; hp->hd[hf[0]].b = p; /* First field cannot contain SP or CTL */ for (; !vct_issp(*p); p++) { if (vct_isctl(*p)) return (400); } hp->hd[hf[0]].e = p; assert(Tlen(hp->hd[hf[0]])); *p++ = '\0'; /* Skip SP */ for (; vct_issp(*p); p++) { if (vct_isctl(*p)) return (400); } hp->hd[hf[1]].b = p; /* Second field cannot contain LWS or CTL */ for (; !vct_islws(*p); p++) { if (vct_isctl(*p)) return (400); } hp->hd[hf[1]].e = p; if (!Tlen(hp->hd[hf[1]])) return (400); /* Skip SP */ q = p; for (; vct_issp(*p); p++) { if (vct_isctl(*p)) return (400); } if (q < p) *q = '\0'; /* Nul guard for the 2nd field. If q == p * (the third optional field is not * present), the last nul guard will * cover this field. */ /* Third field is optional and cannot contain CTL except TAB */ q = p; for (; p < htc->rxbuf_e && !vct_iscrlf(p, htc->rxbuf_e); p++) { if (vct_isctl(*p) && !vct_issp(*p)) return (400); } if (p > q) { hp->hd[hf[2]].b = q; hp->hd[hf[2]].e = p; } /* Skip CRLF */ i = vct_iscrlf(p, htc->rxbuf_e); if (!i) return (400); *p = '\0'; p += i; http_Proto(hp); return (http1_dissect_hdrs(hp, p, htc, maxhdr)); } /*--------------------------------------------------------------------*/ static body_status_t http1_body_status(const struct http *hp, struct http_conn *htc, int request) { ssize_t cl; const char *b; CHECK_OBJ_NOTNULL(htc, HTTP_CONN_MAGIC); CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); htc->content_length = -1; cl = http_GetContentLength(hp); if (cl == -2) return (BS_ERROR); if (http_GetHdr(hp, H_Transfer_Encoding, &b)) { if (!http_coding_eq(b, chunked)) return (BS_ERROR); if (cl != -1) { /* * RFC7230 3.3.3 allows more lenient handling * but we're going to be strict. */ return (BS_ERROR); } return (BS_CHUNKED); } if (cl >= 0) { htc->content_length = cl; return (cl == 0 ? BS_NONE : BS_LENGTH); } if (hp->protover == 11 && request) return (BS_NONE); if (http_HdrIs(hp, H_Connection, "keep-alive")) { /* * Keep alive with neither TE=Chunked or C-Len is impossible. * We assume a zero length body. */ return (BS_NONE); } /* * Fall back to EOF transfer. 
*/ return (BS_EOF); } /*--------------------------------------------------------------------*/ uint16_t HTTP1_DissectRequest(struct http_conn *htc, struct http *hp) { uint16_t retval; const char *p; const char *b = NULL, *e; CHECK_OBJ_NOTNULL(htc, HTTP_CONN_MAGIC); CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); retval = http1_splitline(hp, htc, HTTP1_Req, cache_param->http_req_hdr_len); if (retval != 0) return (retval); if (hp->protover < 10 || hp->protover > 11) return (400); /* RFC2616, section 5.2, point 1 */ if (http_scheme_at(hp->hd[HTTP_HDR_URL].b, http)) b = hp->hd[HTTP_HDR_URL].b + 7; else if (FEATURE(FEATURE_HTTPS_SCHEME) && http_scheme_at(hp->hd[HTTP_HDR_URL].b, https)) b = hp->hd[HTTP_HDR_URL].b + 8; if (b) { e = strchr(b, '/'); if (e) { http_Unset(hp, H_Host); http_PrintfHeader(hp, "Host: %.*s", (int)(e - b), b); hp->hd[HTTP_HDR_URL].b = e; } } htc->body_status = http1_body_status(hp, htc, 1); if (htc->body_status == BS_ERROR) return (400); p = http_GetMethod(hp); AN(p); if (htc->body_status == BS_EOF) { assert(hp->protover == 10); /* RFC1945 8.3 p32 and D.1.1 p58 */ if (http_method_eq(p, POST) || http_method_eq(p, PUT)) return (400); htc->body_status = BS_NONE; } /* HEAD with a body is a hard error */ if (htc->body_status != BS_NONE && http_method_eq(p, HEAD)) return (400); return (retval); } /*--------------------------------------------------------------------*/ uint16_t HTTP1_DissectResponse(struct http_conn *htc, struct http *hp, const struct http *rhttp) { uint16_t retval = 0; const char *p; CHECK_OBJ_NOTNULL(htc, HTTP_CONN_MAGIC); CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); CHECK_OBJ_NOTNULL(rhttp, HTTP_MAGIC); if (http1_splitline(hp, htc, HTTP1_Resp, cache_param->http_resp_hdr_len)) retval = 503; if (retval == 0 && hp->protover < 10) retval = 503; if (retval == 0 && hp->protover > rhttp->protover) http_SetH(hp, HTTP_HDR_PROTO, rhttp->hd[HTTP_HDR_PROTO].b); if (retval == 0 && Tlen(hp->hd[HTTP_HDR_STATUS]) != 3) retval = 503; if (retval == 0) { p = hp->hd[HTTP_HDR_STATUS].b; if (p[0] >= '1' && p[0] <= '9' && p[1] >= '0' && p[1] <= '9' && p[2] >= '0' && p[2] <= '9') hp->status = 100 * (p[0] - '0') + 10 * (p[1] - '0') + p[2] - '0'; else retval = 503; } if (retval != 0) { VSLb(hp->vsl, SLT_HttpGarbage, "%.*s", (int)(htc->rxbuf_e - htc->rxbuf_b), htc->rxbuf_b); assert(retval >= 100 && retval <= 999); assert(retval == 503); http_SetStatus(hp, 503, NULL); } if (hp->hd[HTTP_HDR_REASON].b == NULL || !Tlen(hp->hd[HTTP_HDR_REASON])) { http_SetH(hp, HTTP_HDR_REASON, http_Status2Reason(hp->status, NULL)); } htc->body_status = http1_body_status(hp, htc, 0); return (retval); } /*--------------------------------------------------------------------*/ static unsigned http1_WrTxt(const struct worker *wrk, const txt *hh, const char *suf) { unsigned u; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AN(wrk); AN(hh); AN(hh->b); AN(hh->e); u = V1L_Write(wrk, hh->b, hh->e - hh->b); if (suf != NULL) u += V1L_Write(wrk, suf, -1); return (u); } unsigned HTTP1_Write(const struct worker *w, const struct http *hp, const int *hf) { unsigned u, l; assert(hf == HTTP1_Req || hf == HTTP1_Resp); AN(hp->hd[hf[0]].b); AN(hp->hd[hf[1]].b); AN(hp->hd[hf[2]].b); l = http1_WrTxt(w, &hp->hd[hf[0]], " "); l += http1_WrTxt(w, &hp->hd[hf[1]], " "); l += http1_WrTxt(w, &hp->hd[hf[2]], "\r\n"); for (u = HTTP_HDR_FIRST; u < hp->nhd; u++) l += http1_WrTxt(w, &hp->hd[u], "\r\n"); l += V1L_Write(w, "\r\n", -1); return (l); } 
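/*
 * Usage sketch, not part of the original file: HTTP1_Write() above only
 * queues iovecs through V1L_Write(); nothing reaches the socket until
 * V1L_Flush() or V1L_Close() runs.  The fragment below shows the typical
 * call order for a chunked HTTP/1.1 response.  It assumes a hypothetical
 * worker "wrk" on which V1L_Open() has already been called, a response
 * header structure "hp" and a request "req"; see V1D_Deliver() in
 * cache_http1_deliver.c for the real sequence.
 */
#if 0	/* illustrative only, never compiled */
	uint64_t bytes;
	stream_close_t sc;

	(void)HTTP1_Write(wrk, hp, HTTP1_Resp);	/* start-line + headers */
	V1L_Chunked(wrk);			/* reserve the chunk-header iovec */
	(void)V1L_Write(wrk, "hello", -1);	/* body fragment, -1 means strlen */
	V1L_EndChunk(wrk);			/* flush and emit "0\r\n\r\n" */
	sc = V1L_Close(wrk, &bytes);		/* final flush; bytes = total written */
	if (sc != SC_NULL)
		Req_Fail(req, sc);		/* as V1D_Deliver() does on error */
#endif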
varnish-7.5.0/bin/varnishd/http1/cache_http1_vfp.c000066400000000000000000000165451457605730600220550ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * HTTP1 Fetch Filters * * These filters are used for both req.body and beresp.body to handle * the HTTP/1 aspects (C-L/Chunked/EOF) * */ #include "config.h" #include #include "cache/cache_varnishd.h" #include "cache/cache_filter.h" #include "cache_http1.h" #include "vct.h" #include "vtcp.h" /*-------------------------------------------------------------------- * Read up to len bytes, returning pipelined data first. */ static ssize_t v1f_read(const struct vfp_ctx *vc, struct http_conn *htc, void *d, ssize_t len) { ssize_t l; unsigned char *p; ssize_t i = 0; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(htc, HTTP_CONN_MAGIC); assert(len > 0); l = 0; p = d; if (htc->pipeline_b) { l = htc->pipeline_e - htc->pipeline_b; assert(l > 0); l = vmin(l, len); memcpy(p, htc->pipeline_b, l); p += l; len -= l; htc->pipeline_b += l; if (htc->pipeline_b == htc->pipeline_e) htc->pipeline_b = htc->pipeline_e = NULL; } if (len > 0) { i = read(*htc->rfd, p, len); if (i < 0) { VTCP_Assert(i); VSLbs(vc->wrk->vsl, SLT_FetchError, TOSTRAND(VAS_errtxt(errno))); return (i); } if (i == 0) htc->doclose = SC_RESP_CLOSE; } return (i + l); } /*-------------------------------------------------------------------- * Read a chunked HTTP object. * * XXX: Reading one byte at a time is pretty pessimal. 
*/ static enum vfp_status v_matchproto_(vfp_pull_f) v1f_chunked_pull(struct vfp_ctx *vc, struct vfp_entry *vfe, void *ptr, ssize_t *lp) { struct http_conn *htc; char buf[20]; /* XXX: 20 is arbitrary */ char *q; unsigned u; uintmax_t cll; ssize_t cl, l, lr; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vfe, VFP_ENTRY_MAGIC); CAST_OBJ_NOTNULL(htc, vfe->priv1, HTTP_CONN_MAGIC); AN(ptr); AN(lp); l = *lp; *lp = 0; if (vfe->priv2 == -1) { /* Skip leading whitespace */ do { lr = v1f_read(vc, htc, buf, 1); if (lr <= 0) return (VFP_Error(vc, "chunked read err")); } while (vct_islws(buf[0])); if (!vct_ishex(buf[0])) return (VFP_Error(vc, "chunked header non-hex")); /* Collect hex digits, skipping leading zeros */ for (u = 1; u < sizeof buf; u++) { do { lr = v1f_read(vc, htc, buf + u, 1); if (lr <= 0) return (VFP_Error(vc, "chunked read err")); } while (u == 1 && buf[0] == '0' && buf[u] == '0'); if (!vct_ishex(buf[u])) break; } if (u >= sizeof buf) return (VFP_Error(vc, "chunked header too long")); /* Skip trailing white space */ while (vct_islws(buf[u]) && buf[u] != '\n') { lr = v1f_read(vc, htc, buf + u, 1); if (lr <= 0) return (VFP_Error(vc, "chunked read err")); } if (buf[u] != '\n') return (VFP_Error(vc, "chunked header no NL")); buf[u] = '\0'; cll = strtoumax(buf, &q, 16); if (q == NULL || *q != '\0') return (VFP_Error(vc, "chunked header number syntax")); cl = (ssize_t)cll; if (cl < 0 || (uintmax_t)cl != cll) return (VFP_Error(vc, "bogusly large chunk size")); vfe->priv2 = cl; } if (vfe->priv2 > 0) { if (vfe->priv2 < l) l = vfe->priv2; lr = v1f_read(vc, htc, ptr, l); if (lr <= 0) return (VFP_Error(vc, "chunked insufficient bytes")); *lp = lr; vfe->priv2 -= lr; if (vfe->priv2 == 0) vfe->priv2 = -1; return (VFP_OK); } AZ(vfe->priv2); if (v1f_read(vc, htc, buf, 1) <= 0) return (VFP_Error(vc, "chunked read err")); if (buf[0] == '\r' && v1f_read(vc, htc, buf, 1) <= 0) return (VFP_Error(vc, "chunked read err")); if (buf[0] != '\n') return (VFP_Error(vc, "chunked tail no NL")); return (VFP_END); } static const struct vfp v1f_chunked = { .name = "V1F_CHUNKED", .pull = v1f_chunked_pull, }; /*--------------------------------------------------------------------*/ static enum vfp_status v_matchproto_(vfp_pull_f) v1f_straight_pull(struct vfp_ctx *vc, struct vfp_entry *vfe, void *p, ssize_t *lp) { ssize_t l, lr; struct http_conn *htc; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vfe, VFP_ENTRY_MAGIC); CAST_OBJ_NOTNULL(htc, vfe->priv1, HTTP_CONN_MAGIC); AN(p); AN(lp); l = *lp; *lp = 0; if (vfe->priv2 == 0) // XXX: Optimize Content-Len: 0 out earlier return (VFP_END); l = vmin(l, vfe->priv2); lr = v1f_read(vc, htc, p, l); if (lr <= 0) return (VFP_Error(vc, "straight insufficient bytes")); *lp = lr; vfe->priv2 -= lr; if (vfe->priv2 == 0) return (VFP_END); return (VFP_OK); } static const struct vfp v1f_straight = { .name = "V1F_STRAIGHT", .pull = v1f_straight_pull, }; /*--------------------------------------------------------------------*/ static enum vfp_status v_matchproto_(vfp_pull_f) v1f_eof_pull(struct vfp_ctx *vc, struct vfp_entry *vfe, void *p, ssize_t *lp) { ssize_t l, lr; struct http_conn *htc; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vfe, VFP_ENTRY_MAGIC); CAST_OBJ_NOTNULL(htc, vfe->priv1, HTTP_CONN_MAGIC); AN(p); AN(lp); l = *lp; *lp = 0; lr = v1f_read(vc, htc, p, l); if (lr < 0) return (VFP_Error(vc, "eof socket fail")); if (lr == 0) return (VFP_END); *lp = lr; return (VFP_OK); } static const struct vfp v1f_eof = { .name = "V1F_EOF", .pull = v1f_eof_pull, }; 
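/*
 * Illustrative sketch, not part of the original file: the chunked filter
 * above consumes body framing of the form
 *
 *	<hex-size> CRLF <size bytes of data> CRLF ... 0 CRLF CRLF
 *
 * one byte at a time, because it must not read past the end of the
 * current chunk.  The hypothetical helper below shows only the size-line
 * parsing, assuming the chunk header is already NUL-terminated in memory;
 * it mirrors the strtoumax() and overflow checks of v1f_chunked_pull but
 * is not called from anywhere in the tree.
 */
#if 0	/* illustrative only, never compiled */
static ssize_t
example_chunk_size(const char *hdr)
{
	uintmax_t cll;
	ssize_t cl;
	char *q;

	cll = strtoumax(hdr, &q, 16);
	if (q == hdr || (*q != '\r' && *q != '\n'))
		return (-1);		/* not a chunk-size line */
	cl = (ssize_t)cll;
	if (cl < 0 || (uintmax_t)cl != cll)
		return (-1);		/* bogusly large chunk size */
	return (cl);
}
#endif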
/*-------------------------------------------------------------------- */ int V1F_Setup_Fetch(struct vfp_ctx *vfc, struct http_conn *htc) { struct vfp_entry *vfe; CHECK_OBJ_NOTNULL(vfc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(htc, HTTP_CONN_MAGIC); if (htc->body_status == BS_EOF) { assert(htc->content_length == -1); vfe = VFP_Push(vfc, &v1f_eof); if (vfe == NULL) return (ENOSPC); vfe->priv2 = 0; } else if (htc->body_status == BS_LENGTH) { assert(htc->content_length > 0); vfe = VFP_Push(vfc, &v1f_straight); if (vfe == NULL) return (ENOSPC); vfe->priv2 = htc->content_length; } else if (htc->body_status == BS_CHUNKED) { assert(htc->content_length == -1); vfe = VFP_Push(vfc, &v1f_chunked); if (vfe == NULL) return (ENOSPC); vfe->priv2 = -1; } else { WRONG("Wrong body_status"); } vfe->priv1 = htc; return (0); } varnish-7.5.0/bin/varnishd/http2/000077500000000000000000000000001457605730600166415ustar00rootroot00000000000000varnish-7.5.0/bin/varnishd/http2/cache_http2.h000066400000000000000000000160011457605730600211740ustar00rootroot00000000000000/*- * Copyright (c) 2016 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* */ struct h2_sess; struct h2_req; struct h2h_decode; struct h2_frame_s; #include "hpack/vhp.h" /**********************************************************************/ struct h2_error_s { const char *name; const char *txt; uint32_t val; int stream; int connection; int send_goaway; stream_close_t reason; }; typedef const struct h2_error_s *h2_error; #define H2_CUSTOM_ERRORS #define H2EC1(U,v,g,r,d) extern const struct h2_error_s H2CE_##U[1]; #define H2EC2(U,v,g,r,d) extern const struct h2_error_s H2SE_##U[1]; #define H2EC3(U,v,g,r,d) H2EC1(U,v,g,r,d) H2EC2(U,v,g,r,d) #define H2_ERROR(NAME, val, sc, goaway, reason, desc) \ H2EC##sc(NAME, val, goaway, reason, desc) #include "tbl/h2_error.h" #undef H2EC1 #undef H2EC2 #undef H2EC3 /**********************************************************************/ typedef h2_error h2_rxframe_f(struct worker *, struct h2_sess *, struct h2_req *); typedef const struct h2_frame_s *h2_frame; struct h2_frame_s { const char *name; h2_rxframe_f *rxfunc; uint8_t type; uint8_t flags; h2_error act_szero; h2_error act_snonzero; h2_error act_sidle; int respect_window; h2_frame continuation; uint8_t final_flags; int overhead; }; #define H2_FRAME(l,U,...) extern const struct h2_frame_s H2_F_##U[1]; #include "tbl/h2_frames.h" /**********************************************************************/ struct h2_settings { #define H2_SETTING(U,l,...) uint32_t l; #include "tbl/h2_settings.h" }; typedef void h2_setsetting_f(struct h2_settings*, uint32_t); struct h2_setting_s { const char *name; h2_setsetting_f *setfunc; uint16_t ident; uint32_t defval; uint32_t minval; uint32_t maxval; h2_error range_error; }; #define H2_SETTING(U,...) extern const struct h2_setting_s H2_SET_##U[1]; #include "tbl/h2_settings.h" /**********************************************************************/ enum h2_stream_e { H2_STREAM__DUMMY = -1, #define H2_STREAM(U,s,d) H2_S_##U, #include "tbl/h2_stream.h" }; #define H2_FRAME_FLAGS(l,u,v) extern const uint8_t H2FF_##u; #include "tbl/h2_frames.h" struct h2_rxbuf { unsigned magic; #define H2_RXBUF_MAGIC 0x73f9fb27 unsigned size; uint64_t tail; uint64_t head; struct stv_buffer *stvbuf; uint8_t data[]; }; struct h2_req { unsigned magic; #define H2_REQ_MAGIC 0x03411584 uint32_t stream; int scheduled; enum h2_stream_e state; int counted; struct h2_sess *h2sess; struct req *req; double t_send; double t_winupd; pthread_cond_t *cond; VTAILQ_ENTRY(h2_req) list; int64_t t_window; int64_t r_window; /* Where to wake this stream up */ struct worker *wrk; struct h2_rxbuf *rxbuf; VTAILQ_ENTRY(h2_req) tx_list; h2_error error; }; VTAILQ_HEAD(h2_req_s, h2_req); struct h2_sess { unsigned magic; #define H2_SESS_MAGIC 0xa16f7e4b pthread_t rxthr; pthread_cond_t *cond; pthread_cond_t winupd_cond[1]; struct sess *sess; int refcnt; int open_streams; int winup_streams; uint32_t highest_stream; int goaway; int bogosity; int do_sweep; struct h2_req *req0; struct h2_req_s streams; struct req *srq; struct ws *ws; struct http_conn *htc; struct vsl_log *vsl; struct h2h_decode *decode; struct vht_table dectbl[1]; unsigned rxf_len; unsigned rxf_type; unsigned rxf_flags; unsigned rxf_stream; uint8_t *rxf_data; struct h2_settings remote_settings; struct h2_settings local_settings; struct req *new_req; uint32_t goaway_last_stream; VTAILQ_HEAD(,h2_req) txqueue; h2_error error; // rst rate limit parameters, copied from h2_* parameters vtim_dur rapid_reset; int64_t rapid_reset_limit; vtim_dur rapid_reset_period; // rst rate limit state double rst_budget; vtim_real last_rst; }; #define 
ASSERT_RXTHR(h2) do {assert(h2->rxthr == pthread_self());} while(0) /* http2/cache_http2_panic.c */ #ifdef TRANSPORT_MAGIC vtr_sess_panic_f h2_sess_panic; #endif /* http2/cache_http2_deliver.c */ #ifdef TRANSPORT_MAGIC vtr_deliver_f h2_deliver; vtr_minimal_response_f h2_minimal_response; #endif /* TRANSPORT_MAGIC */ /* http2/cache_http2_hpack.c */ struct h2h_decode { unsigned magic; #define H2H_DECODE_MAGIC 0xd092bde4 unsigned has_scheme:1; h2_error error; enum vhd_ret_e vhd_ret; char *out; char *reset; size_t out_l; size_t out_u; size_t namelen; struct vhd_decode vhd[1]; }; void h2h_decode_init(const struct h2_sess *h2); h2_error h2h_decode_fini(const struct h2_sess *h2); h2_error h2h_decode_bytes(struct h2_sess *h2, const uint8_t *ptr, size_t len); /* cache_http2_send.c */ void H2_Send_Get(struct worker *, struct h2_sess *, struct h2_req *); void H2_Send_Rel(struct h2_sess *, const struct h2_req *); void H2_Send_Frame(struct worker *, struct h2_sess *, h2_frame type, uint8_t flags, uint32_t len, uint32_t stream, const void *); void H2_Send_RST(struct worker *wrk, struct h2_sess *h2, const struct h2_req *r2, uint32_t stream, h2_error h2e); void H2_Send(struct worker *, struct h2_req *, h2_frame type, uint8_t flags, uint32_t len, const void *, uint64_t *acct); /* cache_http2_proto.c */ struct h2_req * h2_new_req(struct h2_sess *, unsigned stream, struct req *); h2_error h2_stream_tmo(struct h2_sess *, const struct h2_req *, vtim_real); void h2_del_req(struct worker *, struct h2_req *); void h2_kill_req(struct worker *, struct h2_sess *, struct h2_req *, h2_error); int h2_rxframe(struct worker *, struct h2_sess *); h2_error h2_set_setting(struct h2_sess *, const uint8_t *); void h2_req_body(struct req*); task_func_t h2_do_req; #ifdef TRANSPORT_MAGIC vtr_req_fail_f h2_req_fail; #endif varnish-7.5.0/bin/varnishd/http2/cache_http2_deliver.c000066400000000000000000000210501457605730600227010ustar00rootroot00000000000000/*- * Copyright (c) 2016 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* */ #include "config.h" #include "cache/cache_varnishd.h" #include #include #include "cache/cache_filter.h" #include "cache/cache_transport.h" #include "http2/cache_http2.h" #include "vct.h" /**********************************************************************/ struct hpack_static { uint8_t idx; const char * name; const char * val; }; static const struct hpack_static hp_static[] = { #define HPS(I,N,V) [I] = { I, N ":", V }, #include "tbl/vhp_static.h" { 0, "\377:", ""} // Terminator }; static const struct hpack_static *hp_idx[256]; void V2D_Init(void) { int i; #define HPS(I,N,V) \ i = hp_static[I].name[0]; \ if (hp_idx[i] == NULL) hp_idx[i] = &hp_static[I]; #include "tbl/vhp_static.h" #undef HPS } /**********************************************************************/ static int v_matchproto_(vdp_init_f) h2_init(VRT_CTX, struct vdp_ctx *vdc, void **priv, struct objcore *oc) { struct h2_req *r2; CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); AN(priv); CAST_OBJ_NOTNULL(r2, *priv, H2_REQ_MAGIC); (void)r2; (void)oc; return (0); } static int v_matchproto_(vdp_fini_f) h2_fini(struct vdp_ctx *vdc, void **priv) { struct h2_req *r2; CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vdc->wrk, WORKER_MAGIC); TAKE_OBJ_NOTNULL(r2, priv, H2_REQ_MAGIC); if (r2->error) return (0); if (vdc->retval < 0) { r2->error = H2SE_INTERNAL_ERROR; /* XXX: proper error? */ H2_Send_Get(vdc->wrk, r2->h2sess, r2); H2_Send_RST(vdc->wrk, r2->h2sess, r2, r2->stream, r2->error); H2_Send_Rel(r2->h2sess, r2); return (0); } H2_Send_Get(vdc->wrk, r2->h2sess, r2); H2_Send(vdc->wrk, r2, H2_F_DATA, H2FF_DATA_END_STREAM, 0, "", NULL); H2_Send_Rel(r2->h2sess, r2); return (0); } static int v_matchproto_(vdp_bytes_f) h2_bytes(struct vdp_ctx *vdc, enum vdp_action act, void **priv, const void *ptr, ssize_t len) { struct h2_req *r2; CHECK_OBJ_NOTNULL(vdc, VDP_CTX_MAGIC); CAST_OBJ_NOTNULL(r2, *priv, H2_REQ_MAGIC); (void)act; if ((r2->h2sess->error || r2->error)) return (-1); if (len == 0) return (0); H2_Send_Get(vdc->wrk, r2->h2sess, r2); vdc->bytes_done = 0; H2_Send(vdc->wrk, r2, H2_F_DATA, H2FF_NONE, len, ptr, &vdc->bytes_done); H2_Send_Rel(r2->h2sess, r2); return (0); } static const struct vdp h2_vdp = { .name = "H2B", .init = h2_init, .bytes = h2_bytes, .fini = h2_fini, }; static inline size_t h2_status(uint8_t *p, uint16_t status) { size_t l = 1; switch (status) { case 200: *p = 0x80 | 8; break; case 204: *p = 0x80 | 9; break; case 206: *p = 0x80 | 10; break; case 304: *p = 0x80 | 11; break; case 400: *p = 0x80 | 12; break; case 404: *p = 0x80 | 13; break; case 500: *p = 0x80 | 14; break; default: *p++ = 0x18; *p++ = 0x03; l = 2; l += snprintf((char*)p, 4, "%03d", status); assert(l == 5); break; } return (l); } int v_matchproto_(vtr_minimal_response_f) h2_minimal_response(struct req *req, uint16_t status) { struct h2_req *r2; size_t l; uint8_t buf[6]; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CAST_OBJ_NOTNULL(r2, req->transport_priv, H2_REQ_MAGIC); assert(status >= 100); assert(status < 1000); l = h2_status(buf, status); assert(l < sizeof(buf)); VSLb(req->vsl, SLT_RespProtocol, "HTTP/2.0"); VSLb(req->vsl, SLT_RespStatus, "%03d", status); VSLbs(req->vsl, SLT_RespReason, TOSTRAND(http_Status2Reason(status, NULL))); if (status >= 400) req->err_code = status; /* XXX return code checking once H2_Send returns anything but 0 */ H2_Send_Get(req->wrk, r2->h2sess, r2); H2_Send(req->wrk, r2, H2_F_HEADERS, H2FF_HEADERS_END_HEADERS | (status < 200 ? 
0 : H2FF_HEADERS_END_STREAM), l, buf, NULL); H2_Send_Rel(r2->h2sess, r2); return (0); } static void h2_enc_len(struct vsb *vsb, unsigned bits, unsigned val, uint8_t b0) { assert(bits < 8); unsigned mask = (1U << bits) - 1U; if (val >= mask) { VSB_putc(vsb, b0 | (uint8_t)mask); val -= mask; while (val >= 128) { VSB_putc(vsb, 0x80 | ((uint8_t)val & 0x7f)); val >>= 7; } } VSB_putc(vsb, (uint8_t)val); } /* * Hand-crafted-H2-HEADERS-R-Us: * * This is a handbuilt HEADERS frame for when we run out of workspace * during delivery. */ static const uint8_t h2_500_resp[] = { // :status 500 0x8e, // content-length 0 0x1f, 0x0d, 0x01, 0x30, // server Varnish 0x1f, 0x27, 0x07, 'V', 'a', 'r', 'n', 'i', 's', 'h', }; static void h2_build_headers(struct vsb *resp, struct req *req) { unsigned u, l; int i; struct http *hp; const char *r; const struct hpack_static *hps; uint8_t buf[6]; ssize_t sz, sz1; assert(req->resp->status % 1000 >= 100); l = h2_status(buf, req->resp->status % 1000); VSB_bcat(resp, buf, l); hp = req->resp; for (u = HTTP_HDR_FIRST; u < hp->nhd && !VSB_error(resp); u++) { r = strchr(hp->hd[u].b, ':'); AN(r); if (http_IsFiltered(hp, u, HTTPH_C_SPECIFIC)) continue; //rfc7540,l,2999,3006 hps = hp_idx[tolower(*hp->hd[u].b)]; sz = 1 + r - hp->hd[u].b; assert(sz > 0); while (hps != NULL && hps->idx > 0) { i = strncasecmp(hps->name, hp->hd[u].b, sz); if (i < 0) { hps++; continue; } if (i > 0) hps = NULL; break; } if (hps != NULL) { VSLb(req->vsl, SLT_Debug, "HP {%d, \"%s\", \"%s\"} <%s>", hps->idx, hps->name, hps->val, hp->hd[u].b); h2_enc_len(resp, 4, hps->idx, 0x10); } else { VSB_putc(resp, 0x10); sz--; h2_enc_len(resp, 7, sz, 0); for (sz1 = 0; sz1 < sz; sz1++) VSB_putc(resp, tolower(hp->hd[u].b[sz1])); } while (vct_islws(*++r)) continue; sz = hp->hd[u].e - r; h2_enc_len(resp, 7, sz, 0); VSB_bcat(resp, r, sz); } } void v_matchproto_(vtr_deliver_f) h2_deliver(struct req *req, struct boc *boc, int sendbody) { size_t sz; const char *r; struct sess *sp; struct h2_req *r2; struct vsb resp[1]; struct vrt_ctx ctx[1]; uintptr_t ss; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_ORNULL(boc, BOC_MAGIC); CHECK_OBJ_NOTNULL(req->objcore, OBJCORE_MAGIC); CAST_OBJ_NOTNULL(r2, req->transport_priv, H2_REQ_MAGIC); sp = req->sp; CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); VSLb(req->vsl, SLT_RespProtocol, "HTTP/2.0"); (void)http_DoConnection(req->resp, SC_RESP_CLOSE); ss = WS_Snapshot(req->ws); WS_VSB_new(resp, req->ws); h2_build_headers(resp, req); r = WS_VSB_finish(resp, req->ws, &sz); if (r == NULL) { VSLb(req->vsl, SLT_Error, "workspace_client overflow"); VSLb(req->vsl, SLT_RespStatus, "500"); VSLb(req->vsl, SLT_RespReason, "Internal Server Error"); req->wrk->stats->client_resp_500++; r = (const char*)h2_500_resp; sz = sizeof h2_500_resp; sendbody = 0; } AZ(req->wrk->v1l); r2->t_send = req->t_prev; H2_Send_Get(req->wrk, r2->h2sess, r2); H2_Send(req->wrk, r2, H2_F_HEADERS, (sendbody ? 
0 : H2FF_HEADERS_END_STREAM) | H2FF_HEADERS_END_HEADERS, sz, r, &req->acct.resp_hdrbytes); H2_Send_Rel(r2->h2sess, r2); WS_Reset(req->ws, ss); /* XXX someone into H2 please add appropriate error handling */ if (sendbody) { INIT_OBJ(ctx, VRT_CTX_MAGIC); VCL_Req2Ctx(ctx, req); if (!VDP_Push(ctx, req->vdc, req->ws, &h2_vdp, r2)) (void)VDP_DeliverObj(req->vdc, req->objcore); } AZ(req->wrk->v1l); req->acct.resp_bodybytes += VDP_Close(req->vdc, req->objcore, boc); } varnish-7.5.0/bin/varnishd/http2/cache_http2_hpack.c000066400000000000000000000261771457605730600223540ustar00rootroot00000000000000/*- * Copyright (c) 2016 Varnish Software AS * All rights reserved. * * Author: Martin Blix Grydeland * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include "cache/cache_varnishd.h" #include #include #include "http2/cache_http2.h" #include "vct.h" // rfc9113,l,2493,2528 static h2_error h2h_checkhdr(const struct http *hp, const char *b, size_t namelen, size_t len) { const char *p; enum { FLD_NAME_FIRST, FLD_NAME, FLD_VALUE_FIRST, FLD_VALUE } state; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); AN(b); assert(namelen >= 2); /* 2 chars from the ': ' that we added */ assert(namelen <= len); assert(b[namelen - 2] == ':'); assert(b[namelen - 1] == ' '); if (namelen == 2) { VSLb(hp->vsl, SLT_BogoHeader, "Empty name"); return (H2SE_PROTOCOL_ERROR); } // VSLb(hp->vsl, SLT_Debug, "CHDR [%.*s] [%.*s]", // (int)namelen, b, (int)(len - namelen), b + namelen); state = FLD_NAME_FIRST; for (p = b; p < b + namelen - 2; p++) { switch(state) { case FLD_NAME_FIRST: state = FLD_NAME; if (*p == ':') break; /* FALL_THROUGH */ case FLD_NAME: if (isupper(*p)) { VSLb(hp->vsl, SLT_BogoHeader, "Illegal field header name (upper-case): %.*s", (int)(len > 20 ? 20 : len), b); return (H2SE_PROTOCOL_ERROR); } if (!vct_istchar(*p) || *p == ':') { VSLb(hp->vsl, SLT_BogoHeader, "Illegal field header name (non-token): %.*s", (int)(len > 20 ? 20 : len), b); return (H2SE_PROTOCOL_ERROR); } break; default: WRONG("http2 field name validation state"); } } state = FLD_VALUE_FIRST; for (p = b + namelen; p < b + len; p++) { switch(state) { case FLD_VALUE_FIRST: if (vct_issp(*p)) { VSLb(hp->vsl, SLT_BogoHeader, "Illegal field value start %.*s", (int)(len > 20 ? 
20 : len), b); return (H2SE_PROTOCOL_ERROR); } state = FLD_VALUE; /* FALL_THROUGH */ case FLD_VALUE: if (!vct_ishdrval(*p)) { VSLb(hp->vsl, SLT_BogoHeader, "Illegal field value %.*s", (int)(len > 20 ? 20 : len), b); return (H2SE_PROTOCOL_ERROR); } break; default: WRONG("http2 field value validation state"); } } if (state == FLD_VALUE && vct_issp(b[len - 1])) { VSLb(hp->vsl, SLT_BogoHeader, "Illegal field value (end) %.*s", (int)(len > 20 ? 20 : len), b); return (H2SE_PROTOCOL_ERROR); } return (0); } static h2_error h2h_addhdr(struct h2h_decode *d, struct http *hp, char *b, size_t namelen, size_t len) { /* XXX: This might belong in cache/cache_http.c */ const char *b0; int disallow_empty; unsigned n; char *p; unsigned u; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); AN(b); assert(namelen >= 2); /* 2 chars from the ': ' that we added */ assert(namelen <= len); disallow_empty = 0; if (len > UINT_MAX) { /* XXX: cache_param max header size */ VSLb(hp->vsl, SLT_BogoHeader, "Header too large: %.20s", b); return (H2SE_ENHANCE_YOUR_CALM); } b0 = b; if (b[0] == ':') { /* Match H/2 pseudo headers */ /* XXX: Should probably have some include tbl for pseudo-headers */ if (!strncmp(b, ":method: ", namelen)) { b += namelen; len -= namelen; n = HTTP_HDR_METHOD; disallow_empty = 1; /* First field cannot contain SP or CTL */ for (p = b, u = 0; u < len; p++, u++) { if (vct_issp(*p) || vct_isctl(*p)) return (H2SE_PROTOCOL_ERROR); } } else if (!strncmp(b, ":path: ", namelen)) { b += namelen; len -= namelen; n = HTTP_HDR_URL; disallow_empty = 1; // rfc9113,l,2693,2705 if (len > 0 && *b != '/' && strncmp(b, "*", len) != 0) { VSLb(hp->vsl, SLT_BogoHeader, "Illegal :path pseudo-header %.*s", (int)len, b); return (H2SE_PROTOCOL_ERROR); } /* Second field cannot contain LWS or CTL */ for (p = b, u = 0; u < len; p++, u++) { if (vct_islws(*p) || vct_isctl(*p)) return (H2SE_PROTOCOL_ERROR); } } else if (!strncmp(b, ":scheme: ", namelen)) { /* XXX: What to do about this one? (typically "http" or "https"). For now set it as a normal header, stripping the first ':'. */ if (d->has_scheme) { VSLb(hp->vsl, SLT_BogoHeader, "Duplicate pseudo-header %.*s%.*s", (int)namelen, b0, (int)(len > 20 ? 20 : len), b); return (H2SE_PROTOCOL_ERROR); } b++; len-=1; n = hp->nhd; d->has_scheme = 1; for (p = b + namelen, u = 0; u < len-namelen; p++, u++) { if (vct_issp(*p) || vct_isctl(*p)) return (H2SE_PROTOCOL_ERROR); } if (!u) return (H2SE_PROTOCOL_ERROR); } else if (!strncmp(b, ":authority: ", namelen)) { b+=6; len-=6; memcpy(b, "host", 4); n = hp->nhd; } else { /* Unknown pseudo-header */ VSLb(hp->vsl, SLT_BogoHeader, "Unknown pseudo-header: %.*s", (int)(len > 20 ? 20 : len), b); return (H2SE_PROTOCOL_ERROR); // rfc7540,l,2990,2992 } } else n = hp->nhd; if (n < HTTP_HDR_FIRST) { /* Check for duplicate pseudo-header */ if (hp->hd[n].b != NULL) { VSLb(hp->vsl, SLT_BogoHeader, "Duplicate pseudo-header %.*s%.*s", (int)namelen, b0, (int)(len > 20 ? 20 : len), b); return (H2SE_PROTOCOL_ERROR); // rfc7540,l,3158,3162 } } else { /* Check for space in struct http */ if (n >= hp->shd) { VSLb(hp->vsl, SLT_LostHeader, "Too many headers: %.*s", (int)(len > 20 ? 
20 : len), b); return (H2SE_ENHANCE_YOUR_CALM); } hp->nhd++; } hp->hd[n].b = b; hp->hd[n].e = b + len; if (disallow_empty && !Tlen(hp->hd[n])) { VSLb(hp->vsl, SLT_BogoHeader, "Empty pseudo-header %.*s", (int)namelen, b0); return (H2SE_PROTOCOL_ERROR); } return (0); } void h2h_decode_init(const struct h2_sess *h2) { struct h2h_decode *d; CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); CHECK_OBJ_NOTNULL(h2->new_req, REQ_MAGIC); CHECK_OBJ_NOTNULL(h2->new_req->http, HTTP_MAGIC); AN(h2->decode); d = h2->decode; INIT_OBJ(d, H2H_DECODE_MAGIC); VHD_Init(d->vhd); d->out_l = WS_ReserveAll(h2->new_req->http->ws); /* * Can't do any work without any buffer * space. Require non-zero size. */ XXXAN(d->out_l); d->out = WS_Reservation(h2->new_req->http->ws); d->reset = d->out; } /* Possible error returns: * * H2E_COMPRESSION_ERROR: Lost compression state due to incomplete header * block. This is a connection level error. * * H2E_ENHANCE_YOUR_CALM: Ran out of workspace or http header space. This * is a stream level error. */ h2_error h2h_decode_fini(const struct h2_sess *h2) { h2_error ret; struct h2h_decode *d; CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); d = h2->decode; CHECK_OBJ_NOTNULL(h2->new_req, REQ_MAGIC); CHECK_OBJ_NOTNULL(d, H2H_DECODE_MAGIC); WS_ReleaseP(h2->new_req->http->ws, d->out); if (d->vhd_ret != VHD_OK) { /* HPACK header block didn't finish at an instruction boundary */ VSLb(h2->new_req->http->vsl, SLT_BogoHeader, "HPACK compression error/fini (%s)", VHD_Error(d->vhd_ret)); ret = H2CE_COMPRESSION_ERROR; } else if (d->error == NULL && !d->has_scheme) { VSLb(h2->vsl, SLT_Debug, "Missing :scheme"); ret = H2SE_MISSING_SCHEME; //rfc7540,l,3087,3090 } else ret = d->error; FINI_OBJ(d); return (ret); } /* Possible error returns: * * H2E_COMPRESSION_ERROR: Lost compression state due to invalid header * block. This is a connection level error. * * H2E_PROTOCOL_ERROR: Malformed header or duplicate pseudo-header. * Violation of field name/value charsets */ h2_error h2h_decode_bytes(struct h2_sess *h2, const uint8_t *in, size_t in_l) { struct http *hp; struct h2h_decode *d; size_t in_u = 0; const char *r, *e; CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); CHECK_OBJ_NOTNULL(h2->new_req, REQ_MAGIC); hp = h2->new_req->http; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); CHECK_OBJ_NOTNULL(hp->ws, WS_MAGIC); r = WS_Reservation(hp->ws); AN(r); e = r + WS_ReservationSize(hp->ws); d = h2->decode; CHECK_OBJ_NOTNULL(d, H2H_DECODE_MAGIC); /* Only H2E_ENHANCE_YOUR_CALM indicates that we should continue processing. Other errors should have been returned and handled by the caller. 
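 *
 * Reader's note on the buffer handling in the loop below (a summary of
 * the existing logic, nothing new): each header is assembled into the
 * reserved workspace as "name: value" with a NUL guard appended once
 * complete; d->namelen is the offset just past the ": " separator that
 * is inserted on VHD_NAME, so h2h_checkhdr() and h2h_addhdr() can tell
 * name from value, and d->out_u is the total length so far.  After a
 * header has been accepted the window advances (d->out += d->out_u);
 * on H2SE_ENHANCE_YOUR_CALM it is rolled back to d->reset and decoding
 * continues, so that the complete header block is still consumed and
 * the connection-level HPACK state stays consistent.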
*/ assert(d->error == 0 || d->error == H2SE_ENHANCE_YOUR_CALM); while (1) { AN(d->out); assert(d->out_u <= d->out_l); d->vhd_ret = VHD_Decode(d->vhd, h2->dectbl, in, in_l, &in_u, d->out, d->out_l, &d->out_u); if (d->vhd_ret < 0) { VSLb(hp->vsl, SLT_BogoHeader, "HPACK compression error (%s)", VHD_Error(d->vhd_ret)); d->error = H2CE_COMPRESSION_ERROR; break; } else if (d->vhd_ret == VHD_OK || d->vhd_ret == VHD_MORE) { assert(in_u == in_l); break; } if (d->error == H2SE_ENHANCE_YOUR_CALM) { d->out_u = 0; assert(d->out_u < d->out_l); continue; } switch (d->vhd_ret) { case VHD_NAME_SEC: /* XXX: header flag for never-indexed header */ case VHD_NAME: assert(d->namelen == 0); if (d->out_l - d->out_u < 2) { d->error = H2SE_ENHANCE_YOUR_CALM; break; } d->out[d->out_u++] = ':'; d->out[d->out_u++] = ' '; d->namelen = d->out_u; break; case VHD_VALUE_SEC: /* XXX: header flag for never-indexed header */ case VHD_VALUE: assert(d->namelen > 0); if (d->out_l - d->out_u < 1) { d->error = H2SE_ENHANCE_YOUR_CALM; break; } d->error = h2h_checkhdr(hp, d->out, d->namelen, d->out_u); if (d->error) break; d->error = h2h_addhdr(d, hp, d->out, d->namelen, d->out_u); if (d->error) break; d->out[d->out_u++] = '\0'; /* Zero guard */ d->out += d->out_u; d->out_l -= d->out_u; d->out_u = 0; d->namelen = 0; break; case VHD_BUF: d->error = H2SE_ENHANCE_YOUR_CALM; break; default: WRONG("Unhandled return value"); break; } if (d->error == H2SE_ENHANCE_YOUR_CALM) { d->out = d->reset; d->out_l = e - d->out; d->out_u = 0; assert(d->out_l > 0); } else if (d->error) break; } if (d->error == H2SE_ENHANCE_YOUR_CALM) return (0); /* Stream error, delay reporting until h2h_decode_fini so that we can process the complete header block */ return (d->error); } varnish-7.5.0/bin/varnishd/http2/cache_http2_panic.c000066400000000000000000000077221457605730600223530ustar00rootroot00000000000000/*- * Copyright (c) 2016 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* */ #include "config.h" #include #include "cache/cache_varnishd.h" #include "cache/cache_transport.h" #include "http2/cache_http2.h" static const char * h2_panic_error(const struct h2_error_s *e) { if (e == NULL) return ("(null)"); else return (e->name); } static void h2_panic_settings(struct vsb *vsb, const struct h2_settings *s) { int cont = 0; #define H2_SETTING(U,l,...) \ do { \ if (cont) \ VSB_printf(vsb, ", "); \ cont = 1; \ VSB_printf(vsb, "0x%x", s->l); \ } while (0); #include "tbl/h2_settings.h" #undef H2_SETTING } void h2_sess_panic(struct vsb *vsb, const struct sess *sp) { uintptr_t *up; struct h2_sess *h2; struct h2_req *r2; AZ(SES_Get_proto_priv(sp, &up)); AN(up); h2 = (void*)*up; if (PAN_dump_struct(vsb, h2, H2_SESS_MAGIC, "h2_sess")) return; VSB_printf(vsb, "refcnt = %d, bogosity = %d, error = %s\n", h2->refcnt, h2->bogosity, h2_panic_error(h2->error)); VSB_printf(vsb, "open_streams = %u, highest_stream = %u," " goaway_last_stream = %u,\n", h2->open_streams, h2->highest_stream, h2->goaway_last_stream); VSB_cat(vsb, "local_settings = {"); h2_panic_settings(vsb, &h2->local_settings); VSB_cat(vsb, "},\n"); VSB_cat(vsb, "remote_settings = {"); h2_panic_settings(vsb, &h2->remote_settings); VSB_cat(vsb, "},\n"); VSB_printf(vsb, "{rxf_len, rxf_type, rxf_flags, rxf_stream} =" " {%u, %u, 0x%x, %u},\n", h2->rxf_len, h2->rxf_type, h2->rxf_flags, h2->rxf_stream); VTAILQ_FOREACH(r2, &h2->streams, list) { if (PAN_dump_struct(vsb, r2, H2_REQ_MAGIC, "stream")) continue; VSB_printf(vsb, "id = %u, state = ", r2->stream); switch (r2->state) { #define H2_STREAM(U,sd,d) case H2_S_##U: VSB_printf(vsb, "%s", sd); break; #include default: VSB_printf(vsb, " 0x%x", r2->state); break; } VSB_cat(vsb, ",\n"); VSB_printf(vsb, "h2_sess = %p, scheduled = %d, error = %s,\n", r2->h2sess, r2->scheduled, h2_panic_error(r2->error)); VSB_printf(vsb, "t_send = %f, t_winupd = %f,\n", r2->t_send, r2->t_winupd); VSB_printf(vsb, "t_window = %jd, r_window = %jd,\n", (intmax_t)r2->t_window, (intmax_t)r2->r_window); if (!PAN_dump_struct(vsb, r2->rxbuf, H2_RXBUF_MAGIC, "rxbuf")) { VSB_printf(vsb, "stvbuf = %p,\n", r2->rxbuf->stvbuf); VSB_printf(vsb, "{size, tail, head} = {%u, %ju, %ju},\n", r2->rxbuf->size, (uintmax_t)r2->rxbuf->tail, (uintmax_t)r2->rxbuf->head); VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } VSB_indent(vsb, -2); VSB_cat(vsb, "},\n"); } VSB_indent(vsb, -2); } varnish-7.5.0/bin/varnishd/http2/cache_http2_proto.c000066400000000000000000001144251457605730600224230ustar00rootroot00000000000000/*- * Copyright (c) 2016-2019 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include "cache/cache_varnishd.h" #include #include #include "cache/cache_transport.h" #include "cache/cache_filter.h" #include "http2/cache_http2.h" #include "cache/cache_objhead.h" #include "storage/storage.h" #include "vend.h" #include "vtcp.h" #include "vtim.h" #define H2_CUSTOM_ERRORS #define H2EC1(U,v,g,r,d) \ const struct h2_error_s H2CE_##U[1] = {{#U,d,v,0,1,g,r}}; #define H2EC2(U,v,g,r,d) \ const struct h2_error_s H2SE_##U[1] = {{#U,d,v,1,0,g,r}}; #define H2EC3(U,v,g,r,d) H2EC1(U,v,g,r,d) H2EC2(U,v,g,r,d) #define H2_ERROR(NAME, val, sc, goaway, reason, desc) \ H2EC##sc(NAME, val, goaway, reason, desc) #include "tbl/h2_error.h" #undef H2EC1 #undef H2EC2 #undef H2EC3 static const struct h2_error_s H2NN_ERROR[1] = {{ "UNKNOWN_ERROR", "Unknown error number", 0xffffffff, 1, 1, 0, SC_RX_JUNK }}; enum h2frame { #define H2_FRAME(l,u,t,f,...) H2F_##u = t, #include "tbl/h2_frames.h" }; static const char * h2_framename(enum h2frame h2f) { switch (h2f) { #define H2_FRAME(l,u,t,f,...) case H2F_##u: return (#u); #include "tbl/h2_frames.h" default: return (NULL); } } #define H2_FRAME_FLAGS(l,u,v) const uint8_t H2FF_##u = v; #include "tbl/h2_frames.h" /********************************************************************** */ static const h2_error stream_errors[] = { #define H2EC1(U,v,g,r,d) #define H2EC2(U,v,g,r,d) [v] = H2SE_##U, #define H2EC3(U,v,g,r,d) H2EC1(U,v,g,r,d) H2EC2(U,v,g,r,d) #define H2_ERROR(NAME, val, sc, goaway, reason, desc) \ H2EC##sc(NAME, val, goaway, reason, desc) #include "tbl/h2_error.h" #undef H2EC1 #undef H2EC2 #undef H2EC3 }; #define NSTREAMERRORS (sizeof(stream_errors)/sizeof(stream_errors[0])) static h2_error h2_streamerror(uint32_t u) { if (u < NSTREAMERRORS && stream_errors[u] != NULL) return (stream_errors[u]); else return (H2NN_ERROR); } /********************************************************************** */ static const h2_error conn_errors[] = { #define H2EC1(U,v,g,r,d) [v] = H2CE_##U, #define H2EC2(U,v,g,r,d) #define H2EC3(U,v,g,r,d) H2EC1(U,v,g,r,d) H2EC2(U,v,g,r,d) #define H2_ERROR(NAME, val, sc, goaway, reason, desc) \ H2EC##sc(NAME, val, goaway, reason, desc) #include "tbl/h2_error.h" #undef H2EC1 #undef H2EC2 #undef H2EC3 }; #define NCONNERRORS (sizeof(conn_errors)/sizeof(conn_errors[0])) static h2_error h2_connectionerror(uint32_t u) { if (u < NCONNERRORS && conn_errors[u] != NULL) return (conn_errors[u]); else return (H2NN_ERROR); } /**********************************************************************/ struct h2_req * h2_new_req(struct h2_sess *h2, unsigned stream, struct req *req) { struct h2_req *r2; ASSERT_RXTHR(h2); if (req == NULL) req = Req_New(h2->sess); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); r2 = WS_Alloc(req->ws, sizeof *r2); AN(r2); INIT_OBJ(r2, H2_REQ_MAGIC); r2->state = H2_S_IDLE; r2->h2sess = h2; r2->stream = stream; r2->req = req; if (stream) r2->counted = 1; r2->r_window = h2->local_settings.initial_window_size; r2->t_window = h2->remote_settings.initial_window_size; req->transport_priv = r2; 
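/*
 * Register the new stream: under the session mutex, real streams
 * (stream != 0) are counted against open_streams and every h2_req,
 * including the stream-0 session handle, goes onto h2->streams.  The
 * refcnt bump afterwards keeps the h2_sess alive until h2_del_req()
 * drops the reference again.
 */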
Lck_Lock(&h2->sess->mtx); if (stream) h2->open_streams++; VTAILQ_INSERT_TAIL(&h2->streams, r2, list); Lck_Unlock(&h2->sess->mtx); h2->refcnt++; return (r2); } void h2_del_req(struct worker *wrk, struct h2_req *r2) { struct h2_sess *h2; struct sess *sp; struct stv_buffer *stvbuf; CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); AZ(r2->scheduled); h2 = r2->h2sess; CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); ASSERT_RXTHR(h2); sp = h2->sess; Lck_Lock(&sp->mtx); assert(h2->refcnt > 0); --h2->refcnt; /* XXX: PRIORITY reshuffle */ VTAILQ_REMOVE(&h2->streams, r2, list); Lck_Unlock(&sp->mtx); assert(!WS_IsReserved(r2->req->ws)); AZ(r2->req->ws->r); CHECK_OBJ_ORNULL(r2->rxbuf, H2_RXBUF_MAGIC); if (r2->rxbuf) { stvbuf = r2->rxbuf->stvbuf; r2->rxbuf = NULL; STV_FreeBuf(wrk, &stvbuf); AZ(stvbuf); } Req_Cleanup(sp, wrk, r2->req); if (FEATURE(FEATURE_BUSY_STATS_RATE)) WRK_AddStat(wrk); Req_Release(r2->req); } void h2_kill_req(struct worker *wrk, struct h2_sess *h2, struct h2_req *r2, h2_error h2e) { ASSERT_RXTHR(h2); AN(h2e); Lck_Lock(&h2->sess->mtx); VSLb(h2->vsl, SLT_Debug, "KILL st=%u state=%d sched=%d", r2->stream, r2->state, r2->scheduled); if (r2->counted) { assert(h2->open_streams > 0); h2->open_streams--; r2->counted = 0; } if (r2->error == NULL) r2->error = h2e; if (r2->scheduled) { if (r2->cond != NULL) PTOK(pthread_cond_signal(r2->cond)); r2 = NULL; } else { if (r2->state == H2_S_OPEN && h2->new_req == r2->req) (void)h2h_decode_fini(h2); } Lck_Unlock(&h2->sess->mtx); if (r2 != NULL) h2_del_req(wrk, r2); } /**********************************************************************/ static void h2_vsl_frame(const struct h2_sess *h2, const void *ptr, size_t len) { const uint8_t *b; struct vsb *vsb; const char *p; unsigned u; if (VSL_tag_is_masked(SLT_H2RxHdr) && VSL_tag_is_masked(SLT_H2RxBody)) return; AN(ptr); assert(len >= 9); b = ptr; vsb = VSB_new_auto(); AN(vsb); p = h2_framename((enum h2frame)b[3]); if (p != NULL) VSB_cat(vsb, p); else VSB_quote(vsb, b + 3, 1, VSB_QUOTE_HEX); u = vbe32dec(b) >> 8; VSB_printf(vsb, "[%u] ", u); VSB_quote(vsb, b + 4, 1, VSB_QUOTE_HEX); VSB_putc(vsb, ' '); VSB_quote(vsb, b + 5, 4, VSB_QUOTE_HEX); if (u > 0) { VSB_putc(vsb, ' '); VSB_quote(vsb, b + 9, len - 9, VSB_QUOTE_HEX); } AZ(VSB_finish(vsb)); Lck_Lock(&h2->sess->mtx); VSLb_bin(h2->vsl, SLT_H2RxHdr, 9, b); if (len > 9) VSLb_bin(h2->vsl, SLT_H2RxBody, len - 9, b + 9); VSLb(h2->vsl, SLT_Debug, "H2RXF %s", VSB_data(vsb)); Lck_Unlock(&h2->sess->mtx); VSB_destroy(&vsb); } /********************************************************************** */ static h2_error v_matchproto_(h2_rxframe_f) h2_rx_ping(struct worker *wrk, struct h2_sess *h2, struct h2_req *r2) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); ASSERT_RXTHR(h2); CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); assert(r2 == h2->req0); if (h2->rxf_len != 8) // rfc7540,l,2364,2366 return (H2CE_FRAME_SIZE_ERROR); AZ(h2->rxf_stream); // rfc7540,l,2359,2362 if (h2->rxf_flags != 0) // We never send pings return (H2SE_PROTOCOL_ERROR); H2_Send_Get(wrk, h2, r2); H2_Send_Frame(wrk, h2, H2_F_PING, H2FF_PING_ACK, 8, 0, h2->rxf_data); H2_Send_Rel(h2, r2); return (0); } /********************************************************************** */ static h2_error v_matchproto_(h2_rxframe_f) h2_rx_push_promise(struct worker *wrk, struct h2_sess *h2, struct h2_req *r2) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); ASSERT_RXTHR(h2); CHECK_OBJ_ORNULL(r2, H2_REQ_MAGIC); // rfc7540,l,2262,2267 return (H2CE_PROTOCOL_ERROR); } /********************************************************************** */ static h2_error 
h2_rapid_reset(struct worker *wrk, struct h2_sess *h2, struct h2_req *r2) { vtim_real now; vtim_dur d; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); ASSERT_RXTHR(h2); CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); if (h2->rapid_reset_limit == 0) return (0); now = VTIM_real(); CHECK_OBJ_NOTNULL(r2->req, REQ_MAGIC); AN(r2->req->t_first); if (now - r2->req->t_first > h2->rapid_reset) return (0); d = now - h2->last_rst; h2->rst_budget += h2->rapid_reset_limit * d / h2->rapid_reset_period; h2->rst_budget = vmin_t(double, h2->rst_budget, h2->rapid_reset_limit); h2->last_rst = now; if (h2->rst_budget < 1.0) { Lck_Lock(&h2->sess->mtx); VSLb(h2->vsl, SLT_Error, "H2: Hit RST limit. Closing session."); Lck_Unlock(&h2->sess->mtx); return (H2CE_RAPID_RESET); } h2->rst_budget -= 1.0; return (0); } static h2_error v_matchproto_(h2_rxframe_f) h2_rx_rst_stream(struct worker *wrk, struct h2_sess *h2, struct h2_req *r2) { h2_error h2e; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); ASSERT_RXTHR(h2); CHECK_OBJ_ORNULL(r2, H2_REQ_MAGIC); if (h2->rxf_len != 4) // rfc7540,l,2003,2004 return (H2CE_FRAME_SIZE_ERROR); if (r2 == NULL) return (0); h2e = h2_rapid_reset(wrk, h2, r2); h2_kill_req(wrk, h2, r2, h2_streamerror(vbe32dec(h2->rxf_data))); return (h2e); } /********************************************************************** */ static h2_error v_matchproto_(h2_rxframe_f) h2_rx_goaway(struct worker *wrk, struct h2_sess *h2, struct h2_req *r2) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); ASSERT_RXTHR(h2); CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); assert(r2 == h2->req0); h2->goaway = 1; h2->goaway_last_stream = vbe32dec(h2->rxf_data); h2->error = h2_connectionerror(vbe32dec(h2->rxf_data + 4)); Lck_Lock(&h2->sess->mtx); VSLb(h2->vsl, SLT_Debug, "GOAWAY %s", h2->error->name); Lck_Unlock(&h2->sess->mtx); return (h2->error); } static void h2_tx_goaway(struct worker *wrk, struct h2_sess *h2, h2_error h2e) { char b[8]; ASSERT_RXTHR(h2); AN(h2e); if (h2->goaway || !h2e->send_goaway) return; h2->goaway = 1; vbe32enc(b, h2->highest_stream); vbe32enc(b + 4, h2e->val); H2_Send_Get(wrk, h2, h2->req0); H2_Send_Frame(wrk, h2, H2_F_GOAWAY, 0, 8, 0, b); H2_Send_Rel(h2, h2->req0); } /********************************************************************** */ static h2_error v_matchproto_(h2_rxframe_f) h2_rx_window_update(struct worker *wrk, struct h2_sess *h2, struct h2_req *r2) { uint32_t wu; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); ASSERT_RXTHR(h2); CHECK_OBJ_ORNULL(r2, H2_REQ_MAGIC); if (h2->rxf_len != 4) return (H2CE_FRAME_SIZE_ERROR); wu = vbe32dec(h2->rxf_data) & ~(1LU<<31); if (wu == 0) return (H2SE_PROTOCOL_ERROR); if (r2 == NULL) return (0); Lck_Lock(&h2->sess->mtx); r2->t_window += wu; if (r2 == h2->req0) PTOK(pthread_cond_broadcast(h2->winupd_cond)); else if (r2->cond != NULL) PTOK(pthread_cond_signal(r2->cond)); Lck_Unlock(&h2->sess->mtx); if (r2->t_window >= (1LL << 31)) return (H2SE_FLOW_CONTROL_ERROR); return (0); } /********************************************************************** * Incoming PRIORITY, possibly an ACK of one we sent. */ static h2_error v_matchproto_(h2_rxframe_f) h2_rx_priority(struct worker *wrk, struct h2_sess *h2, struct h2_req *r2) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); ASSERT_RXTHR(h2); CHECK_OBJ_ORNULL(r2, H2_REQ_MAGIC); return (0); } /********************************************************************** * Incoming SETTINGS, possibly an ACK of one we sent. */ #define H2_SETTING(U,l, ...) 
\ static void v_matchproto_(h2_setsetting_f) \ h2_setting_##l(struct h2_settings* s, uint32_t v) \ { \ s -> l = v; \ } #include #define H2_SETTING(U, l, ...) \ const struct h2_setting_s H2_SET_##U[1] = {{ \ #l, \ h2_setting_##l, \ __VA_ARGS__ \ }}; #include static const struct h2_setting_s * const h2_setting_tbl[] = { #define H2_SETTING(U,l,v, ...) [v] = H2_SET_##U, #include }; #define H2_SETTING_TBL_LEN (sizeof(h2_setting_tbl)/sizeof(h2_setting_tbl[0])) static void h2_win_adjust(const struct h2_sess *h2, uint32_t oldval, uint32_t newval) { struct h2_req *r2; Lck_AssertHeld(&h2->sess->mtx); // rfc7540,l,2668,2674 VTAILQ_FOREACH(r2, &h2->streams, list) { CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); if (r2 == h2->req0) continue; // rfc7540,l,2699,2699 switch (r2->state) { case H2_S_IDLE: case H2_S_OPEN: case H2_S_CLOS_REM: /* * We allow a window to go negative, as per * rfc7540,l,2676,2680 */ r2->t_window += (int64_t)newval - oldval; break; default: break; } } } h2_error h2_set_setting(struct h2_sess *h2, const uint8_t *d) { const struct h2_setting_s *s; uint16_t x; uint32_t y; x = vbe16dec(d); y = vbe32dec(d + 2); if (x >= H2_SETTING_TBL_LEN || h2_setting_tbl[x] == NULL) { // rfc7540,l,2181,2182 Lck_Lock(&h2->sess->mtx); VSLb(h2->vsl, SLT_Debug, "H2SETTING unknown setting 0x%04x=%08x (ignored)", x, y); Lck_Unlock(&h2->sess->mtx); return (0); } s = h2_setting_tbl[x]; AN(s); if (y < s->minval || y > s->maxval) { Lck_Lock(&h2->sess->mtx); VSLb(h2->vsl, SLT_Debug, "H2SETTING invalid %s=0x%08x", s->name, y); Lck_Unlock(&h2->sess->mtx); AN(s->range_error); if (!DO_DEBUG(DBG_H2_NOCHECK)) return (s->range_error); } Lck_Lock(&h2->sess->mtx); if (s == H2_SET_INITIAL_WINDOW_SIZE) h2_win_adjust(h2, h2->remote_settings.initial_window_size, y); VSLb(h2->vsl, SLT_Debug, "H2SETTING %s=0x%08x", s->name, y); Lck_Unlock(&h2->sess->mtx); AN(s->setfunc); s->setfunc(&h2->remote_settings, y); return (0); } static h2_error v_matchproto_(h2_rxframe_f) h2_rx_settings(struct worker *wrk, struct h2_sess *h2, struct h2_req *r2) { const uint8_t *p; unsigned l; h2_error retval = 0; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); ASSERT_RXTHR(h2); CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); assert(r2 == h2->req0); AZ(h2->rxf_stream); if (h2->rxf_flags == H2FF_SETTINGS_ACK) { if (h2->rxf_len > 0) // rfc7540,l,2047,2049 return (H2CE_FRAME_SIZE_ERROR); return (0); } else { if (h2->rxf_len % 6) // rfc7540,l,2062,2064 return (H2CE_PROTOCOL_ERROR); p = h2->rxf_data; for (l = h2->rxf_len; l >= 6; l -= 6, p += 6) { retval = h2_set_setting(h2, p); if (retval) return (retval); } H2_Send_Get(wrk, h2, r2); H2_Send_Frame(wrk, h2, H2_F_SETTINGS, H2FF_SETTINGS_ACK, 0, 0, NULL); H2_Send_Rel(h2, r2); } return (0); } /********************************************************************** * Incoming HEADERS, this is where the partys at... 
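 *
 * Rough map of the path a request takes from here on (a summary of
 * the existing code):
 *
 *	h2_rx_headers()        allocate the h2_req and HPACK-decode the
 *	                       HEADERS frame payload
 *	h2_rx_continuation()   decode any further header block fragments
 *	h2_end_headers()       finish HPACK decoding, validate the
 *	                       pseudo-headers and body framing, then
 *	                       schedule h2_do_req() via Pool_Task()
 *	h2_do_req()            runs CNT_Request() on a worker thread and
 *	                       marks the stream closed once the request
 *	                       completes (unless it disembarks)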
*/ void v_matchproto_(task_func_t) h2_do_req(struct worker *wrk, void *priv) { struct req *req; struct h2_req *r2; struct h2_sess *h2; CAST_OBJ_NOTNULL(req, priv, REQ_MAGIC); CAST_OBJ_NOTNULL(r2, req->transport_priv, H2_REQ_MAGIC); THR_SetRequest(req); CNT_Embark(wrk, req); if (CNT_Request(req) != REQ_FSM_DISEMBARK) { wrk->stats->client_req++; assert(!WS_IsReserved(req->ws)); AZ(req->top->vcl0); h2 = r2->h2sess; CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); Lck_Lock(&h2->sess->mtx); r2->scheduled = 0; r2->state = H2_S_CLOSED; r2->h2sess->do_sweep = 1; Lck_Unlock(&h2->sess->mtx); } THR_SetRequest(NULL); } static h2_error h2_end_headers(struct worker *wrk, struct h2_sess *h2, struct req *req, struct h2_req *r2) { h2_error h2e; ssize_t cl; ASSERT_RXTHR(h2); assert(r2->state == H2_S_OPEN); h2e = h2h_decode_fini(h2); h2->new_req = NULL; if (h2e != NULL) { Lck_Lock(&h2->sess->mtx); VSLb(h2->vsl, SLT_Debug, "HPACK/FINI %s", h2e->name); Lck_Unlock(&h2->sess->mtx); assert(!WS_IsReserved(r2->req->ws)); h2_del_req(wrk, r2); return (h2e); } VSLb_ts_req(req, "Req", req->t_req); // XXX: Smarter to do this already at HPACK time into tail end of // XXX: WS, then copy back once all headers received. // XXX: Have I mentioned H/2 Is hodge-podge ? http_CollectHdrSep(req->http, H_Cookie, "; "); // rfc7540,l,3114,3120 cl = http_GetContentLength(req->http); assert(cl >= -2); if (cl == -2) { VSLb(h2->vsl, SLT_Debug, "Non-parseable Content-Length"); return (H2SE_PROTOCOL_ERROR); } if (req->req_body_status == NULL) { if (cl == -1) req->req_body_status = BS_EOF; else { /* Note: If cl==0 here, we still need to have * req_body_status==BS_LENGTH, so that there will * be a wait for the stream to reach H2_S_CLOS_REM * while dealing with the request body. */ req->req_body_status = BS_LENGTH; } /* Set req->htc->content_length because this is used as * the hint in vrb_pull() for how large the storage * buffers need to be */ req->htc->content_length = cl; } else { /* A HEADER frame contained END_STREAM */ assert (req->req_body_status == BS_NONE); r2->state = H2_S_CLOS_REM; if (cl > 0) return (H2CE_PROTOCOL_ERROR); //rfc7540,l,1838,1840 } if (req->http->hd[HTTP_HDR_METHOD].b == NULL) { VSLb(h2->vsl, SLT_Debug, "Missing :method"); return (H2SE_PROTOCOL_ERROR); //rfc7540,l,3087,3090 } if (req->http->hd[HTTP_HDR_URL].b == NULL) { VSLb(h2->vsl, SLT_Debug, "Missing :path"); return (H2SE_PROTOCOL_ERROR); //rfc7540,l,3087,3090 } AN(req->http->hd[HTTP_HDR_PROTO].b); if (*req->http->hd[HTTP_HDR_URL].b == '*' && (Tlen(req->http->hd[HTTP_HDR_METHOD]) != 7 || strncmp(req->http->hd[HTTP_HDR_METHOD].b, "OPTIONS", 7))) { VSLb(h2->vsl, SLT_BogoHeader, "Illegal :path pseudo-header"); return (H2SE_PROTOCOL_ERROR); //rfc7540,l,3068,3071 } assert(req->req_step == R_STP_TRANSPORT); VCL_TaskEnter(req->privs); VCL_TaskEnter(req->top->privs); req->task->func = h2_do_req; req->task->priv = req; r2->scheduled = 1; if (Pool_Task(wrk->pool, req->task, TASK_QUEUE_STR) != 0) { r2->scheduled = 0; r2->state = H2_S_CLOSED; return (H2SE_REFUSED_STREAM); //rfc7540,l,3326,3329 } return (0); } static h2_error v_matchproto_(h2_rxframe_f) h2_rx_headers(struct worker *wrk, struct h2_sess *h2, struct h2_req *r2) { struct req *req; h2_error h2e; const uint8_t *p; size_t l; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); ASSERT_RXTHR(h2); if (r2 == NULL) { if (h2->rxf_stream <= h2->highest_stream) return (H2CE_PROTOCOL_ERROR); // rfc7540,l,1153,1158 /* NB: we don't need to guard the read of h2->open_streams * because headers are handled sequentially so it cannot * increase under our 
feet. */ if (h2->open_streams >= h2->local_settings.max_concurrent_streams) { VSLb(h2->vsl, SLT_Debug, "H2: stream %u: Hit maximum number of " "concurrent streams", h2->rxf_stream); return (H2SE_REFUSED_STREAM); // rfc7540,l,1200,1205 } h2->highest_stream = h2->rxf_stream; r2 = h2_new_req(h2, h2->rxf_stream, NULL); } CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); if (r2->state != H2_S_IDLE) return (H2CE_PROTOCOL_ERROR); // XXX spec ? r2->state = H2_S_OPEN; req = r2->req; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); req->vsl->wid = VXID_Get(wrk, VSL_CLIENTMARKER); VSLb(req->vsl, SLT_Begin, "req %ju rxreq", VXID(req->sp->vxid)); VSL(SLT_Link, req->sp->vxid, "req %ju rxreq", VXID(req->vsl->wid)); h2->new_req = req; req->sp = h2->sess; req->transport = &HTTP2_transport; req->t_first = VTIM_real(); req->t_req = VTIM_real(); req->t_prev = req->t_first; VSLb_ts_req(req, "Start", req->t_first); req->acct.req_hdrbytes += h2->rxf_len; HTTP_Setup(req->http, req->ws, req->vsl, SLT_ReqMethod); http_SetH(req->http, HTTP_HDR_PROTO, "HTTP/2.0"); h2h_decode_init(h2); p = h2->rxf_data; l = h2->rxf_len; if (h2->rxf_flags & H2FF_HEADERS_PADDED) { if (*p + 1 > l) return (H2CE_PROTOCOL_ERROR); // rfc7540,l,1884,1887 l -= 1 + *p; p += 1; } if (h2->rxf_flags & H2FF_HEADERS_PRIORITY) { if (l < 5) return (H2CE_PROTOCOL_ERROR); l -= 5; p += 5; } h2e = h2h_decode_bytes(h2, p, l); if (h2e != NULL) { Lck_Lock(&h2->sess->mtx); VSLb(h2->vsl, SLT_Debug, "HPACK(hdr) %s", h2e->name); Lck_Unlock(&h2->sess->mtx); (void)h2h_decode_fini(h2); assert(!WS_IsReserved(r2->req->ws)); h2_del_req(wrk, r2); return (h2e); } if (h2->rxf_flags & H2FF_HEADERS_END_STREAM) req->req_body_status = BS_NONE; if (h2->rxf_flags & H2FF_HEADERS_END_HEADERS) return (h2_end_headers(wrk, h2, req, r2)); return (0); } /**********************************************************************/ static h2_error v_matchproto_(h2_rxframe_f) h2_rx_continuation(struct worker *wrk, struct h2_sess *h2, struct h2_req *r2) { struct req *req; h2_error h2e; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); ASSERT_RXTHR(h2); CHECK_OBJ_ORNULL(r2, H2_REQ_MAGIC); if (r2 == NULL || r2->state != H2_S_OPEN || r2->req != h2->new_req) return (H2CE_PROTOCOL_ERROR); // XXX spec ? req = r2->req; h2e = h2h_decode_bytes(h2, h2->rxf_data, h2->rxf_len); r2->req->acct.req_hdrbytes += h2->rxf_len; if (h2e != NULL) { Lck_Lock(&h2->sess->mtx); VSLb(h2->vsl, SLT_Debug, "HPACK(cont) %s", h2e->name); Lck_Unlock(&h2->sess->mtx); (void)h2h_decode_fini(h2); assert(!WS_IsReserved(r2->req->ws)); h2_del_req(wrk, r2); return (h2e); } if (h2->rxf_flags & H2FF_HEADERS_END_HEADERS) return (h2_end_headers(wrk, h2, req, r2)); return (0); } /**********************************************************************/ static h2_error v_matchproto_(h2_rxframe_f) h2_rx_data(struct worker *wrk, struct h2_sess *h2, struct h2_req *r2) { char buf[4]; ssize_t l; uint64_t l2, head; const uint8_t *src; unsigned len; /* XXX: Shouldn't error handling, setting of r2->error and * r2->cond signalling be handled more generally at the end of * procframe()??? */ CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); ASSERT_RXTHR(h2); CHECK_OBJ_ORNULL(r2, H2_REQ_MAGIC); if (r2 == NULL) return (0); if (r2->state >= H2_S_CLOS_REM) { r2->error = H2SE_STREAM_CLOSED; return (H2SE_STREAM_CLOSED); // rfc7540,l,1766,1769 } Lck_Lock(&h2->sess->mtx); CHECK_OBJ_ORNULL(r2->rxbuf, H2_RXBUF_MAGIC); if (h2->error != NULL || r2->error != NULL) { if (r2->cond) PTOK(pthread_cond_signal(r2->cond)); Lck_Unlock(&h2->sess->mtx); return (h2->error != NULL ? 
h2->error : r2->error); } /* Check padding if present */ src = h2->rxf_data; len = h2->rxf_len; if (h2->rxf_flags & H2FF_DATA_PADDED) { if (*src >= len) { VSLb(h2->vsl, SLT_Debug, "H2: stream %u: Padding larger than frame length", h2->rxf_stream); r2->error = H2CE_PROTOCOL_ERROR; if (r2->cond) PTOK(pthread_cond_signal(r2->cond)); Lck_Unlock(&h2->sess->mtx); return (H2CE_PROTOCOL_ERROR); } len -= 1 + *src; src += 1; } /* Check against the Content-Length header if given */ if (r2->req->htc->content_length >= 0) { if (r2->rxbuf) l = r2->rxbuf->head; else l = 0; l += len; if (l > r2->req->htc->content_length || ((h2->rxf_flags & H2FF_DATA_END_STREAM) && l != r2->req->htc->content_length)) { VSLb(h2->vsl, SLT_Debug, "H2: stream %u: Received data and Content-Length" " mismatch", h2->rxf_stream); r2->error = H2SE_PROTOCOL_ERROR; if (r2->cond) PTOK(pthread_cond_signal(r2->cond)); Lck_Unlock(&h2->sess->mtx); return (H2SE_PROTOCOL_ERROR); } } /* Check and charge connection window. The entire frame including * padding (h2->rxf_len) counts towards the window. */ if (h2->rxf_len > h2->req0->r_window) { VSLb(h2->vsl, SLT_Debug, "H2: stream %u: Exceeded connection receive window", h2->rxf_stream); r2->error = H2CE_FLOW_CONTROL_ERROR; if (r2->cond) PTOK(pthread_cond_signal(r2->cond)); Lck_Unlock(&h2->sess->mtx); return (H2CE_FLOW_CONTROL_ERROR); } h2->req0->r_window -= h2->rxf_len; if (h2->req0->r_window < cache_param->h2_rx_window_low_water) { h2->req0->r_window += cache_param->h2_rx_window_increment; vbe32enc(buf, cache_param->h2_rx_window_increment); Lck_Unlock(&h2->sess->mtx); H2_Send_Get(wrk, h2, h2->req0); H2_Send_Frame(wrk, h2, H2_F_WINDOW_UPDATE, 0, 4, 0, buf); H2_Send_Rel(h2, h2->req0); Lck_Lock(&h2->sess->mtx); } /* Check stream window. The entire frame including padding * (h2->rxf_len) counts towards the window. */ if (h2->rxf_len > r2->r_window) { VSLb(h2->vsl, SLT_Debug, "H2: stream %u: Exceeded stream receive window", h2->rxf_stream); r2->error = H2SE_FLOW_CONTROL_ERROR; if (r2->cond) PTOK(pthread_cond_signal(r2->cond)); Lck_Unlock(&h2->sess->mtx); return (H2SE_FLOW_CONTROL_ERROR); } /* Handle zero size frame before starting to allocate buffers */ if (len == 0) { r2->r_window -= h2->rxf_len; /* Handle the specific corner case where the entire window * has been exhausted using nothing but padding * bytes. Since no bytes have been buffered, no bytes * would be consumed by the request thread and no stream * window updates sent. Unpaint ourselves from this corner * by sending a stream window update here. */ CHECK_OBJ_ORNULL(r2->rxbuf, H2_RXBUF_MAGIC); if (r2->r_window == 0 && (r2->rxbuf == NULL || r2->rxbuf->tail == r2->rxbuf->head)) { if (r2->rxbuf) l = r2->rxbuf->size; else l = h2->local_settings.initial_window_size; r2->r_window += l; Lck_Unlock(&h2->sess->mtx); vbe32enc(buf, l); H2_Send_Get(wrk, h2, h2->req0); H2_Send_Frame(wrk, h2, H2_F_WINDOW_UPDATE, 0, 4, r2->stream, buf); H2_Send_Rel(h2, h2->req0); Lck_Lock(&h2->sess->mtx); } if (h2->rxf_flags & H2FF_DATA_END_STREAM) r2->state = H2_S_CLOS_REM; if (r2->cond) PTOK(pthread_cond_signal(r2->cond)); Lck_Unlock(&h2->sess->mtx); return (0); } /* Make the buffer on demand */ if (r2->rxbuf == NULL) { unsigned bufsize; size_t bstest; struct stv_buffer *stvbuf; struct h2_rxbuf *rxbuf; Lck_Unlock(&h2->sess->mtx); bufsize = h2->local_settings.initial_window_size; if (bufsize < r2->r_window) { /* This will not happen because we do not have any * mechanism to change the initial window size on * a running session. 
But if we gain that ability, * this future proofs it. */ bufsize = r2->r_window; } assert(bufsize > 0); if ((h2->rxf_flags & H2FF_DATA_END_STREAM) && bufsize > len) /* Cap the buffer size when we know this is the * single data frame. */ bufsize = len; CHECK_OBJ_NOTNULL(stv_h2_rxbuf, STEVEDORE_MAGIC); stvbuf = STV_AllocBuf(wrk, stv_h2_rxbuf, bufsize + sizeof *rxbuf); if (stvbuf == NULL) { VSLb(h2->vsl, SLT_Debug, "H2: stream %u: Failed to allocate request body" " buffer", h2->rxf_stream); Lck_Lock(&h2->sess->mtx); r2->error = H2SE_INTERNAL_ERROR; if (r2->cond) PTOK(pthread_cond_signal(r2->cond)); Lck_Unlock(&h2->sess->mtx); return (H2SE_INTERNAL_ERROR); } rxbuf = STV_GetBufPtr(stvbuf, &bstest); AN(rxbuf); assert(bstest >= bufsize + sizeof *rxbuf); assert(PAOK(rxbuf)); INIT_OBJ(rxbuf, H2_RXBUF_MAGIC); rxbuf->size = bufsize; rxbuf->stvbuf = stvbuf; r2->rxbuf = rxbuf; Lck_Lock(&h2->sess->mtx); } CHECK_OBJ_NOTNULL(r2->rxbuf, H2_RXBUF_MAGIC); assert(r2->rxbuf->tail <= r2->rxbuf->head); l = r2->rxbuf->head - r2->rxbuf->tail; assert(l <= r2->rxbuf->size); l = r2->rxbuf->size - l; assert(len <= l); /* Stream window handling ensures this */ Lck_Unlock(&h2->sess->mtx); l = len; head = r2->rxbuf->head; do { l2 = l; if ((head % r2->rxbuf->size) + l2 > r2->rxbuf->size) l2 = r2->rxbuf->size - (head % r2->rxbuf->size); assert(l2 > 0); memcpy(&r2->rxbuf->data[head % r2->rxbuf->size], src, l2); src += l2; head += l2; l -= l2; } while (l > 0); Lck_Lock(&h2->sess->mtx); /* Charge stream window. The entire frame including padding * (h2->rxf_len) counts towards the window. The used padding * bytes will be included in the next connection window update * sent when the buffer bytes are consumed because that is * calculated against the available buffer space. */ r2->r_window -= h2->rxf_len; r2->rxbuf->head += len; assert(r2->rxbuf->tail <= r2->rxbuf->head); if (h2->rxf_flags & H2FF_DATA_END_STREAM) r2->state = H2_S_CLOS_REM; if (r2->cond) PTOK(pthread_cond_signal(r2->cond)); Lck_Unlock(&h2->sess->mtx); return (0); } static enum vfp_status v_matchproto_(vfp_pull_f) h2_vfp_body(struct vfp_ctx *vc, struct vfp_entry *vfe, void *ptr, ssize_t *lp) { struct h2_req *r2; struct h2_sess *h2; enum vfp_status retval; ssize_t l, l2; uint64_t tail; uint8_t *dst; char buf[4]; int i; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vfe, VFP_ENTRY_MAGIC); CAST_OBJ_NOTNULL(r2, vfe->priv1, H2_REQ_MAGIC); h2 = r2->h2sess; AN(ptr); AN(lp); assert(*lp >= 0); Lck_Lock(&h2->sess->mtx); r2->cond = &vc->wrk->cond; while (1) { CHECK_OBJ_ORNULL(r2->rxbuf, H2_RXBUF_MAGIC); if (r2->rxbuf) { assert(r2->rxbuf->tail <= r2->rxbuf->head); l = r2->rxbuf->head - r2->rxbuf->tail; } else l = 0; if (h2->error != NULL || r2->error != NULL) retval = VFP_ERROR; else if (r2->state >= H2_S_CLOS_REM && l <= *lp) retval = VFP_END; else { if (l > *lp) l = *lp; retval = VFP_OK; } if (retval != VFP_OK || l > 0) break; i = Lck_CondWaitTimeout(r2->cond, &h2->sess->mtx, SESS_TMO(h2->sess, timeout_idle)); if (i == ETIMEDOUT) { retval = VFP_ERROR; break; } } r2->cond = NULL; Lck_Unlock(&h2->sess->mtx); if (l == 0 || retval == VFP_ERROR) { *lp = 0; return (retval); } *lp = l; dst = ptr; tail = r2->rxbuf->tail; do { l2 = l; if ((tail % r2->rxbuf->size) + l2 > r2->rxbuf->size) l2 = r2->rxbuf->size - (tail % r2->rxbuf->size); assert(l2 > 0); memcpy(dst, &r2->rxbuf->data[tail % r2->rxbuf->size], l2); dst += l2; tail += l2; l -= l2; } while (l > 0); Lck_Lock(&h2->sess->mtx); CHECK_OBJ_NOTNULL(r2->rxbuf, H2_RXBUF_MAGIC); r2->rxbuf->tail = tail; assert(r2->rxbuf->tail 
<= r2->rxbuf->head); if (r2->r_window < cache_param->h2_rx_window_low_water && r2->state < H2_S_CLOS_REM) { /* l is free buffer space */ /* l2 is calculated window increment */ l = r2->rxbuf->size - (r2->rxbuf->head - r2->rxbuf->tail); assert(r2->r_window <= l); l2 = cache_param->h2_rx_window_increment; if (r2->r_window + l2 > l) l2 = l - r2->r_window; r2->r_window += l2; } else l2 = 0; Lck_Unlock(&h2->sess->mtx); if (l2 > 0) { vbe32enc(buf, l2); H2_Send_Get(vc->wrk, h2, r2); H2_Send_Frame(vc->wrk, h2, H2_F_WINDOW_UPDATE, 0, 4, r2->stream, buf); H2_Send_Rel(h2, r2); } return (retval); } static void h2_vfp_body_fini(struct vfp_ctx *vc, struct vfp_entry *vfe) { struct h2_req *r2; struct h2_sess *h2; struct stv_buffer *stvbuf = NULL; CHECK_OBJ_NOTNULL(vc, VFP_CTX_MAGIC); CHECK_OBJ_NOTNULL(vfe, VFP_ENTRY_MAGIC); CAST_OBJ_NOTNULL(r2, vfe->priv1, H2_REQ_MAGIC); CHECK_OBJ_NOTNULL(r2->req, REQ_MAGIC); h2 = r2->h2sess; if (vc->failed) { CHECK_OBJ_NOTNULL(r2->req->wrk, WORKER_MAGIC); H2_Send_Get(r2->req->wrk, h2, r2); H2_Send_RST(r2->req->wrk, h2, r2, r2->stream, H2SE_REFUSED_STREAM); H2_Send_Rel(h2, r2); Lck_Lock(&h2->sess->mtx); r2->error = H2SE_REFUSED_STREAM; Lck_Unlock(&h2->sess->mtx); } if (r2->state >= H2_S_CLOS_REM && r2->rxbuf != NULL) { Lck_Lock(&h2->sess->mtx); CHECK_OBJ_ORNULL(r2->rxbuf, H2_RXBUF_MAGIC); if (r2->rxbuf != NULL) { stvbuf = r2->rxbuf->stvbuf; r2->rxbuf = NULL; } Lck_Unlock(&h2->sess->mtx); if (stvbuf != NULL) { STV_FreeBuf(vc->wrk, &stvbuf); AZ(stvbuf); } } } static const struct vfp h2_body = { .name = "H2_BODY", .pull = h2_vfp_body, .fini = h2_vfp_body_fini }; void v_matchproto_(vtr_req_body_t) h2_req_body(struct req *req) { struct h2_req *r2; struct vfp_entry *vfe; CHECK_OBJ(req, REQ_MAGIC); CAST_OBJ_NOTNULL(r2, req->transport_priv, H2_REQ_MAGIC); vfe = VFP_Push(req->vfc, &h2_body); AN(vfe); vfe->priv1 = r2; } /**********************************************************************/ void v_matchproto_(vtr_req_fail_f) h2_req_fail(struct req *req, stream_close_t reason) { assert(reason != SC_NULL); assert(req->sp->fd != 0); VSLb(req->vsl, SLT_Debug, "H2FAILREQ"); } /**********************************************************************/ static enum htc_status_e v_matchproto_(htc_complete_f) h2_frame_complete(struct http_conn *htc) { struct h2_sess *h2; CHECK_OBJ_NOTNULL(htc, HTTP_CONN_MAGIC); CAST_OBJ_NOTNULL(h2, htc->priv, H2_SESS_MAGIC); if (htc->rxbuf_b + 9 > htc->rxbuf_e || htc->rxbuf_b + 9 + (vbe32dec(htc->rxbuf_b) >> 8) > htc->rxbuf_e) return (HTC_S_MORE); return (HTC_S_COMPLETE); } /**********************************************************************/ static h2_error h2_procframe(struct worker *wrk, struct h2_sess *h2, h2_frame h2f) { struct h2_req *r2; h2_error h2e; ASSERT_RXTHR(h2); if (h2->rxf_stream == 0 && h2f->act_szero != 0) return (h2f->act_szero); if (h2->rxf_stream != 0 && h2f->act_snonzero != 0) return (h2f->act_snonzero); if (h2->rxf_stream > h2->highest_stream && h2f->act_sidle != 0) return (h2f->act_sidle); if (h2->rxf_stream != 0 && !(h2->rxf_stream & 1)) { // rfc7540,l,1140,1145 // rfc7540,l,1153,1158 /* No even streams, we don't do PUSH_PROMISE */ Lck_Lock(&h2->sess->mtx); VSLb(h2->vsl, SLT_Debug, "H2: illegal stream (=%u)", h2->rxf_stream); Lck_Unlock(&h2->sess->mtx); return (H2CE_PROTOCOL_ERROR); } VTAILQ_FOREACH(r2, &h2->streams, list) if (r2->stream == h2->rxf_stream) break; if (h2->new_req != NULL && h2f != H2_F_CONTINUATION) return (H2CE_PROTOCOL_ERROR); // rfc7540,l,1859,1863 h2e = h2f->rxfunc(wrk, h2, r2); if (h2e == NULL) return (NULL); if 
(h2->rxf_stream == 0 || h2e->connection) return (h2e); // Connection errors one level up H2_Send_Get(wrk, h2, h2->req0); H2_Send_RST(wrk, h2, h2->req0, h2->rxf_stream, h2e); H2_Send_Rel(h2, h2->req0); return (NULL); } h2_error h2_stream_tmo(struct h2_sess *h2, const struct h2_req *r2, vtim_real now) { h2_error h2e = NULL; CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); Lck_AssertHeld(&h2->sess->mtx); /* NB: when now is NAN, it means that h2_window_timeout was hit * on a lock condwait operation. */ if (isnan(now)) AN(r2->t_winupd); if (h2->error != NULL && h2->error->connection && !h2->error->send_goaway) return (h2->error); if (r2->t_winupd == 0 && r2->t_send == 0) return (NULL); if (isnan(now) || (r2->t_winupd != 0 && now - r2->t_winupd > cache_param->h2_window_timeout)) { VSLb(h2->vsl, SLT_Debug, "H2: stream %u: Hit h2_window_timeout", r2->stream); h2e = H2SE_BROKE_WINDOW; } if (h2e == NULL && r2->t_send != 0 && now - r2->t_send > SESS_TMO(h2->sess, send_timeout)) { VSLb(h2->vsl, SLT_Debug, "H2: stream %u: Hit send_timeout", r2->stream); h2e = H2SE_CANCEL; } return (h2e); } static h2_error h2_stream_tmo_unlocked(struct h2_sess *h2, const struct h2_req *r2) { h2_error h2e; Lck_Lock(&h2->sess->mtx); h2e = h2_stream_tmo(h2, r2, h2->sess->t_idle); Lck_Unlock(&h2->sess->mtx); return (h2e); } /* * This is the janitorial task of cleaning up any closed & refused * streams, and checking if the session is timed out. */ static h2_error h2_sweep(struct worker *wrk, struct h2_sess *h2) { struct h2_req *r2, *r22; h2_error h2e, tmo; vtim_real now; ASSERT_RXTHR(h2); h2e = h2->error; now = VTIM_real(); if (h2e == NULL && h2->open_streams == 0 && h2->sess->t_idle + cache_param->timeout_idle < now) h2e = H2CE_NO_ERROR; h2->do_sweep = 0; VTAILQ_FOREACH_SAFE(r2, &h2->streams, list, r22) { if (r2 == h2->req0) { assert (r2->state == H2_S_IDLE); continue; } switch (r2->state) { case H2_S_CLOSED: if (!r2->scheduled) h2_del_req(wrk, r2); break; case H2_S_CLOS_REM: if (!r2->scheduled) { H2_Send_Get(wrk, h2, h2->req0); H2_Send_RST(wrk, h2, h2->req0, r2->stream, H2SE_REFUSED_STREAM); H2_Send_Rel(h2, h2->req0); h2_del_req(wrk, r2); continue; } /* FALLTHROUGH */ case H2_S_CLOS_LOC: case H2_S_OPEN: tmo = h2_stream_tmo_unlocked(h2, r2); if (h2e == NULL) h2e = tmo; break; case H2_S_IDLE: /* Current code make this unreachable: h2_new_req is * only called inside h2_rx_headers, which immediately * sets the new stream state to H2_S_OPEN */ /* FALLTHROUGH */ default: WRONG("Wrong h2 stream state"); break; } } return (h2e); } /*********************************************************************** * Called in loop from h2_new_session() */ #define H2_FRAME(l,U,...) const struct h2_frame_s H2_F_##U[1] = \ {{ #U, h2_rx_##l, __VA_ARGS__ }}; #include "tbl/h2_frames.h" static const h2_frame h2flist[] = { #define H2_FRAME(l,U,t,...) 
[t] = H2_F_##U, #include "tbl/h2_frames.h" }; #define H2FMAX (sizeof(h2flist) / sizeof(h2flist[0])) int h2_rxframe(struct worker *wrk, struct h2_sess *h2) { enum htc_status_e hs; h2_frame h2f; h2_error h2e; ASSERT_RXTHR(h2); if (h2->goaway && h2->open_streams == 0) return (0); VTCP_blocking(*h2->htc->rfd); hs = HTC_RxStuff(h2->htc, h2_frame_complete, NULL, NULL, NAN, VTIM_real() + 0.5, NAN, h2->local_settings.max_frame_size + 9); h2e = NULL; switch (hs) { case HTC_S_COMPLETE: h2->sess->t_idle = VTIM_real(); if (h2->do_sweep) h2e = h2_sweep(wrk, h2); break; case HTC_S_TIMEOUT: h2e = h2_sweep(wrk, h2); break; default: h2e = H2CE_ENHANCE_YOUR_CALM; } if (h2e != NULL && h2e->connection) { h2->error = h2e; h2_tx_goaway(wrk, h2, h2e); return (0); } if (hs != HTC_S_COMPLETE) return (1); h2->rxf_len = vbe32dec(h2->htc->rxbuf_b) >> 8; h2->rxf_type = h2->htc->rxbuf_b[3]; h2->rxf_flags = h2->htc->rxbuf_b[4]; h2->rxf_stream = vbe32dec(h2->htc->rxbuf_b + 5); h2->rxf_stream &= ~(1LU<<31); // rfc7540,l,690,692 h2->rxf_data = (void*)(h2->htc->rxbuf_b + 9); /* XXX: later full DATA will not be rx'ed yet. */ HTC_RxPipeline(h2->htc, h2->htc->rxbuf_b + h2->rxf_len + 9); h2_vsl_frame(h2, h2->htc->rxbuf_b, 9L + h2->rxf_len); h2->srq->acct.req_hdrbytes += 9; if (h2->rxf_type >= H2FMAX) { // rfc7540,l,679,681 // XXX: later, drain rest of frame h2->bogosity++; Lck_Lock(&h2->sess->mtx); VSLb(h2->vsl, SLT_Debug, "H2: Unknown frame type 0x%02x (ignored)", (uint8_t)h2->rxf_type); Lck_Unlock(&h2->sess->mtx); h2->srq->acct.req_bodybytes += h2->rxf_len; return (1); } h2f = h2flist[h2->rxf_type]; AN(h2f->name); AN(h2f->rxfunc); if (h2f->overhead) h2->srq->acct.req_bodybytes += h2->rxf_len; if (h2->rxf_flags & ~h2f->flags) { // rfc7540,l,687,688 h2->bogosity++; Lck_Lock(&h2->sess->mtx); VSLb(h2->vsl, SLT_Debug, "H2: Unknown flags 0x%02x on %s (ignored)", (uint8_t)h2->rxf_flags & ~h2f->flags, h2f->name); Lck_Unlock(&h2->sess->mtx); h2->rxf_flags &= h2f->flags; } h2e = h2_procframe(wrk, h2, h2f); if (h2->error == NULL && h2e != NULL) { h2->error = h2e; h2_tx_goaway(wrk, h2, h2e); } return (h2->error != NULL ? 0 : 1); } varnish-7.5.0/bin/varnishd/http2/cache_http2_send.c000066400000000000000000000256231457605730600222120ustar00rootroot00000000000000/*- * Copyright (c) 2016 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include #include "cache/cache_varnishd.h" #include "cache/cache_transport.h" #include "http2/cache_http2.h" #include "vend.h" #include "vtim.h" #define H2_SEND_HELD(h2, r2) (VTAILQ_FIRST(&(h2)->txqueue) == (r2)) static h2_error h2_errcheck(const struct h2_req *r2, const struct h2_sess *h2) { CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); if (r2->error != NULL) return (r2->error); if (h2->error != NULL && r2->stream > h2->goaway_last_stream) return (h2->error); return (NULL); } static int h2_cond_wait(pthread_cond_t *cond, struct h2_sess *h2, struct h2_req *r2) { vtim_dur tmo = 0.; vtim_real now; h2_error h2e; int r; AN(cond); CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); Lck_AssertHeld(&h2->sess->mtx); if (cache_param->h2_window_timeout > 0.) tmo = cache_param->h2_window_timeout; r = Lck_CondWaitTimeout(cond, &h2->sess->mtx, tmo); assert(r == 0 || r == ETIMEDOUT); now = VTIM_real(); /* NB: when we grab h2_window_timeout before acquiring the session * lock we may time out, but once we wake up both send_timeout and * h2_window_timeout may have changed meanwhile. For this reason * h2_stream_tmo() may not log what timed out and we need to call * again with a magic NAN "now" that indicates to h2_stream_tmo() * that the stream reached the h2_window_timeout via the lock and * force it to log it. */ h2e = h2_stream_tmo(h2, r2, now); if (h2e == NULL && r == ETIMEDOUT) { h2e = h2_stream_tmo(h2, r2, NAN); AN(h2e); } if (r2->error == NULL) r2->error = h2e; return (h2e != NULL ? 
-1 : 0); } static void h2_send_get_locked(struct worker *wrk, struct h2_sess *h2, struct h2_req *r2) { CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); Lck_AssertHeld(&h2->sess->mtx); if (&wrk->cond == h2->cond) ASSERT_RXTHR(h2); r2->wrk = wrk; VTAILQ_INSERT_TAIL(&h2->txqueue, r2, tx_list); while (!H2_SEND_HELD(h2, r2)) AZ(Lck_CondWait(&wrk->cond, &h2->sess->mtx)); r2->wrk = NULL; } void H2_Send_Get(struct worker *wrk, struct h2_sess *h2, struct h2_req *r2) { CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); Lck_Lock(&h2->sess->mtx); h2_send_get_locked(wrk, h2, r2); Lck_Unlock(&h2->sess->mtx); } static void h2_send_rel_locked(struct h2_sess *h2, const struct h2_req *r2) { CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); Lck_AssertHeld(&h2->sess->mtx); AN(H2_SEND_HELD(h2, r2)); VTAILQ_REMOVE(&h2->txqueue, r2, tx_list); r2 = VTAILQ_FIRST(&h2->txqueue); if (r2 != NULL) { CHECK_OBJ_NOTNULL(r2->wrk, WORKER_MAGIC); PTOK(pthread_cond_signal(&r2->wrk->cond)); } } void H2_Send_Rel(struct h2_sess *h2, const struct h2_req *r2) { CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); Lck_Lock(&h2->sess->mtx); h2_send_rel_locked(h2, r2); Lck_Unlock(&h2->sess->mtx); } static void h2_mk_hdr(uint8_t *hdr, h2_frame ftyp, uint8_t flags, uint32_t len, uint32_t stream) { AN(hdr); assert(len < (1U << 24)); vbe32enc(hdr, len << 8); hdr[3] = ftyp->type; hdr[4] = flags; vbe32enc(hdr + 5, stream); } /* * This is the "raw" frame sender, all per-stream accounting and * prioritization must have happened before this is called, and * the session mtx must be held. */ void H2_Send_Frame(struct worker *wrk, struct h2_sess *h2, h2_frame ftyp, uint8_t flags, uint32_t len, uint32_t stream, const void *ptr) { uint8_t hdr[9]; ssize_t s; struct iovec iov[2]; (void)wrk; AN(ftyp); AZ(flags & ~(ftyp->flags)); if (stream == 0) AZ(ftyp->act_szero); else AZ(ftyp->act_snonzero); h2_mk_hdr(hdr, ftyp, flags, len, stream); Lck_Lock(&h2->sess->mtx); VSLb_bin(h2->vsl, SLT_H2TxHdr, 9, hdr); h2->srq->acct.resp_hdrbytes += 9; if (ftyp->overhead) h2->srq->acct.resp_bodybytes += len; Lck_Unlock(&h2->sess->mtx); memset(iov, 0, sizeof iov); iov[0].iov_base = (void*)hdr; iov[0].iov_len = sizeof hdr; iov[1].iov_base = TRUST_ME(ptr); iov[1].iov_len = len; s = writev(h2->sess->fd, iov, len == 0 ? 1 : 2); if (s != sizeof hdr + len) { if (errno == EWOULDBLOCK) { VSLb(h2->vsl, SLT_Debug, "H2: stream %u: Hit idle_send_timeout", stream); } /* * There is no point in being nice here, we will be unable * to send a GOAWAY once the code unrolls, so go directly * to the finale and be done with it. 
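	 * Setting h2->error below is what stops the rx loop in
	 * h2_rxframe(), so the session unwinds and is torn down without
	 * trying to send a GOAWAY on a socket we can no longer trust.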
*/ h2->error = H2CE_PROTOCOL_ERROR; } else if (len > 0) { Lck_Lock(&h2->sess->mtx); VSLb_bin(h2->vsl, SLT_H2TxBody, len, ptr); Lck_Unlock(&h2->sess->mtx); } } static int64_t h2_win_limit(const struct h2_req *r2, const struct h2_sess *h2) { CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); CHECK_OBJ_NOTNULL(h2->req0, H2_REQ_MAGIC); Lck_AssertHeld(&h2->sess->mtx); return (vmin_t(int64_t, r2->t_window, h2->req0->t_window)); } static void h2_win_charge(struct h2_req *r2, const struct h2_sess *h2, uint32_t w) { CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); CHECK_OBJ_NOTNULL(h2->req0, H2_REQ_MAGIC); Lck_AssertHeld(&h2->sess->mtx); r2->t_window -= w; h2->req0->t_window -= w; } static int64_t h2_do_window(struct worker *wrk, struct h2_req *r2, struct h2_sess *h2, int64_t wanted) { int64_t w = 0; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); if (wanted == 0) return (0); Lck_Lock(&h2->sess->mtx); if (r2->t_window <= 0 || h2->req0->t_window <= 0) { r2->t_winupd = VTIM_real(); h2_send_rel_locked(h2, r2); assert(h2->winup_streams >= 0); h2->winup_streams++; while (r2->t_window <= 0 && h2_errcheck(r2, h2) == NULL) { r2->cond = &wrk->cond; (void)h2_cond_wait(r2->cond, h2, r2); r2->cond = NULL; } while (h2->req0->t_window <= 0 && h2_errcheck(r2, h2) == NULL) (void)h2_cond_wait(h2->winupd_cond, h2, r2); if (h2_errcheck(r2, h2) == NULL) { w = vmin_t(int64_t, h2_win_limit(r2, h2), wanted); h2_win_charge(r2, h2, w); assert (w > 0); } if (r2->error == H2SE_BROKE_WINDOW && h2->open_streams <= h2->winup_streams) h2->error = r2->error = H2CE_BANKRUPT; assert(h2->winup_streams > 0); h2->winup_streams--; h2_send_get_locked(wrk, h2, r2); } if (w == 0 && h2_errcheck(r2, h2) == NULL) { assert(r2->t_window > 0); assert(h2->req0->t_window > 0); w = h2_win_limit(r2, h2); if (w > wanted) w = wanted; h2_win_charge(r2, h2, w); assert (w > 0); } r2->t_winupd = 0; Lck_Unlock(&h2->sess->mtx); return (w); } /* * This is the per-stream frame sender. * XXX: priority */ static void h2_send(struct worker *wrk, struct h2_req *r2, h2_frame ftyp, uint8_t flags, uint32_t len, const void *ptr, uint64_t *counter) { struct h2_sess *h2; uint32_t mfs, tf; const char *p; uint8_t final_flags; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); h2 = r2->h2sess; CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); assert(len == 0 || ptr != NULL); AN(counter); AN(H2_SEND_HELD(h2, r2)); if (h2_errcheck(r2, h2) != NULL) return; AN(ftyp); AZ(flags & ~(ftyp->flags)); if (r2->stream == 0) AZ(ftyp->act_szero); else AZ(ftyp->act_snonzero); Lck_Lock(&h2->sess->mtx); mfs = h2->remote_settings.max_frame_size; if (r2->counted && ( (ftyp == H2_F_HEADERS && (flags & H2FF_HEADERS_END_STREAM)) || (ftyp == H2_F_DATA && (flags & H2FF_DATA_END_STREAM)) || ftyp == H2_F_RST_STREAM )) { assert(h2->open_streams > 0); h2->open_streams--; r2->counted = 0; } Lck_Unlock(&h2->sess->mtx); if (ftyp->respect_window) { tf = h2_do_window(wrk, r2, h2, (len > mfs) ? mfs : len); if (h2_errcheck(r2, h2) != NULL) return; AN(H2_SEND_HELD(h2, r2)); } else tf = mfs; if (len <= tf) { H2_Send_Frame(wrk, h2, ftyp, flags, len, r2->stream, ptr); *counter += len; } else { AN(ptr); p = ptr; final_flags = ftyp->final_flags & flags; flags &= ~ftyp->final_flags; do { AN(ftyp->continuation); if (!ftyp->respect_window) tf = mfs; if (ftyp->respect_window && p != ptr) { tf = h2_do_window(wrk, r2, h2, (len > mfs) ? 
mfs : len); if (h2_errcheck(r2, h2) != NULL) return; AN(H2_SEND_HELD(h2, r2)); } if (tf < len) { H2_Send_Frame(wrk, h2, ftyp, flags, tf, r2->stream, p); } else { if (ftyp->respect_window) assert(tf == len); tf = len; H2_Send_Frame(wrk, h2, ftyp, final_flags, tf, r2->stream, p); flags = 0; } p += tf; len -= tf; *counter += tf; ftyp = ftyp->continuation; flags &= ftyp->flags; final_flags &= ftyp->flags; } while (h2->error == NULL && len > 0); } } void H2_Send_RST(struct worker *wrk, struct h2_sess *h2, const struct h2_req *r2, uint32_t stream, h2_error h2e) { char b[4]; CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); CHECK_OBJ_NOTNULL(r2, H2_REQ_MAGIC); AN(H2_SEND_HELD(h2, r2)); AN(h2e); Lck_Lock(&h2->sess->mtx); VSLb(h2->vsl, SLT_Debug, "H2: stream %u: %s", stream, h2e->txt); Lck_Unlock(&h2->sess->mtx); vbe32enc(b, h2e->val); H2_Send_Frame(wrk, h2, H2_F_RST_STREAM, 0, sizeof b, stream, b); } void H2_Send(struct worker *wrk, struct h2_req *r2, h2_frame ftyp, uint8_t flags, uint32_t len, const void *ptr, uint64_t *counter) { uint64_t dummy_counter = 0; h2_error h2e; if (counter == NULL) counter = &dummy_counter; h2_send(wrk, r2, ftyp, flags, len, ptr, counter); h2e = h2_errcheck(r2, r2->h2sess); if (h2e != NULL && h2e->val == H2SE_CANCEL->val) H2_Send_RST(wrk, r2->h2sess, r2, r2->stream, h2e); } varnish-7.5.0/bin/varnishd/http2/cache_http2_session.c000066400000000000000000000275611457605730600227470ustar00rootroot00000000000000/*- * Copyright (c) 2016 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include "cache/cache_varnishd.h" #include #include "cache/cache_transport.h" #include "http2/cache_http2.h" #include "vend.h" #include "vtcp.h" static const char h2_resp_101[] = "HTTP/1.1 101 Switching Protocols\r\n" "Connection: Upgrade\r\n" "Upgrade: h2c\r\n" "\r\n"; static const char H2_prism[24] = { 0x50, 0x52, 0x49, 0x20, 0x2a, 0x20, 0x48, 0x54, 0x54, 0x50, 0x2f, 0x32, 0x2e, 0x30, 0x0d, 0x0a, 0x0d, 0x0a, 0x53, 0x4d, 0x0d, 0x0a, 0x0d, 0x0a }; static size_t h2_enc_settings(const struct h2_settings *h2s, uint8_t *buf, ssize_t n) { uint8_t *p = buf; #define H2_SETTING(U,l,v,d,...) 
\ if (h2s->l != d) { \ n -= 6; \ assert(n >= 0); \ vbe16enc(p, v); \ p += 2; \ vbe32enc(p, h2s->l); \ p += 4; \ } #include "tbl/h2_settings.h" return (p - buf); } static const struct h2_settings H2_proto_settings = { #define H2_SETTING(U,l,v,d,...) . l = d, #include "tbl/h2_settings.h" }; static void h2_local_settings(struct h2_settings *h2s) { *h2s = H2_proto_settings; #define H2_SETTINGS_PARAM_ONLY #define H2_SETTING(U, l, ...) \ h2s->l = cache_param->h2_##l; #include "tbl/h2_settings.h" #undef H2_SETTINGS_PARAM_ONLY } /********************************************************************** * The h2_sess struct needs many of the same things as a request, * WS, VSL, HTC &c, but rather than implement all that stuff over, we * grab an actual struct req, and mirror the relevant fields into * struct h2_sess. */ static struct h2_sess * h2_init_sess(struct sess *sp, struct h2_sess *h2s, struct req *srq, struct h2h_decode *decode) { uintptr_t *up; struct h2_sess *h2; /* proto_priv session attribute will always have been set up by H1 * before reaching here. */ AZ(SES_Get_proto_priv(sp, &up)); assert(*up == 0); if (srq == NULL) srq = Req_New(sp); AN(srq); h2 = h2s; AN(h2); INIT_OBJ(h2, H2_SESS_MAGIC); h2->srq = srq; h2->htc = srq->htc; h2->ws = srq->ws; h2->vsl = srq->vsl; VSL_Flush(h2->vsl, 0); h2->vsl->wid = sp->vxid; h2->htc->rfd = &sp->fd; h2->sess = sp; h2->rxthr = pthread_self(); PTOK(pthread_cond_init(h2->winupd_cond, NULL)); VTAILQ_INIT(&h2->streams); VTAILQ_INIT(&h2->txqueue); h2_local_settings(&h2->local_settings); h2->remote_settings = H2_proto_settings; h2->decode = decode; h2->rapid_reset = cache_param->h2_rapid_reset; h2->rapid_reset_limit = cache_param->h2_rapid_reset_limit; h2->rapid_reset_period = cache_param->h2_rapid_reset_period; h2->rst_budget = h2->rapid_reset_limit; h2->last_rst = sp->t_open; AZ(isnan(h2->last_rst)); AZ(VHT_Init(h2->dectbl, h2->local_settings.header_table_size)); *up = (uintptr_t)h2; return (h2); } static void h2_del_sess(struct worker *wrk, struct h2_sess *h2, stream_close_t reason) { struct sess *sp; struct req *req; CHECK_OBJ_NOTNULL(h2, H2_SESS_MAGIC); AZ(h2->refcnt); assert(VTAILQ_EMPTY(&h2->streams)); AN(reason); VHT_Fini(h2->dectbl); PTOK(pthread_cond_destroy(h2->winupd_cond)); TAKE_OBJ_NOTNULL(req, &h2->srq, REQ_MAGIC); assert(!WS_IsReserved(req->ws)); sp = h2->sess; Req_Cleanup(sp, wrk, req); Req_Release(req); SES_Delete(sp, reason, NAN); } /**********************************************************************/ enum htc_status_e v_matchproto_(htc_complete_f) H2_prism_complete(struct http_conn *htc) { size_t sz; CHECK_OBJ_NOTNULL(htc, HTTP_CONN_MAGIC); sz = sizeof(H2_prism); if (htc->rxbuf_b + sz > htc->rxbuf_e) sz = htc->rxbuf_e - htc->rxbuf_b; if (memcmp(htc->rxbuf_b, H2_prism, sz)) return (HTC_S_JUNK); return (sz == sizeof(H2_prism) ? HTC_S_COMPLETE : HTC_S_MORE); } /********************************************************************** * Deal with the base64url (NB: ...url!) encoded SETTINGS in the H1 req * of a H2C upgrade. */ static int h2_b64url_settings(struct h2_sess *h2, struct req *req) { const char *p, *q; uint8_t u[6], *up; unsigned x; int i, n; static const char s[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" "abcdefghijklmnopqrstuvwxyz" "0123456789" "-_="; /* * If there is trouble with this, we could reject the upgrade * but putting this on the H1 side is just plain wrong... 
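	 *
	 * The HTTP2-Settings header carries the client's SETTINGS payload
	 * in base64url form (the '-'/'_' alphabet).  The loop below decodes
	 * six bits per character and feeds every complete 6-byte group (a
	 * 16-bit setting identifier followed by its 32-bit value) to
	 * h2_set_setting(); anything that does not decode cleanly makes us
	 * return -1 and the upgrade attempt is abandoned.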
*/ if (!http_GetHdr(req->http, H_HTTP2_Settings, &p)) return (-1); AN(p); VSLb(req->vsl, SLT_Debug, "H2CS %s", p); n = 0; x = 0; up = u; for (;*p; p++) { q = strchr(s, *p); if (q == NULL) return (-1); i = q - s; assert(i >= 0 && i <= 64); x <<= 6; x |= i; n += 6; if (n < 8) continue; *up++ = (uint8_t)(x >> (n - 8)); n -= 8; if (up == u + sizeof u) { AZ(n); if (h2_set_setting(h2, (void*)u)) return (-1); up = u; } } if (up != u) return (-1); return (0); } /**********************************************************************/ static int h2_ou_rel(struct worker *wrk, struct req *req) { CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); AZ(req->vcl); Req_AcctLogCharge(wrk->stats, req); Req_Release(req); return (0); } static int h2_ou_session(struct worker *wrk, struct h2_sess *h2, struct req *req) { ssize_t sz; enum htc_status_e hs; struct h2_req *r2; if (h2_b64url_settings(h2, req)) { VSLb(h2->vsl, SLT_Debug, "H2: Bad HTTP-Settings"); return (h2_ou_rel(wrk, req)); } sz = write(h2->sess->fd, h2_resp_101, strlen(h2_resp_101)); VTCP_Assert(sz); if (sz != strlen(h2_resp_101)) { VSLb(h2->vsl, SLT_Debug, "H2: Upgrade: Error writing 101" " response: %s\n", VAS_errtxt(errno)); return (h2_ou_rel(wrk, req)); } http_Unset(req->http, H_Upgrade); http_Unset(req->http, H_HTTP2_Settings); /* Steal pipelined read-ahead, if any */ h2->htc->pipeline_b = req->htc->pipeline_b; h2->htc->pipeline_e = req->htc->pipeline_e; req->htc->pipeline_b = NULL; req->htc->pipeline_e = NULL; /* XXX: This call may assert on buffer overflow if the pipelined data exceeds the available space in the ws workspace. What to do about the overflowing data is an open issue. */ HTC_RxInit(h2->htc, h2->ws); /* Start req thread */ r2 = h2_new_req(h2, 1, req); req->transport = &HTTP2_transport; assert(req->req_step == R_STP_TRANSPORT); req->task->func = h2_do_req; req->task->priv = req; r2->scheduled = 1; r2->state = H2_S_CLOS_REM; // rfc7540,l,489,491 req->err_code = 0; http_SetH(req->http, HTTP_HDR_PROTO, "HTTP/2.0"); /* Wait for PRISM response */ hs = HTC_RxStuff(h2->htc, H2_prism_complete, NULL, NULL, NAN, h2->sess->t_idle + cache_param->timeout_idle, NAN, sizeof H2_prism); if (hs != HTC_S_COMPLETE) { VSLb(h2->vsl, SLT_Debug, "H2: No/Bad OU PRISM (hs=%d)", hs); r2->scheduled = 0; h2_del_req(wrk, r2); return (0); } if (Pool_Task(wrk->pool, req->task, TASK_QUEUE_REQ)) { r2->scheduled = 0; h2_del_req(wrk, r2); VSLb(h2->vsl, SLT_Debug, "H2: No Worker-threads"); return (0); } return (1); } /********************************************************************** */ #define H2_PU_MARKER 1 #define H2_OU_MARKER 2 void H2_PU_Sess(struct worker *wrk, struct sess *sp, struct req *req) { VSLb(req->vsl, SLT_Debug, "H2 Prior Knowledge Upgrade"); req->err_code = H2_PU_MARKER; SES_SetTransport(wrk, sp, req, &HTTP2_transport); } void H2_OU_Sess(struct worker *wrk, struct sess *sp, struct req *req) { VSLb(req->vsl, SLT_Debug, "H2 Optimistic Upgrade"); req->err_code = H2_OU_MARKER; SES_SetTransport(wrk, sp, req, &HTTP2_transport); } static void v_matchproto_(task_func_t) h2_new_session(struct worker *wrk, void *arg) { struct req *req; struct sess *sp; struct h2_sess h2s; struct h2_sess *h2; struct h2_req *r2, *r22; int again; uint8_t settings[48]; struct h2h_decode decode; size_t l; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CAST_OBJ_NOTNULL(req, arg, REQ_MAGIC); sp = req->sp; CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); if (wrk->wpriv->vcl) VCL_Rel(&wrk->wpriv->vcl); assert(req->transport == &HTTP2_transport); assert (req->err_code == H2_PU_MARKER 
|| req->err_code == H2_OU_MARKER); h2 = h2_init_sess(sp, &h2s, req->err_code == H2_PU_MARKER ? req : NULL, &decode); h2->req0 = h2_new_req(h2, 0, NULL); AZ(h2->htc->priv); h2->htc->priv = h2; AZ(wrk->vsl); wrk->vsl = h2->vsl; if (req->err_code == H2_OU_MARKER && !h2_ou_session(wrk, h2, req)) { assert(h2->refcnt == 1); h2_del_req(wrk, h2->req0); h2_del_sess(wrk, h2, SC_RX_JUNK); wrk->vsl = NULL; return; } assert(HTC_S_COMPLETE == H2_prism_complete(h2->htc)); HTC_RxPipeline(h2->htc, h2->htc->rxbuf_b + sizeof(H2_prism)); HTC_RxInit(h2->htc, h2->ws); AN(WS_Reservation(h2->ws)); VSLb(h2->vsl, SLT_Debug, "H2: Got pu PRISM"); THR_SetRequest(h2->srq); AN(WS_Reservation(h2->ws)); l = h2_enc_settings(&h2->local_settings, settings, sizeof (settings)); AN(WS_Reservation(h2->ws)); H2_Send_Get(wrk, h2, h2->req0); AN(WS_Reservation(h2->ws)); H2_Send_Frame(wrk, h2, H2_F_SETTINGS, H2FF_NONE, l, 0, settings); AN(WS_Reservation(h2->ws)); H2_Send_Rel(h2, h2->req0); AN(WS_Reservation(h2->ws)); /* and off we go... */ h2->cond = &wrk->cond; while (h2_rxframe(wrk, h2)) { HTC_RxInit(h2->htc, h2->ws); if (WS_Overflowed(h2->ws)) { VSLb(h2->vsl, SLT_Debug, "H2: Empty Rx Workspace"); h2->error = H2CE_INTERNAL_ERROR; break; } AN(WS_Reservation(h2->ws)); } AN(h2->error); /* Delete all idle streams */ VSLb(h2->vsl, SLT_Debug, "H2 CLEANUP %s", h2->error->name); Lck_Lock(&h2->sess->mtx); VTAILQ_FOREACH(r2, &h2->streams, list) { if (r2->error == 0) r2->error = h2->error; if (r2->cond != NULL) PTOK(pthread_cond_signal(r2->cond)); } PTOK(pthread_cond_broadcast(h2->winupd_cond)); Lck_Unlock(&h2->sess->mtx); while (1) { again = 0; VTAILQ_FOREACH_SAFE(r2, &h2->streams, list, r22) { if (r2 != h2->req0) { h2_kill_req(wrk, h2, r2, h2->error); again++; } } if (!again) break; Lck_Lock(&h2->sess->mtx); VTAILQ_FOREACH(r2, &h2->streams, list) VSLb(h2->vsl, SLT_Debug, "ST %u %d", r2->stream, r2->state); (void)Lck_CondWaitTimeout(h2->cond, &h2->sess->mtx, .1); Lck_Unlock(&h2->sess->mtx); } h2->cond = NULL; assert(h2->refcnt == 1); h2_del_req(wrk, h2->req0); h2_del_sess(wrk, h2, h2->error->reason); wrk->vsl = NULL; } static int v_matchproto_(vtr_poll_f) h2_poll(struct req *req) { struct h2_req *r2; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CAST_OBJ_NOTNULL(r2, req->transport_priv, H2_REQ_MAGIC); return (r2->error ? -1 : 1); } struct transport HTTP2_transport = { .name = "HTTP/2", .magic = TRANSPORT_MAGIC, .deliver = h2_deliver, .minimal_response = h2_minimal_response, .new_session = h2_new_session, .req_body = h2_req_body, .req_fail = h2_req_fail, .sess_panic = h2_sess_panic, .poll = h2_poll, }; varnish-7.5.0/bin/varnishd/mgt/000077500000000000000000000000001457605730600163675ustar00rootroot00000000000000varnish-7.5.0/bin/varnishd/mgt/mgt.h000066400000000000000000000160311457605730600173300ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #ifdef MGT_MGT_H #error "Multiple includes of mgt/mgt.h" #endif #define MGT_MGT_H #include #include #include "vdef.h" #include "miniobj.h" #include "vas.h" #include "vcs.h" #include "vqueue.h" #include "vsb.h" #include "common/common_param.h" struct vjsn; struct vsc_seg; struct vsmw_cluster; #include "VSC_mgt.h" struct cli; struct parspec; struct vcc; struct vclprog; extern struct vev_root *mgt_evb; extern unsigned d_flag; extern int exit_status; /* builtin_vcl.c */ extern const char * const builtin_vcl; /* mgt_acceptor.c */ void MAC_Arg(const char *); int MAC_reopen_sockets(void); /* mgt_child.c */ void MCH_Init(void); int MCH_Running(void); void MCH_Stop_Child(void); int MCH_Start_Child(void); void MCH_TrackHighFd(int fd); void MCH_Cli_Fail(void); /* mgt_cli.c */ extern struct VCLS *mgt_cls; typedef int mgt_cli_close_f(void *priv); void mgt_cli_setup(int fdi, int fdo, int auth, const char *ident, mgt_cli_close_f *close_func, void *priv); int mgt_cli_askchild(unsigned *status, char **resp, const char *fmt, ...) 
v_printflike_(3, 4); unsigned mgt_cli_start_child(int fd, double tmo); void mgt_cli_stop_child(void); void mgt_cli_telnet(const char *T_arg); void mgt_cli_master(const char *M_arg); void mgt_cli_secret(const char *S_arg); void mgt_cli_close_all(void); void mgt_DumpRstCli(void); void mgt_cli_init_cls(void); #define MCF_NOAUTH 0 /* NB: zero disables here-documents */ #define MCF_AUTH 16 /* mgt_jail.c */ enum jail_master_e { JAIL_MASTER_LOW = 0, JAIL_MASTER_SYSTEM, JAIL_MASTER_FILE, JAIL_MASTER_STORAGE, JAIL_MASTER_PRIVPORT, JAIL_MASTER_KILL, #define JAIL_SUBPROC (JAIL_MASTER_KILL + 1) }; enum jail_subproc_e { JAIL_SUBPROC_VCC = JAIL_SUBPROC, JAIL_SUBPROC_CC, JAIL_SUBPROC_VCLLOAD, JAIL_SUBPROC_WORKER, #define JAIL_LIMIT (JAIL_SUBPROC_WORKER + 1) }; #define ASSERT_JAIL_MASTER(x) assert((x) < JAIL_SUBPROC) #define ASSERT_JAIL_SUBPROC(x) do { \ assert((x) >= JAIL_SUBPROC); \ assert((x) < JAIL_LIMIT); \ } while (0) enum jail_fixfd_e { JAIL_FIXFD_FILE, JAIL_FIXFD_VSMMGT, JAIL_FIXFD_VSMWRK, }; typedef int jail_init_f(char **); typedef void jail_master_f(enum jail_master_e); typedef void jail_subproc_f(enum jail_subproc_e); typedef int jail_make_dir_f(const char *, const char *, struct vsb *); typedef void jail_fixfd_f(int, enum jail_fixfd_e); struct jail_tech { unsigned magic; #define JAIL_TECH_MAGIC 0x4d00fa4d const char *name; jail_init_f *init; jail_master_f *master; jail_subproc_f *subproc; jail_make_dir_f *make_workdir; jail_make_dir_f *make_subdir; jail_fixfd_f *fixfd; }; void VJ_Init(const char *); void VJ_master(enum jail_master_e); void VJ_subproc(enum jail_subproc_e); int VJ_make_workdir(const char *); int VJ_make_subdir(const char *, const char *, struct vsb *); void VJ_fix_fd(int, enum jail_fixfd_e); void VJ_unlink(const char *, int); void VJ_rmdir(const char *); extern const struct jail_tech jail_tech_unix; extern const struct jail_tech jail_tech_solaris; /* mgt_main.c */ extern struct vsb *vident; extern struct VSC_mgt *VSC_C_mgt; struct choice { const char *name; const void *ptr; }; extern const char C_ERR[]; // Things are not as they should be extern const char C_INFO[]; // Normal stuff, keep a record for later extern const char C_DEBUG[]; // More detail than you'd normally want extern const char C_SECURITY[]; // Security issues extern const char C_CLI[]; // CLI traffic between master and child extern int complain_to_stderr; /* mgt_param.c */ void MCF_InitParams(struct cli *); enum mcf_which_e { MCF_DEFAULT = 32, MCF_MINIMUM = 33, MCF_MAXIMUM = 34, }; void MCF_ParamConf(enum mcf_which_e, const char *param, const char *, ...) v_printflike_(3, 4); void MCF_ParamSet(struct cli *, const char *param, const char *val); void MCF_ParamProtect(struct cli *, const char *arg); void MCF_DumpRstParam(void); extern struct params mgt_param; /* mgt_shmem.c */ void mgt_SHM_Init(void); void mgt_SHM_static_alloc(const void *, ssize_t size, const char *category, const char *ident); void mgt_SHM_Create(void); void mgt_SHM_Destroy(int keep); void mgt_SHM_ChildNew(void); void mgt_SHM_ChildDestroy(void); /* mgt_param_tcp.c */ void MCF_TcpParams(void); /* mgt_util.c */ char *mgt_HostName(void); void mgt_ProcTitle(const char *comp); void mgt_DumpRstVsl(void); struct vsb *mgt_BuildVident(void); void MGT_Complain(const char *, const char *, ...) 
v_printflike_(2, 3); const void *MGT_Pick(const struct choice *, const char *, const char *); char **MGT_NamedArg(const char *, const char **, const char *); /* stevedore_mgt.c */ extern const char *mgt_stv_h2_rxbuf; void STV_Config(const char *spec); void STV_Config_Final(void); void STV_Init(void); /* mgt_vcc.c */ void mgt_DumpBuiltin(void); char *mgt_VccCompile(struct cli *, struct vclprog *, const char *vclname, const char *vclsrc, const char *vclsrcfile, int C_flag); void mgt_vcl_init(void); void mgt_vcl_startup(struct cli *, const char *vclsrc, const char *origin, const char *vclname, int Cflag); int mgt_push_vcls(struct cli *, unsigned *status, char **p); const char *mgt_has_vcl(void); extern char *mgt_cc_cmd; extern char *mgt_cc_cmd_def; extern char *mgt_cc_warn; extern const char *mgt_vcl_path; extern const char *mgt_vmod_path; #if defined(PTHREAD_CANCELED) || defined(PTHREAD_MUTEX_DEFAULT) #error "Keep pthreads out of in manager process" #endif #define MGT_FEATURE(x) COM_FEATURE(mgt_param.feature_bits, x) #define MGT_EXPERIMENT(x) COM_EXPERIMENT(mgt_param.experimental_bits, x) #define MGT_DO_DEBUG(x) COM_DO_DEBUG(mgt_param.debug_bits, x) #define MGT_VCC_FEATURE(x) COM_VCC_FEATURE(mgt_param.vcc_feature_bits, x) varnish-7.5.0/bin/varnishd/mgt/mgt_acceptor.c000066400000000000000000000224021457605730600212020ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * Acceptor socket management */ #include "config.h" #include #include #include #include #include #include #include #include #include #include #include "mgt/mgt.h" #include "common/heritage.h" #include "vav.h" #include "vcli_serve.h" #include "vsa.h" #include "vss.h" #include "vtcp.h" #include "vus.h" struct listen_arg { unsigned magic; #define LISTEN_ARG_MAGIC 0xbb2fc333 VTAILQ_ENTRY(listen_arg) list; const char *endpoint; const char *name; VTAILQ_HEAD(,listen_sock) socks; const struct transport *transport; const struct uds_perms *perms; }; struct uds_perms { unsigned magic; #define UDS_PERMS_MAGIC 0x84fb5635 mode_t mode; uid_t uid; gid_t gid; }; static VTAILQ_HEAD(,listen_arg) listen_args = VTAILQ_HEAD_INITIALIZER(listen_args); static int mac_vus_bind(void *priv, const struct sockaddr_un *uds) { return (VUS_bind(uds, priv)); } static int mac_opensocket(struct listen_sock *ls) { int fail; const char *err; CHECK_OBJ_NOTNULL(ls, LISTEN_SOCK_MAGIC); if (ls->sock > 0) { MCH_Fd_Inherit(ls->sock, NULL); closefd(&ls->sock); } if (!ls->uds) ls->sock = VTCP_bind(ls->addr, NULL); else ls->sock = VUS_resolver(ls->endpoint, mac_vus_bind, NULL, &err); fail = errno; if (ls->sock < 0) { AN(fail); return (fail); } if (ls->perms != NULL) { CHECK_OBJ(ls->perms, UDS_PERMS_MAGIC); assert(ls->uds); errno = 0; if (ls->perms->mode != 0 && chmod(ls->endpoint, ls->perms->mode) != 0) return (errno); if (chown(ls->endpoint, ls->perms->uid, ls->perms->gid) != 0) return (errno); } MCH_Fd_Inherit(ls->sock, "sock"); return (0); } /*===================================================================== * Reopen the accept sockets to get rid of listen status. * returns the highest errno encountered, 0 for success */ int MAC_reopen_sockets(void) { struct listen_sock *ls; int err, fail = 0; VTAILQ_FOREACH(ls, &heritage.socks, list) { VJ_master(JAIL_MASTER_PRIVPORT); err = mac_opensocket(ls); VJ_master(JAIL_MASTER_LOW); if (err == 0) continue; fail = vmax(fail, err); MGT_Complain(C_ERR, "Could not reopen listen socket %s: %s", ls->endpoint, VAS_errtxt(err)); } return (fail); } /*--------------------------------------------------------------------*/ static struct listen_sock * mk_listen_sock(const struct listen_arg *la, const struct suckaddr *sa) { struct listen_sock *ls; int fail; ALLOC_OBJ(ls, LISTEN_SOCK_MAGIC); AN(ls); ls->sock = -1; ls->addr = VSA_Clone(sa); AN(ls->addr); REPLACE(ls->endpoint, la->endpoint); ls->name = la->name; ls->transport = la->transport; ls->perms = la->perms; ls->uds = VUS_is(la->endpoint); VJ_master(JAIL_MASTER_PRIVPORT); fail = mac_opensocket(ls); VJ_master(JAIL_MASTER_LOW); if (fail) { VSA_free(&ls->addr); free(ls->endpoint); FREE_OBJ(ls); if (fail != EAFNOSUPPORT) ARGV_ERR("Could not get socket %s: %s\n", la->endpoint, VAS_errtxt(fail)); return (NULL); } return (ls); } static int v_matchproto_(vss_resolved_f) mac_tcp(void *priv, const struct suckaddr *sa) { struct listen_arg *la; struct listen_sock *ls; char abuf[VTCP_ADDRBUFSIZE], pbuf[VTCP_PORTBUFSIZE]; char nbuf[VTCP_ADDRBUFSIZE+VTCP_PORTBUFSIZE+2]; CAST_OBJ_NOTNULL(la, priv, LISTEN_ARG_MAGIC); VTAILQ_FOREACH(ls, &heritage.socks, list) { if (!ls->uds && !VSA_Compare(sa, ls->addr)) ARGV_ERR("-a arguments %s and %s have same address\n", ls->endpoint, la->endpoint); } ls = mk_listen_sock(la, sa); if (ls == NULL) return (0); AZ(ls->uds); if (VSA_Port(ls->addr) == 0) { /* * If the argv port number is zero, we adopt whatever * port number this VTCP_bind() found us, as if * it was specified by the argv. 
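	 * The endpoint string is rewritten below to the address:port that
	 * was actually bound, so both the child process and the log output
	 * refer to the effective listen address rather than port zero.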
*/ VSA_free(&ls->addr); ls->addr = VTCP_my_suckaddr(ls->sock); VTCP_myname(ls->sock, abuf, sizeof abuf, pbuf, sizeof pbuf); if (VSA_Get_Proto(sa) == AF_INET6) bprintf(nbuf, "[%s]:%s", abuf, pbuf); else bprintf(nbuf, "%s:%s", abuf, pbuf); REPLACE(ls->endpoint, nbuf); } VTAILQ_INSERT_TAIL(&la->socks, ls, arglist); VTAILQ_INSERT_TAIL(&heritage.socks, ls, list); return (0); } static int v_matchproto_(vus_resolved_f) mac_uds(void *priv, const struct sockaddr_un *uds) { struct listen_arg *la; struct listen_sock *ls; CAST_OBJ_NOTNULL(la, priv, LISTEN_ARG_MAGIC); (void) uds; VTAILQ_FOREACH(ls, &heritage.socks, list) { if (ls->uds && strcmp(ls->endpoint, la->endpoint) == 0) ARGV_ERR("-a arguments %s and %s have same address\n", ls->endpoint, la->endpoint); } ls = mk_listen_sock(la, bogo_ip); if (ls == NULL) return (0); AN(ls->uds); VTAILQ_INSERT_TAIL(&la->socks, ls, arglist); VTAILQ_INSERT_TAIL(&heritage.socks, ls, list); return (0); } void MAC_Arg(const char *spec) { char **av; struct listen_arg *la; const char *err; int error; const struct transport *xp = NULL; const char *name; char name_buf[8]; static unsigned seq = 0; struct passwd *pwd = NULL; struct group *grp = NULL; mode_t mode = 0; struct uds_perms *perms; av = MGT_NamedArg(spec, &name, "-a"); AN(av); ALLOC_OBJ(la, LISTEN_ARG_MAGIC); AN(la); VTAILQ_INIT(&la->socks); VTAILQ_INSERT_TAIL(&listen_args, la, list); la->endpoint = av[1]; if (name == NULL) { bprintf(name_buf, "a%u", seq++); name = strdup(name_buf); AN(name); } la->name = name; if (*la->endpoint != '/' && strchr(la->endpoint, '/') != NULL) ARGV_ERR("Unix domain socket addresses must be" " absolute paths in -a (%s)\n", la->endpoint); if (VUS_is(la->endpoint) && heritage.min_vcl_version < 41) heritage.min_vcl_version = 41; for (int i = 2; av[i] != NULL; i++) { char *eq, *val; int len; if ((eq = strchr(av[i], '=')) == NULL) { if (xp != NULL) ARGV_ERR("Too many protocol sub-args" " in -a (%s)\n", av[i]); xp = XPORT_Find(av[i]); if (xp == NULL) ARGV_ERR("Unknown protocol '%s'\n", av[i]); continue; } if (la->endpoint[0] != '/') ARGV_ERR("Invalid sub-arg %s" " in -a\n", av[i]); val = eq + 1; len = eq - av[i]; assert(len >= 0); if (len == 0) ARGV_ERR("Invalid sub-arg %s in -a\n", av[i]); if (strncmp(av[i], "user", len) == 0) { if (pwd != NULL) ARGV_ERR("Too many user sub-args in -a (%s)\n", av[i]); pwd = getpwnam(val); if (pwd == NULL) ARGV_ERR("Unknown user %s in -a\n", val); continue; } if (strncmp(av[i], "group", len) == 0) { if (grp != NULL) ARGV_ERR("Too many group sub-args in -a (%s)\n", av[i]); grp = getgrnam(val); if (grp == NULL) ARGV_ERR("Unknown group %s in -a\n", val); continue; } if (strncmp(av[i], "mode", len) == 0) { long m; char *p; if (mode != 0) ARGV_ERR("Too many mode sub-args in -a (%s)\n", av[i]); if (*val == '\0') ARGV_ERR("Empty mode sub-arg in -a\n"); errno = 0; m = strtol(val, &p, 8); if (*p != '\0') ARGV_ERR("Invalid mode sub-arg %s in -a\n", val); if (errno) ARGV_ERR("Cannot parse mode sub-arg %s in -a: " "%s\n", val, VAS_errtxt(errno)); if (m <= 0 || m > 0777) ARGV_ERR("Mode sub-arg %s out of range in -a\n", val); mode = (mode_t) m; continue; } ARGV_ERR("Invalid sub-arg %s in -a\n", av[i]); } if (xp == NULL) xp = XPORT_Find("http"); AN(xp); la->transport = xp; if (pwd != NULL || grp != NULL || mode != 0) { ALLOC_OBJ(perms, UDS_PERMS_MAGIC); AN(perms); if (pwd != NULL) perms->uid = pwd->pw_uid; else perms->uid = (uid_t) -1; if (grp != NULL) perms->gid = grp->gr_gid; else perms->gid = (gid_t) -1; perms->mode = mode; la->perms = perms; } else AZ(la->perms); if 
(VUS_is(la->endpoint)) error = VUS_resolver(av[1], mac_uds, la, &err); else error = VSS_resolver(av[1], "80", mac_tcp, la, &err); if (VTAILQ_EMPTY(&la->socks) || error) ARGV_ERR("Got no socket(s) for %s\n", av[1]); VAV_Free(av); } varnish-7.5.0/bin/varnishd/mgt/mgt_child.c000066400000000000000000000462001457605730600204670ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * The mechanics of handling the child process */ #include "config.h" #include #include #include #include #include #include #include #include #include #include #include #include "mgt.h" #include "vapi/vsig.h" #include "vbm.h" #include "vcli_serve.h" #include "vev.h" #include "vfil.h" #include "vlu.h" #include "vtim.h" #include "common/heritage.h" static pid_t child_pid = -1; static struct vbitmap *fd_map; static int child_cli_fd = -1; static int child_output = -1; static enum { CH_STOPPED = 0, CH_STARTING = 1, CH_RUNNING = 2, CH_STOPPING = 3, CH_DIED = 4 } child_state = CH_STOPPED; static const char * const ch_state[] = { [CH_STOPPED] = "stopped", [CH_STARTING] = "starting", [CH_RUNNING] = "running", [CH_STOPPING] = "stopping", [CH_DIED] = "died, (restarting)", }; static struct vev *ev_poker; static struct vev *ev_listen; static struct vlu *child_std_vlu; static struct vsb *child_panic = NULL; static void mgt_reap_child(void); static int kill_child(void); /*===================================================================== * Panic string evacuation and handling */ static void mgt_panic_record(pid_t r) { char time_str[30]; if (child_panic != NULL) VSB_destroy(&child_panic); child_panic = VSB_new_auto(); AN(child_panic); VTIM_format(VTIM_real(), time_str); VSB_printf(child_panic, "Panic at: %s\n", time_str); VSB_quote(child_panic, heritage.panic_str, strnlen(heritage.panic_str, heritage.panic_str_len), VSB_QUOTE_NONL); AZ(VSB_finish(child_panic)); MGT_Complain(C_ERR, "Child (%jd) %s", (intmax_t)r, VSB_data(child_panic)); } static void mgt_panic_clear(void) { VSB_destroy(&child_panic); } static void cli_panic_show(struct cli *cli, const char * const *av, int json) { if (!child_panic) { VCLI_SetResult(cli, CLIS_CANT); VCLI_Out(cli, "Child has not panicked or panic has been cleared"); return; } if (!json) { VCLI_Out(cli, "%s\n", VSB_data(child_panic)); return; } VCLI_JSON_begin(cli, 2, av); VCLI_Out(cli, ",\n"); VCLI_JSON_str(cli, VSB_data(child_panic)); VCLI_JSON_end(cli); } static void v_matchproto_(cli_func_t) mch_cli_panic_show(struct cli *cli, const char * const *av, void *priv) { (void)priv; cli_panic_show(cli, av, 0); } static void v_matchproto_(cli_func_t) mch_cli_panic_show_json(struct cli *cli, const char * const *av, void *priv) { (void)priv; cli_panic_show(cli, av, 1); } static void v_matchproto_(cli_func_t) mch_cli_panic_clear(struct cli *cli, const char * const *av, void *priv) { (void)priv; if (av[2] != NULL && strcmp(av[2], "-z")) { VCLI_SetResult(cli, CLIS_PARAM); VCLI_Out(cli, "Unknown parameter \"%s\".", av[2]); return; } else if (av[2] != NULL) { VSC_C_mgt->child_panic = 0; if (child_panic == NULL) return; } if (child_panic == NULL) { VCLI_SetResult(cli, CLIS_CANT); VCLI_Out(cli, "No panic to clear"); return; } mgt_panic_clear(); } /*===================================================================== * Track the highest file descriptor the parent knows is being used. * * This allows the child process to clean/close only a small fraction * of the possible file descriptors after exec(2). * * This is likely to a bit on the low side, as libc and other libraries * has a tendency to cache file descriptors (syslog, resolver, etc.) * so we add a margin of 10 fds. 
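 * Below, the closing margin is CLOSE_FD_UP_TO (mgt_max_fd + 10) and the
 * check margin is CHECK_FD_UP_TO (another 10 above it); the child closes
 * or nulls everything up to the former, apart from the descriptors it is
 * meant to inherit, and asserts that nothing is open between the two.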
* * For added safety, we check that we see no file descriptor open for * another margin above the limit for which we close by design */ static int mgt_max_fd; #define CLOSE_FD_UP_TO (mgt_max_fd + 10) #define CHECK_FD_UP_TO (CLOSE_FD_UP_TO + 10) void MCH_TrackHighFd(int fd) { /* * Assert > 0, to catch bogus opens, we know where stdin goes * in the master process. */ assert(fd > 0); mgt_max_fd = vmax(mgt_max_fd, fd); } /*-------------------------------------------------------------------- * Keep track of which filedescriptors the child should inherit and * which should be closed after fork() */ void MCH_Fd_Inherit(int fd, const char *what) { assert(fd >= 0); // XXX why? if (fd > 0) MCH_TrackHighFd(fd); if (fd_map == NULL) fd_map = vbit_new(128); AN(fd_map); if (what != NULL) vbit_set(fd_map, fd); else vbit_clr(fd_map, fd); } /*===================================================================== * Listen to stdout+stderr from the child */ static const char *whining_child = C_ERR; static int v_matchproto_(vlu_f) child_line(void *priv, const char *p) { (void)priv; MGT_Complain(whining_child, "Child (%jd) said %s", (intmax_t)child_pid, p); return (0); } /*-------------------------------------------------------------------- * NB: Notice cleanup call from mgt_reap_child() */ static int v_matchproto_(vev_cb_f) child_listener(const struct vev *e, int what) { if ((what & ~VEV__RD) || VLU_Fd(child_std_vlu, child_output)) { ev_listen = NULL; if (e != NULL) mgt_reap_child(); return (1); } return (0); } /*===================================================================== * Periodically poke the child, to see that it still lives */ static int v_matchproto_(vev_cb_f) child_poker(const struct vev *e, int what) { char *r = NULL; unsigned status; (void)e; (void)what; if (child_state != CH_RUNNING) return (1); if (child_pid < 0) return (0); if (mgt_cli_askchild(&status, &r, "ping\n") || strncmp("PONG ", r, 5)) { MGT_Complain(C_ERR, "Unexpected reply from ping: %u %s", status, r); if (status != CLIS_COMMS) MCH_Cli_Fail(); } free(r); return (0); } /*===================================================================== * Launch the child process */ #define mgt_launch_err(cli, status, ...) do { \ MGT_Complain(C_ERR, __VA_ARGS__); \ if (cli == NULL) \ break; \ VCLI_Out(cli, __VA_ARGS__); \ VCLI_SetResult(cli, status); \ } while (0) static void mgt_launch_child(struct cli *cli) { pid_t pid; unsigned u; char *p; struct vev *e; int i, cp[2]; struct rlimit rl[1]; vtim_dur dstart; int bstart; vtim_mono t0; if (child_state != CH_STOPPED && child_state != CH_DIED) return; child_state = CH_STARTING; /* Open pipe for mgt->child CLI */ AZ(socketpair(AF_UNIX, SOCK_STREAM, 0, cp)); heritage.cli_fd = cp[0]; assert(cp[0] > STDERR_FILENO); // See #2782 assert(cp[1] > STDERR_FILENO); MCH_Fd_Inherit(heritage.cli_fd, "cli_fd"); child_cli_fd = cp[1]; /* * Open pipe for child stdout/err * NB: not inherited, because we dup2() it to stdout/stderr in child */ AZ(pipe(cp)); heritage.std_fd = cp[1]; child_output = cp[0]; mgt_SHM_ChildNew(); AN(heritage.param); AN(heritage.panic_str); VJ_master(JAIL_MASTER_SYSTEM); if ((pid = fork()) < 0) { VJ_master(JAIL_MASTER_LOW); perror("Could not fork child"); exit(1); // XXX Harsh ? 
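	/*
	 * In the pid == 0 branch below the child redirects stdin/out/err,
	 * closes (and replaces with /dev/null) every descriptor it is not
	 * meant to inherit, enters the worker jail and calls child_main();
	 * it exits when child_main() returns.
	 */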
} if (pid == 0) { if (MGT_FEATURE(FEATURE_NO_COREDUMP)) { memset(rl, 0, sizeof *rl); rl->rlim_cur = 0; AZ(setrlimit(RLIMIT_CORE, rl)); } /* Redirect stdin/out/err */ VFIL_null_fd(STDIN_FILENO); assert(dup2(heritage.std_fd, STDOUT_FILENO) == STDOUT_FILENO); assert(dup2(heritage.std_fd, STDERR_FILENO) == STDERR_FILENO); setbuf(stdout, NULL); setbuf(stderr, NULL); printf("Child starts\n"); /* * Close all FDs the child shouldn't know about * * We cannot just close these filedescriptors, some random * library routine might miss it later on and wantonly close * a FD we use at that point in time. (See bug #1841). * We close the FD and replace it with /dev/null instead, * That prevents security leakage, and gives the library * code a valid FD to close when it discovers the changed * circumstances. */ closelog(); for (i = STDERR_FILENO + 1; i <= CLOSE_FD_UP_TO; i++) { if (vbit_test(fd_map, i)) continue; if (close(i) == 0) VFIL_null_fd(i); } for (i = CLOSE_FD_UP_TO + 1; i <= CHECK_FD_UP_TO; i++) { assert(close(i) == -1); assert(errno == EBADF); } mgt_ProcTitle("Child"); heritage.cls = mgt_cls; heritage.ident = VSB_data(vident) + 1; vext_load(); STV_Init(); VJ_subproc(JAIL_SUBPROC_WORKER); /* * We pass these two params because child_main needs them * well before it has found its own param struct. */ child_main(mgt_param.sigsegv_handler, mgt_param.wthread_stacksize); /* * It would be natural to clean VSMW up here, but it is apt * to fail in some scenarios because of the fall-back * "rm -rf" in mgt_SHM_ChildDestroy() which is there to * catch the cases were we don't get here. */ // VSMW_Destroy(&heritage.proc_vsmw); exit(0); } VJ_master(JAIL_MASTER_LOW); assert(pid > 1); MGT_Complain(C_DEBUG, "Child (%jd) Started", (intmax_t)pid); VSC_C_mgt->child_start++; /* Close stuff the child got */ closefd(&heritage.std_fd); MCH_Fd_Inherit(heritage.cli_fd, NULL); closefd(&heritage.cli_fd); child_std_vlu = VLU_New(child_line, NULL, 0); AN(child_std_vlu); /* Wait for cache/cache_cli.c::CLI_Run() to check in */ bstart = mgt_param.startup_timeout >= mgt_param.cli_timeout; dstart = bstart ? mgt_param.startup_timeout : mgt_param.cli_timeout; t0 = VTIM_mono(); u = mgt_cli_start_child(child_cli_fd, dstart); if (u != CLIS_OK) { assert(u == CLIS_COMMS); if (VTIM_mono() - t0 < dstart) mgt_launch_err(cli, u, "Child failed on launch "); else mgt_launch_err(cli, u, "Child failed on launch " "within %s_timeout=%.2fs%s", bstart ? "startup" : "cli", dstart, bstart ? 
"" : " (tip: set startup_timeout)"); child_pid = pid; (void)kill_child(); mgt_reap_child(); child_state = CH_STOPPED; return; } else { assert(u == CLIS_OK); fprintf(stderr, "Child launched OK\n"); } whining_child = C_INFO; AZ(ev_listen); e = VEV_Alloc(); XXXAN(e); e->fd = child_output; e->fd_flags = VEV__RD; e->name = "Child listener"; e->callback = child_listener; AZ(VEV_Start(mgt_evb, e)); ev_listen = e; AZ(ev_poker); if (mgt_param.ping_interval > 0) { e = VEV_Alloc(); XXXAN(e); e->timeout = mgt_param.ping_interval; e->callback = child_poker; e->name = "child poker"; AZ(VEV_Start(mgt_evb, e)); ev_poker = e; } child_pid = pid; if (mgt_push_vcls(cli, &u, &p)) { mgt_launch_err(cli, u, "Child (%jd) Pushing vcls failed:\n%s", (intmax_t)child_pid, p); free(p); MCH_Stop_Child(); return; } if (mgt_cli_askchild(&u, &p, "start\n")) { mgt_launch_err(cli, u, "Child (%jd) Acceptor start failed:\n%s", (intmax_t)child_pid, p); free(p); MCH_Stop_Child(); return; } free(p); child_state = CH_RUNNING; } /*===================================================================== * Cleanup when child dies. */ static int kill_child(void) { int i, error; VJ_master(JAIL_MASTER_KILL); i = kill(child_pid, SIGQUIT); error = errno; VJ_master(JAIL_MASTER_LOW); errno = error; return (i); } static void mgt_reap_child(void) { int i; int status = 0xffff; struct vsb *vsb; pid_t r = 0; assert(child_pid != -1); /* * Close the CLI connections * This signals orderly shut down to child */ mgt_cli_stop_child(); if (child_cli_fd >= 0) closefd(&child_cli_fd); /* Stop the poker */ if (ev_poker != NULL) { VEV_Stop(mgt_evb, ev_poker); free(ev_poker); ev_poker = NULL; } /* Stop the listener */ if (ev_listen != NULL) { VEV_Stop(mgt_evb, ev_listen); free(ev_listen); ev_listen = NULL; } /* Compose obituary */ vsb = VSB_new_auto(); XXXAN(vsb); (void)VFIL_nonblocking(child_output); /* Wait for child to die */ for (i = 0; i < mgt_param.cli_timeout * 10; i++) { (void)child_listener(NULL, VEV__RD); r = waitpid(child_pid, &status, WNOHANG); if (r == child_pid) break; (void)usleep(100000); } if (r == 0) { VSB_printf(vsb, "Child (%jd) not dying (waitpid = %jd)," " killing\n", (intmax_t)child_pid, (intmax_t)r); /* Kick it Jim... */ (void)kill_child(); r = waitpid(child_pid, &status, 0); } if (r != child_pid) fprintf(stderr, "WAIT 0x%jd\n", (intmax_t)r); assert(r == child_pid); VSB_printf(vsb, "Child (%jd) %s", (intmax_t)r, status ? "died" : "ended"); if (WIFEXITED(status) && WEXITSTATUS(status)) { VSB_printf(vsb, " status=%d", WEXITSTATUS(status)); exit_status |= 0x20; if (WEXITSTATUS(status) == 1) VSC_C_mgt->child_exit++; else VSC_C_mgt->child_stop++; } if (WIFSIGNALED(status)) { VSB_printf(vsb, " signal=%d", WTERMSIG(status)); exit_status |= 0x40; VSC_C_mgt->child_died++; } #ifdef WCOREDUMP if (WCOREDUMP(status)) { VSB_cat(vsb, " (core dumped)"); if (!MGT_FEATURE(FEATURE_NO_COREDUMP)) exit_status |= 0x80; VSC_C_mgt->child_dump++; } #endif AZ(VSB_finish(vsb)); MGT_Complain(status ? C_ERR : C_INFO, "%s", VSB_data(vsb)); VSB_destroy(&vsb); /* Dispose of shared memory but evacuate panic messages first */ if (heritage.panic_str[0] != '\0') { mgt_panic_record(r); VSC_C_mgt->child_panic++; } mgt_SHM_ChildDestroy(); if (child_state == CH_RUNNING) child_state = CH_DIED; /* Pick up any stuff lingering on stdout/stderr */ (void)child_listener(NULL, VEV__RD); closefd(&child_output); VLU_Destroy(&child_std_vlu); child_pid = -1; MGT_Complain(C_DEBUG, "Child cleanup complete"); /* XXX number of retries? interval? 
*/ for (i = 0; i < 3; i++) { if (MAC_reopen_sockets() == 0) break; /* error already logged */ (void)sleep(1); } if (i == 3) { /* We failed to reopen our listening sockets. No choice * but to exit. */ MGT_Complain(C_ERR, "Could not reopen listening sockets. Exiting."); exit(1); } if (child_state == CH_DIED && mgt_param.auto_restart) mgt_launch_child(NULL); else if (child_state == CH_DIED) child_state = CH_STOPPED; else if (child_state == CH_STOPPING) child_state = CH_STOPPED; } /*===================================================================== * If CLI communications with the child process fails, there is nothing * for us to do but to drag it behind the barn and get it over with. * * The typical case is where the child process fails to return a reply * before the cli_timeout expires. This invalidates the CLI pipes for * all future use, as we don't know if the child was just slow and the * result gets piped later on, or if the child is catatonic. */ void MCH_Cli_Fail(void) { if (child_state != CH_RUNNING && child_state != CH_STARTING) return; if (child_pid < 0) return; if (kill_child() == 0) MGT_Complain(C_ERR, "Child (%jd) not responding to CLI," " killed it.", (intmax_t)child_pid); else MGT_Complain(C_ERR, "Failed to kill child with PID %jd: %s", (intmax_t)child_pid, VAS_errtxt(errno)); } /*===================================================================== * Controlled stop of child process * * Reaping the child asks for orderly shutdown */ void MCH_Stop_Child(void) { if (child_state != CH_RUNNING && child_state != CH_STARTING) return; child_state = CH_STOPPING; MGT_Complain(C_DEBUG, "Stopping Child"); mgt_reap_child(); } /*===================================================================== */ int MCH_Start_Child(void) { mgt_launch_child(NULL); if (child_state != CH_RUNNING) return (2); return (0); } /*==================================================================== * Query if the child is running */ int MCH_Running(void) { return (child_pid > 0); } /*===================================================================== * CLI commands */ static void v_matchproto_(cli_func_t) mch_pid(struct cli *cli, const char * const *av, void *priv) { (void)av; (void)priv; VCLI_Out(cli, "Master: %10jd\n", (intmax_t)getpid()); if (!MCH_Running()) return; VCLI_Out(cli, "Worker: %10jd\n", (intmax_t)child_pid); } static void v_matchproto_(cli_func_t) mch_pid_json(struct cli *cli, const char * const *av, void *priv) { (void)priv; VCLI_JSON_begin(cli, 2, av); VCLI_Out(cli, ",\n {\"master\": %jd", (intmax_t)getpid()); if (MCH_Running()) VCLI_Out(cli, ", \"worker\": %jd", (intmax_t)child_pid); VCLI_Out(cli, "}"); VCLI_JSON_end(cli); } static void v_matchproto_(cli_func_t) mch_cli_server_start(struct cli *cli, const char * const *av, void *priv) { const char *err; (void)av; (void)priv; if (child_state == CH_STOPPED) { err = mgt_has_vcl(); if (err == NULL) { mgt_launch_child(cli); } else { VCLI_SetResult(cli, CLIS_CANT); VCLI_Out(cli, "%s", err); } } else { VCLI_SetResult(cli, CLIS_CANT); VCLI_Out(cli, "Child in state %s", ch_state[child_state]); } } static void v_matchproto_(cli_func_t) mch_cli_server_stop(struct cli *cli, const char * const *av, void *priv) { (void)av; (void)priv; if (child_state == CH_RUNNING) { MCH_Stop_Child(); } else { VCLI_SetResult(cli, CLIS_CANT); VCLI_Out(cli, "Child in state %s", ch_state[child_state]); } } static void v_matchproto_(cli_func_t) mch_cli_server_status(struct cli *cli, const char * const *av, void *priv) { (void)av; (void)priv; VCLI_Out(cli, "Child in state 
%s", ch_state[child_state]); } static void v_matchproto_(cli_func_t) mch_cli_server_status_json(struct cli *cli, const char * const *av, void *priv) { (void)priv; VCLI_JSON_begin(cli, 2, av); VCLI_Out(cli, ", "); VCLI_JSON_str(cli, ch_state[child_state]); VCLI_JSON_end(cli); } static struct cli_proto cli_mch[] = { { CLICMD_SERVER_STATUS, "", mch_cli_server_status, mch_cli_server_status_json }, { CLICMD_SERVER_START, "", mch_cli_server_start }, { CLICMD_SERVER_STOP, "", mch_cli_server_stop }, { CLICMD_PANIC_SHOW, "", mch_cli_panic_show, mch_cli_panic_show_json }, { CLICMD_PANIC_CLEAR, "", mch_cli_panic_clear }, { CLICMD_PID, "", mch_pid, mch_pid_json }, { NULL } }; /*===================================================================== * This thread is the master thread in the management process. * The relatively simple task is to start and stop the child process * and to reincarnate it in case of trouble. */ void MCH_Init(void) { VCLS_AddFunc(mgt_cls, MCF_AUTH, cli_mch); } varnish-7.5.0/bin/varnishd/mgt/mgt_cli.c000066400000000000000000000405041457605730600201540ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * The management process' CLI handling */ #include "config.h" #include #include #include #include #include #include #include #include #include #include "mgt/mgt.h" #include "common/heritage.h" #include "vcli_serve.h" #include "vev.h" #include "vrnd.h" #include "vsa.h" #include "vss.h" #include "vtcp.h" #define CLI_CMD(U,l,s,h,d,m,M) \ const struct cli_cmd_desc CLICMD_##U[1] = {{ l, s, h, d, m, M }}; #include "tbl/cli_cmds.h" static const struct cli_cmd_desc *cmds[] = { #define CLI_CMD(U,l,s,h,d,m,M) CLICMD_##U, #include "tbl/cli_cmds.h" }; static const int ncmds = sizeof cmds / sizeof cmds[0]; static int cli_i = -1, cli_o = -1; struct VCLS *mgt_cls; static const char *secret_file; static struct vsb *cli_buf = NULL; /*--------------------------------------------------------------------*/ static void v_matchproto_(cli_func_t) mcf_banner(struct cli *cli, const char *const *av, void *priv) { (void)av; (void)priv; VCLI_Out(cli, "-----------------------------\n"); VCLI_Out(cli, "Varnish Cache CLI 1.0\n"); VCLI_Out(cli, "-----------------------------\n"); VCLI_Out(cli, "%s\n", VSB_data(vident) + 1); VCLI_Out(cli, "%s\n", VCS_String("V")); VCLI_Out(cli, "\n"); VCLI_Out(cli, "Type 'help' for command list.\n"); VCLI_Out(cli, "Type 'quit' to close CLI session.\n"); if (!MCH_Running()) VCLI_Out(cli, "Type 'start' to launch worker process.\n"); VCLI_SetResult(cli, CLIS_OK); } /*--------------------------------------------------------------------*/ static struct cli_proto cli_proto[] = { { CLICMD_BANNER, "", mcf_banner }, { NULL } }; /*--------------------------------------------------------------------*/ static void v_noreturn_ v_matchproto_(cli_func_t) mcf_panic(struct cli *cli, const char * const *av, void *priv) { (void)cli; (void)av; (void)priv; v_gcov_flush(); AZ(strcmp("", "You asked for it")); /* NOTREACHED */ abort(); } static struct cli_proto cli_debug[] = { { CLICMD_DEBUG_PANIC_MASTER, "d", mcf_panic }, { NULL } }; /*--------------------------------------------------------------------*/ static void v_matchproto_(cli_func_t) mcf_askchild(struct cli *cli, const char * const *av, void *priv) { int i; char *q; unsigned u; (void)priv; /* * Command not recognized in master, try cacher if it is * running. */ if (cli_o <= 0) { VCLI_SetResult(cli, CLIS_UNKNOWN); VCLI_Out(cli, "Unknown request in manager process " "(child not running).\n" "Type 'help' for more info."); return; } VSB_clear(cli_buf); for (i = 1; av[i] != NULL; i++) { VSB_quote(cli_buf, av[i], strlen(av[i]), 0); VSB_putc(cli_buf, ' '); } VSB_putc(cli_buf, '\n'); AZ(VSB_finish(cli_buf)); if (VSB_tofile(cli_buf, cli_o)) { VCLI_SetResult(cli, CLIS_COMMS); VCLI_Out(cli, "CLI communication error"); MCH_Cli_Fail(); return; } if (VCLI_ReadResult(cli_i, &u, &q, mgt_param.cli_timeout)) MCH_Cli_Fail(); VCLI_SetResult(cli, u); VCLI_Out(cli, "%s", q); free(q); } static const struct cli_cmd_desc CLICMD_WILDCARD[1] = {{ "*", "", "", "", 0, 999 }}; static struct cli_proto cli_askchild[] = { { CLICMD_WILDCARD, "h*", mcf_askchild, mcf_askchild }, { NULL } }; /*-------------------------------------------------------------------- * Ask the child something over CLI, return zero only if everything is * happy happy. */ int mgt_cli_askchild(unsigned *status, char **resp, const char *fmt, ...) 
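/*
 * Call-pattern sketch (mirrors the ping check in mgt_child.c's
 * child_poker; the variable names are illustrative only):
 *
 *	unsigned status;
 *	char *reply = NULL;
 *
 *	if (mgt_cli_askchild(&status, &reply, "ping\n") ||
 *	    strncmp("PONG ", reply, 5))
 *		MGT_Complain(C_ERR, "Unexpected ping reply: %u %s",
 *		    status, reply);
 *	free(reply);
 *
 * The formatted command must end in a newline (asserted below).
 * Returns zero when the child answers CLIS_OK or CLIS_TRUNCATED,
 * otherwise the CLI status code; *resp (if non-NULL) receives the
 * reply text and must be free()'d by the caller.
 */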
{ int i; va_list ap; unsigned u; AN(status); VSB_clear(cli_buf); if (resp != NULL) *resp = NULL; *status = 0; if (cli_i < 0 || cli_o < 0) { *status = CLIS_CANT; return (CLIS_CANT); } va_start(ap, fmt); AZ(VSB_vprintf(cli_buf, fmt, ap)); va_end(ap); AZ(VSB_finish(cli_buf)); i = VSB_len(cli_buf); assert(i > 0 && VSB_data(cli_buf)[i - 1] == '\n'); if (VSB_tofile(cli_buf, cli_o)) { *status = CLIS_COMMS; if (resp != NULL) *resp = strdup("CLI communication error"); MCH_Cli_Fail(); return (CLIS_COMMS); } if (VCLI_ReadResult(cli_i, &u, resp, mgt_param.cli_timeout)) MCH_Cli_Fail(); *status = u; return (u == CLIS_OK || u == CLIS_TRUNCATED ? 0 : u); } /*--------------------------------------------------------------------*/ unsigned mgt_cli_start_child(int fd, double tmo) { unsigned u; cli_i = fd; cli_o = fd; if (VCLI_ReadResult(cli_i, &u, NULL, tmo)) { return (CLIS_COMMS); } return (u); } /*--------------------------------------------------------------------*/ void mgt_cli_stop_child(void) { cli_i = -1; cli_o = -1; /* XXX: kick any users */ } /*-------------------------------------------------------------------- * Generate a random challenge */ static void mgt_cli_challenge(struct cli *cli) { size_t z; uint8_t u; AZ(VRND_RandomCrypto(cli->challenge, sizeof cli->challenge - 2)); for (z = 0; z < (sizeof cli->challenge) - 2; z++) { AZ(VRND_RandomCrypto(&u, sizeof u)); cli->challenge[z] = (u % 26) + 'a'; } cli->challenge[z++] = '\n'; cli->challenge[z] = '\0'; VCLI_Out(cli, "%s", cli->challenge); VCLI_Out(cli, "\nAuthentication required.\n"); VCLI_SetResult(cli, CLIS_AUTH); } /*-------------------------------------------------------------------- * Validate the authentication */ static void mcf_auth(struct cli *cli, const char *const *av, void *priv) { int fd; char buf[CLI_AUTH_RESPONSE_LEN + 1]; AN(av[2]); (void)priv; if (secret_file == NULL) { VCLI_Out(cli, "Secret file not configured\n"); VCLI_SetResult(cli, CLIS_CANT); return; } VJ_master(JAIL_MASTER_FILE); fd = open(secret_file, O_RDONLY); if (fd < 0) { VCLI_Out(cli, "Cannot open secret file (%s)\n", VAS_errtxt(errno)); VCLI_SetResult(cli, CLIS_CANT); VJ_master(JAIL_MASTER_LOW); return; } VJ_master(JAIL_MASTER_LOW); MCH_TrackHighFd(fd); VCLI_AuthResponse(fd, cli->challenge, buf); closefd(&fd); if (strcasecmp(buf, av[2])) { MGT_Complain(C_SECURITY, "CLI Authentication failure from %s", cli->ident); VCLI_SetResult(cli, CLIS_CLOSE); return; } cli->auth = MCF_AUTH; memset(cli->challenge, 0, sizeof cli->challenge); VCLI_SetResult(cli, CLIS_OK); mcf_banner(cli, av, priv); } /*--------------------------------------------------------------------*/ static void v_matchproto_(cli_func_t) mcf_help(struct cli *cli, const char * const *av, void *priv) { if (cli_o <= 0) VCLS_func_help(cli, av, priv); else mcf_askchild(cli, av, priv); } static void v_matchproto_(cli_func_t) mcf_help_json(struct cli *cli, const char * const *av, void *priv) { if (cli_o <= 0) VCLS_func_help_json(cli, av, priv); else mcf_askchild(cli, av, priv); } static struct cli_proto cli_auth[] = { { CLICMD_HELP, "", mcf_help, mcf_help_json }, { CLICMD_PING, "", VCLS_func_ping, VCLS_func_ping_json }, { CLICMD_AUTH, "", mcf_auth }, { CLICMD_QUIT, "", VCLS_func_close }, { NULL } }; /*--------------------------------------------------------------------*/ static void mgt_cli_cb_before(const struct cli *cli) { if (cli->priv == stderr) fprintf(stderr, "> %s\n", VSB_data(cli->cmd)); MGT_Complain(C_CLI, "CLI %s Rd %s", cli->ident, VSB_data(cli->cmd)); } static void mgt_cli_cb_after(const struct cli *cli) { 
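	/*
	 * cli->priv == stderr marks commands replayed from an -I init file:
	 * an incomplete command at the end of the file is always fatal, and
	 * any other failing command aborts startup unless its command line
	 * was prefixed with '-'.
	 */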
MGT_Complain(C_CLI, "CLI %s Wr %03u %s", cli->ident, cli->result, VSB_data(cli->sb)); if (cli->priv != stderr) return; if (cli->result == CLIS_TRUNCATED) ARGV_ERR("-I file had incomplete CLI command at the end\n"); if (cli->result != CLIS_OK && *VSB_data(cli->cmd) != '-') { ARGV_ERR("-I file CLI command failed (%d)\n%s\n", cli->result, VSB_data(cli->sb)); } } /*--------------------------------------------------------------------*/ void mgt_cli_init_cls(void) { mgt_cls = VCLS_New(NULL); AN(mgt_cls); VCLS_SetHooks(mgt_cls, mgt_cli_cb_before, mgt_cli_cb_after); VCLS_AddFunc(mgt_cls, MCF_NOAUTH, cli_auth); VCLS_AddFunc(mgt_cls, MCF_AUTH, cli_proto); VCLS_AddFunc(mgt_cls, MCF_AUTH, cli_debug); VCLS_AddFunc(mgt_cls, MCF_AUTH, cli_askchild); cli_buf = VSB_new_auto(); AN(cli_buf); } /*-------------------------------------------------------------------- * Get rid of all CLI sessions */ void mgt_cli_close_all(void) { VCLS_Destroy(&mgt_cls); } /*-------------------------------------------------------------------- * Callback whenever something happens to the input fd of the session. */ static int mgt_cli_callback2(const struct vev *e, int what) { int i; (void)what; i = VCLS_Poll(mgt_cls, e->priv, 0); return (i); } /*--------------------------------------------------------------------*/ void mgt_cli_setup(int fdi, int fdo, int auth, const char *ident, mgt_cli_close_f *closefunc, void *priv) { struct cli *cli; struct vev *ev; cli = VCLS_AddFd(mgt_cls, fdi, fdo, closefunc, priv); REPLACE(cli->ident, ident); if (!auth && secret_file != NULL) { cli->auth = MCF_NOAUTH; mgt_cli_challenge(cli); } else { cli->auth = MCF_AUTH; mcf_banner(cli, NULL, NULL); } AZ(VSB_finish(cli->sb)); (void)VCLI_WriteResult(fdo, cli->result, VSB_data(cli->sb)); ev = VEV_Alloc(); AN(ev); ev->name = cli->ident; ev->fd = fdi; ev->fd_flags = VEV__RD; ev->callback = mgt_cli_callback2; ev->priv = cli; AZ(VEV_Start(mgt_evb, ev)); } /*--------------------------------------------------------------------*/ static struct vsb * sock_id(const char *pfx, int fd) { struct vsb *vsb; char abuf1[VTCP_ADDRBUFSIZE], abuf2[VTCP_ADDRBUFSIZE]; char pbuf1[VTCP_PORTBUFSIZE], pbuf2[VTCP_PORTBUFSIZE]; vsb = VSB_new_auto(); AN(vsb); VTCP_myname(fd, abuf1, sizeof abuf1, pbuf1, sizeof pbuf1); VTCP_hisname(fd, abuf2, sizeof abuf2, pbuf2, sizeof pbuf2); VSB_printf(vsb, "%s %s %s %s %s", pfx, abuf2, pbuf2, abuf1, pbuf1); AZ(VSB_finish(vsb)); return (vsb); } /*--------------------------------------------------------------------*/ static int telnet_accept(const struct vev *ev, int what) { struct vsb *vsb; int i; (void)what; i = accept(ev->fd, NULL, NULL); if (i < 0 && errno == EBADF) return (1); if (i < 0) return (0); MCH_TrackHighFd(i); vsb = sock_id("telnet", i); mgt_cli_setup(i, i, 0, VSB_data(vsb), NULL, NULL); VSB_destroy(&vsb); return (0); } void mgt_cli_secret(const char *S_arg) { int i, fd; char buf[BUFSIZ]; /* Save in shmem */ mgt_SHM_static_alloc(S_arg, strlen(S_arg) + 1L, "Arg", "-S"); VJ_master(JAIL_MASTER_FILE); fd = open(S_arg, O_RDONLY); if (fd < 0) { fprintf(stderr, "Can not open secret-file \"%s\"\n", S_arg); exit(2); } VJ_master(JAIL_MASTER_LOW); MCH_TrackHighFd(fd); i = read(fd, buf, sizeof buf); if (i == 0) { fprintf(stderr, "Empty secret-file \"%s\"\n", S_arg); exit(2); } if (i < 0) { fprintf(stderr, "Can not read secret-file \"%s\"\n", S_arg); exit(2); } closefd(&fd); secret_file = S_arg; } static int v_matchproto_(vss_resolved_f) mct_callback(void *priv, const struct suckaddr *sa) { int sock; struct vsb *vsb = priv; const char *err; char 
abuf[VTCP_ADDRBUFSIZE]; char pbuf[VTCP_PORTBUFSIZE]; struct vev *ev; VJ_master(JAIL_MASTER_PRIVPORT); sock = VTCP_listen(sa, 10, &err); VJ_master(JAIL_MASTER_LOW); assert(sock != 0); // We know where stdin is if (sock > 0) { VTCP_myname(sock, abuf, sizeof abuf, pbuf, sizeof pbuf); VSB_printf(vsb, "%s %s\n", abuf, pbuf); ev = VEV_Alloc(); AN(ev); ev->fd = sock; ev->fd_flags = POLLIN; ev->callback = telnet_accept; AZ(VEV_Start(mgt_evb, ev)); } return (0); } void mgt_cli_telnet(const char *T_arg) { int error; const char *err; struct vsb *vsb; AN(T_arg); vsb = VSB_new_auto(); AN(vsb); error = VSS_resolver(T_arg, NULL, mct_callback, vsb, &err); if (err != NULL) ARGV_ERR("Could not resolve -T argument to address\n\t%s\n", err); AZ(error); AZ(VSB_finish(vsb)); if (VSB_len(vsb) == 0) ARGV_ERR("-T %s could not be listened on.\n", T_arg); /* Save in shmem */ mgt_SHM_static_alloc(VSB_data(vsb), VSB_len(vsb) + 1, "Arg", "-T"); VSB_destroy(&vsb); } /* Reverse CLI ("Master") connections --------------------------------*/ struct m_addr { unsigned magic; #define M_ADDR_MAGIC 0xbc6217ed const struct suckaddr *sa; VTAILQ_ENTRY(m_addr) list; }; static int M_fd = -1; static struct vev *M_poker, *M_conn; static double M_poll = 0.1; static VTAILQ_HEAD(,m_addr) m_addr_list = VTAILQ_HEAD_INITIALIZER(m_addr_list); static int v_matchproto_(mgt_cli_close_f) Marg_closer(void *priv) { (void)priv; M_fd = -1; return (0); } static int v_matchproto_(vev_cb_f) Marg_connect(const struct vev *e, int what) { struct vsb *vsb; struct m_addr *ma; assert(e == M_conn); (void)what; M_fd = VTCP_connected(M_fd); if (M_fd < 0) { MGT_Complain(C_INFO, "Could not connect to CLI-master: %s", VAS_errtxt(errno)); ma = VTAILQ_FIRST(&m_addr_list); AN(ma); VTAILQ_REMOVE(&m_addr_list, ma, list); VTAILQ_INSERT_TAIL(&m_addr_list, ma, list); if (M_poll < 10) M_poll++; return (1); } vsb = sock_id("master", M_fd); mgt_cli_setup(M_fd, M_fd, 0, VSB_data(vsb), Marg_closer, NULL); VSB_destroy(&vsb); M_poll = 1; return (1); } static int v_matchproto_(vev_cb_f) Marg_poker(const struct vev *e, int what) { int s; struct m_addr *ma; assert(e == M_poker); (void)what; M_poker->timeout = M_poll; /* XXX nasty ? 
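	 *
	 * M_poll works as a crude linear back-off: it starts at 0.1s, is
	 * bumped by one second for every failed connect in Marg_connect
	 * (capped around 10s) and drops back to 1s once a connection to
	 * the CLI-master succeeds.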
*/ if (M_fd > 0) return (0); ma = VTAILQ_FIRST(&m_addr_list); AN(ma); /* Try to connect asynchronously */ s = VTCP_connect(ma->sa, -1); if (s < 0) return (0); MCH_TrackHighFd(s); M_conn = VEV_Alloc(); AN(M_conn); M_conn->callback = Marg_connect; M_conn->name = "-M connector"; M_conn->fd_flags = VEV__WR; M_conn->fd = s; M_fd = s; AZ(VEV_Start(mgt_evb, M_conn)); return (0); } static int v_matchproto_(vss_resolved_f) marg_cb(void *priv, const struct suckaddr *sa) { struct m_addr *ma; (void)priv; ALLOC_OBJ(ma, M_ADDR_MAGIC); AN(ma); ma->sa = VSA_Clone(sa); VTAILQ_INSERT_TAIL(&m_addr_list, ma, list); return (0); } void mgt_cli_master(const char *M_arg) { const char *err; int error; AN(M_arg); error = VSS_resolver(M_arg, NULL, marg_cb, NULL, &err); if (err != NULL) ARGV_ERR("Could not resolve -M argument to address\n\t%s\n", err); AZ(error); if (VTAILQ_EMPTY(&m_addr_list)) ARGV_ERR("Could not resolve -M argument to address\n"); AZ(M_poker); M_poker = VEV_Alloc(); AN(M_poker); M_poker->timeout = M_poll; M_poker->callback = Marg_poker; M_poker->name = "-M poker"; AZ(VEV_Start(mgt_evb, M_poker)); } static int cli_cmp(const void *a, const void *b) { struct cli_cmd_desc * const * const aa = a; struct cli_cmd_desc * const * const bb = b; return (strcmp((*aa)->request, (*bb)->request)); } void mgt_DumpRstCli(void) { const struct cli_cmd_desc *cp; const char *p; int z; size_t j; qsort(cmds, ncmds, sizeof cmds[0], cli_cmp); for (z = 0; z < ncmds; z++, cp++) { cp = cmds[z]; if (!strncmp(cp->request, "debug.", 6)) continue; printf(".. _ref_cli_"); for (p = cp->request; *p; p++) fputc(*p == '.' ? '_' : *p, stdout); printf(":\n\n"); printf("%s\n", cp->syntax); for (j = 0; j < strlen(cp->syntax); j++) printf("~"); printf("\n"); printf(" %s\n", cp->help); if (*cp->doc != '\0') printf("\n%s\n", cp->doc); printf("\n"); } } varnish-7.5.0/bin/varnishd/mgt/mgt_jail.c000066400000000000000000000135261457605730600203300ustar00rootroot00000000000000/*- * Copyright (c) 2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * Jailing * */ #include "config.h" #include #include #include #include #include #include //lint -efile(766, sys/statvfs.h) #include #include "mgt/mgt.h" #include "common/heritage.h" #include "vav.h" /********************************************************************** * A "none" jail implementation which doesn't do anything. */ static int v_matchproto_(jail_init_f) vjn_init(char **args) { if (args != NULL && *args != NULL) ARGV_ERR("-jnone takes no arguments.\n"); return (0); } static void v_matchproto_(jail_master_f) vjn_master(enum jail_master_e jme) { (void)jme; } static void v_matchproto_(jail_subproc_f) vjn_subproc(enum jail_subproc_e jse) { (void)jse; } static const struct jail_tech jail_tech_none = { .magic = JAIL_TECH_MAGIC, .name = "none", .init = vjn_init, .master = vjn_master, .subproc = vjn_subproc, }; /**********************************************************************/ static const struct jail_tech *vjt; static const struct choice vj_choice[] = { #ifdef HAVE_SETPPRIV { "solaris", &jail_tech_solaris }, #endif { "unix", &jail_tech_unix }, { "none", &jail_tech_none }, { NULL, NULL }, }; void VJ_Init(const char *j_arg) { char **av; int i; if (j_arg != NULL) { av = VAV_Parse(j_arg, NULL, ARGV_COMMA); AN(av); if (av[0] != NULL) ARGV_ERR("-j argument: %s\n", av[0]); if (av[1] == NULL) ARGV_ERR("-j argument is empty\n"); vjt = MGT_Pick(vj_choice, av[1], "jail"); CHECK_OBJ_NOTNULL(vjt, JAIL_TECH_MAGIC); (void)vjt->init(av + 2); VAV_Free(av); } else { /* * Go through list of jail technologies until one * succeeds, falling back to "none". */ for (i = 0; vj_choice[i].name != NULL; i++) { vjt = vj_choice[i].ptr; CHECK_OBJ_NOTNULL(vjt, JAIL_TECH_MAGIC); if (!vjt->init(NULL)) break; } } VSB_printf(vident, ",-j%s", vjt->name); } void VJ_master(enum jail_master_e jme) { CHECK_OBJ_NOTNULL(vjt, JAIL_TECH_MAGIC); vjt->master(jme); } void VJ_subproc(enum jail_subproc_e jse) { CHECK_OBJ_NOTNULL(vjt, JAIL_TECH_MAGIC); vjt->subproc(jse); } int VJ_make_workdir(const char *dname) { int i; AN(dname); CHECK_OBJ_NOTNULL(vjt, JAIL_TECH_MAGIC); if (vjt->make_workdir != NULL) { i = vjt->make_workdir(dname, NULL, NULL); if (i) return (i); VJ_master(JAIL_MASTER_FILE); } else { VJ_master(JAIL_MASTER_FILE); if (mkdir(dname, 0755) < 0 && errno != EEXIST) ARGV_ERR("Cannot create working directory '%s': %s\n", dname, VAS_errtxt(errno)); } if (chdir(dname) < 0) ARGV_ERR("Cannot change to working directory '%s': %s\n", dname, VAS_errtxt(errno)); i = open("_.testfile", O_RDWR|O_CREAT|O_EXCL, 0600); if (i < 0) ARGV_ERR("Cannot create test-file in %s (%s)\n" "Check permissions (or delete old directory)\n", dname, VAS_errtxt(errno)); #ifdef ST_NOEXEC struct statvfs vfs[1]; /* deliberately ignore fstatvfs errors */ if (! 
fstatvfs(i, vfs) && vfs->f_flag & ST_NOEXEC) { closefd(&i); AZ(unlink("_.testfile")); ARGV_ERR("Working directory %s (-n argument) " "can not reside on a file system mounted noexec\n", dname); } #endif closefd(&i); AZ(unlink("_.testfile")); VJ_master(JAIL_MASTER_LOW); return (0); } int VJ_make_subdir(const char *dname, const char *what, struct vsb *vsb) { int e; AN(dname); AN(what); CHECK_OBJ_NOTNULL(vjt, JAIL_TECH_MAGIC); if (vjt->make_subdir != NULL) return (vjt->make_subdir(dname, what, vsb)); VJ_master(JAIL_MASTER_FILE); if (mkdir(dname, 0755) < 0 && errno != EEXIST) { e = errno; if (vsb != NULL) { VSB_printf(vsb, "Cannot create %s directory '%s': %s\n", what, dname, VAS_errtxt(e)); } else { MGT_Complain(C_ERR, "Cannot create %s directory '%s': %s", what, dname, VAS_errtxt(e)); } return (1); } VJ_master(JAIL_MASTER_LOW); return (0); } void VJ_unlink(const char *fname, int ignore_enoent) { VJ_master(JAIL_MASTER_FILE); if (unlink(fname)) { if (errno != ENOENT || !ignore_enoent) fprintf(stderr, "Could not delete '%s': %s\n", fname, strerror(errno)); } VJ_master(JAIL_MASTER_LOW); } void VJ_rmdir(const char *dname) { VJ_master(JAIL_MASTER_FILE); if (rmdir(dname)) { fprintf(stderr, "Could not rmdir '%s': %s\n", dname, strerror(errno)); } VJ_master(JAIL_MASTER_LOW); } void VJ_fix_fd(int fd, enum jail_fixfd_e what) { CHECK_OBJ_NOTNULL(vjt, JAIL_TECH_MAGIC); if (vjt->fixfd != NULL) vjt->fixfd(fd, what); } varnish-7.5.0/bin/varnishd/mgt/mgt_jail_solaris.c000066400000000000000000000322031457605730600220550ustar00rootroot00000000000000/*- * Copyright (c) 2006-2011 Varnish Software AS * Copyright 2011-2020 UPLEX - Nils Goroll Systemoptimierung * All rights reserved. * * Author: Poul-Henning Kamp * Nils Goroll * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * "Jailing" *1) child processes on Solaris and Solaris-derivatives *2) * ==================================================================== * * *1) The name is motivated by the availability of the -j command line * option. Jailing Varnish is not to be confused with BSD Jails or * Solaris Zones. * * In Solaris parlour, jail == least privileges * * *2) e.g. illumos, SmartOS, OmniOS etc. 
* * * Note on use of symbolic PRIV_* constants * ---------------------------------------- * * We assume backwards compatibility only for Solaris Releases after the * OpenSolaris Launch. For privileges which existed at the time of the * OpenSolaris Launch, we use the constants from sys/priv_names.h and assert * that priv_addset must succeed. * * For privileges which have been added later, we need to use priv strings in * order not to break builds of varnish on older platforms. To remain binary * compatible, we can't assert that priv_addset succeeds, but we may assert that * it either succeeds or fails with EINVAL. * * See priv_setop_check() * * Note on introduction of new privileges (or: lack of forward compatibility) * -------------------------------------------------------------------------- * * For optimal build and binary forward compatibility, we could use subtractive * set specs like * * basic,!file_link_any,!proc_exec,!proc_fork,!proc_info,!proc_session * * which would implicitly keep any privileges newly introduced to the 'basic' * set. * * But we have a preference for making an informed decision about which * privileges varnish subprocesses should have, so we prefer to risk breaking * varnish temporarily on newer kernels and be notified of missing privileges * through bug reports. * * Notes on the SNOCD flag * ----------------------- * * On Solaris, any uid/gid fiddling which can be interpreted as 'waiving * privileges' will lead to the processes' SNOCD flag being set, disabling core * dumps unless explicitly allowed using coreadm (see below). There is no * equivalent to Linux PR_SET_DUMPABLE. The only way to clear the flag is a call * to some form of exec(). The presence of the SNOCD flag also prevents many * process manipulations from other processes with the same uid/gid unless the * latter have the proc_owner privilege. * * Thus, if we want to run subprocesses with a different uid/gid than the master * process, we cannot avoid the SNOCD flag for those subprocesses not exec'ing * (VCC, VCLLOAD, WORKER). * * * We should, however, avoid to accidentally set the SNOCD flag when setting * privileges (see https://www.varnish-cache.org/trac/ticket/671 ) * * When changing the logic herein, always check with mdb -k. Replace _PID_ with * the pid of your varnish child, the result should be 0, otherwise a regression * has been introduced. 
* * > 0t_PID_::pid2proc | ::print proc_t p_flag | >a * > ( // ARG_ERR #include #include #include #include "mgt/mgt.h" #include "common/heritage.h" #ifdef HAVE_PRIV_H #include #endif /* renamed from sys/priv_const.h */ #define VJS_EFFECTIVE 0 #define VJS_INHERITABLE 1 #define VJS_PERMITTED 2 #define VJS_LIMIT 3 #define VJS_NSET (VJS_LIMIT + 1) #define VJS_MASK(x) (1U << (x)) /* to denote sharing */ #define JAIL_MASTER_ANY 0 const priv_ptype_t vjs_ptype[VJS_NSET] = { [VJS_EFFECTIVE] = PRIV_EFFECTIVE, [VJS_INHERITABLE] = PRIV_INHERITABLE, [VJS_PERMITTED] = PRIV_PERMITTED, [VJS_LIMIT] = PRIV_LIMIT }; static priv_set_t *vjs_sets[JAIL_LIMIT][VJS_NSET]; static priv_set_t *vjs_inverse[JAIL_LIMIT][VJS_NSET]; static priv_set_t *vjs_proc_setid; // for vjs_setuid static void v_matchproto_(jail_master_f) vjs_master(enum jail_master_e jme); /*------------------------------------------------------------*/ static inline int priv_setop_check(int a) { if (a == 0) return (1); if (errno == EINVAL) return (1); return (0); } #define priv_setop_assert(a) assert(priv_setop_check(a)) /*------------------------------------------------------------*/ static int vjs_priv_on(int vs, priv_set_t **set) { assert(vs >= 0); assert(vs < VJS_NSET); return (setppriv(PRIV_ON, vjs_ptype[vs], set[vs])); } /* ------------------------------------------------------------ * initialization of privilege sets from mgt_jail_solaris_tbl.h * and implicit rules documented therein */ static inline void vjs_add(priv_set_t *sets[VJS_NSET], unsigned mask, const char *priv) { int i; for (i = 0; i < VJS_NSET; i++) if (mask & VJS_MASK(i)) priv_setop_assert(priv_addset(sets[i], priv)); } /* add SUBPROC INHERITABLE and PERMITTED to MASTER PERMITTED */ static int vjs_master_rules(void) { priv_set_t *punion = priv_allocset(); int vs, vj; AN(punion); for (vs = VJS_INHERITABLE; vs <= VJS_PERMITTED; vs ++) { priv_emptyset(punion); for (vj = JAIL_SUBPROC; vj < JAIL_LIMIT; vj++) priv_union(vjs_sets[vj][vs], punion); priv_union(punion, vjs_sets[JAIL_MASTER_ANY][VJS_PERMITTED]); } priv_freeset(punion); return (0); } static priv_set_t * vjs_alloc(void) { priv_set_t *s; s = priv_allocset(); AN(s); priv_emptyset(s); return (s); } static int v_matchproto_(jail_init_f) vjs_init(char **args) { priv_set_t **sets, *permitted, *inheritable, *user = NULL; const char *e; int vj, vs; if (args != NULL && *args != NULL) { for (;*args != NULL; args++) { if (!strncmp(*args, "worker=", 7)) { user = priv_str_to_set((*args) + 7, ",", &e); if (user == NULL) ARGV_ERR( "-jsolaris: parsing worker= " "argument failed near %s.\n", e); continue; } ARGV_ERR("-jsolaris: unknown sub-argument '%s'\n", *args); } } permitted = vjs_alloc(); AN(permitted); AZ(getppriv(PRIV_PERMITTED, permitted)); inheritable = vjs_alloc(); AN(inheritable); AZ(getppriv(PRIV_INHERITABLE, inheritable)); priv_union(permitted, inheritable); /* init privset for vjs_setuid() */ vjs_proc_setid = vjs_alloc(); AN(vjs_proc_setid); priv_setop_assert(priv_addset(vjs_proc_setid, PRIV_PROC_SETID)); assert(JAIL_MASTER_ANY < JAIL_SUBPROC); /* alloc privsets. 
* for master, PERMITTED and LIMIT are shared */ for (vj = 0; vj < JAIL_SUBPROC; vj++) for (vs = 0; vs < VJS_NSET; vs++) { if (vj == JAIL_MASTER_ANY || vs < VJS_PERMITTED) { vjs_sets[vj][vs] = vjs_alloc(); vjs_inverse[vj][vs] = vjs_alloc(); } else { vjs_sets[vj][vs] = vjs_sets[JAIL_MASTER_ANY][vs]; vjs_inverse[vj][vs] = vjs_inverse[JAIL_MASTER_ANY][vs]; } } for (; vj < JAIL_LIMIT; vj++) for (vs = 0; vs < VJS_NSET; vs++) { vjs_sets[vj][vs] = vjs_alloc(); vjs_inverse[vj][vs] = vjs_alloc(); } /* init from table */ #define PRIV(name, mask, priv) vjs_add(vjs_sets[JAIL_ ## name], mask, priv); #include "mgt_jail_solaris_tbl.h" if (user != NULL) priv_union(user, vjs_sets[JAIL_SUBPROC_WORKER][VJS_EFFECTIVE]); /* mask by available privs */ for (vj = 0; vj < JAIL_LIMIT; vj++) { sets = vjs_sets[vj]; priv_intersect(permitted, sets[VJS_EFFECTIVE]); priv_intersect(permitted, sets[VJS_PERMITTED]); priv_intersect(inheritable, sets[VJS_INHERITABLE]); } /* SUBPROC implicit rules */ for (vj = JAIL_SUBPROC; vj < JAIL_LIMIT; vj++) { sets = vjs_sets[vj]; priv_union(sets[VJS_EFFECTIVE], sets[VJS_PERMITTED]); priv_union(sets[VJS_PERMITTED], sets[VJS_LIMIT]); priv_union(sets[VJS_INHERITABLE], sets[VJS_LIMIT]); } vjs_master_rules(); /* MASTER implicit rules */ for (vj = 0; vj < JAIL_SUBPROC; vj++) { sets = vjs_sets[vj]; priv_union(sets[VJS_EFFECTIVE], sets[VJS_PERMITTED]); priv_union(sets[VJS_PERMITTED], sets[VJS_LIMIT]); priv_union(sets[VJS_INHERITABLE], sets[VJS_LIMIT]); } /* generate inverse */ for (vj = 0; vj < JAIL_LIMIT; vj++) for (vs = 0; vs < VJS_NSET; vs++) { priv_copyset(vjs_sets[vj][vs], vjs_inverse[vj][vs]); priv_inverse(vjs_inverse[vj][vs]); } vjs_master(JAIL_MASTER_LOW); priv_freeset(permitted); priv_freeset(inheritable); /* XXX LEAK: no _fini for priv_freeset() */ return (0); } static void vjs_waive(int jail) { priv_set_t **sets; int i; assert(jail >= 0); assert(jail < JAIL_LIMIT); sets = vjs_inverse[jail]; for (i = 0; i < VJS_NSET; i++) AZ(setppriv(PRIV_OFF, vjs_ptype[i], sets[i])); } static void vjs_setuid(void) { if (priv_ineffect(PRIV_PROC_SETID)) { AZ(setppriv(PRIV_OFF, PRIV_EFFECTIVE, vjs_proc_setid)); AZ(setppriv(PRIV_OFF, PRIV_PERMITTED, vjs_proc_setid)); } else { MGT_Complain(C_SECURITY, "Privilege %s missing, will not change uid/gid", PRIV_PROC_SETID); } } static void v_matchproto_(jail_subproc_f) vjs_subproc(enum jail_subproc_e jse) { AZ(vjs_priv_on(VJS_EFFECTIVE, vjs_sets[jse])); AZ(vjs_priv_on(VJS_INHERITABLE, vjs_sets[jse])); vjs_setuid(); vjs_waive(jse); } static void v_matchproto_(jail_master_f) vjs_master(enum jail_master_e jme) { assert(jme < JAIL_SUBPROC); AZ(vjs_priv_on(VJS_EFFECTIVE, vjs_sets[jme])); AZ(vjs_priv_on(VJS_INHERITABLE, vjs_sets[jme])); vjs_waive(jme); } const struct jail_tech jail_tech_solaris = { .magic = JAIL_TECH_MAGIC, .name = "solaris", .init = vjs_init, .master = vjs_master, // .make_workdir = vjs_make_workdir, // .storage_file = vjs_storage_file, .subproc = vjs_subproc, }; #endif /* HAVE_SETPPRIV */ varnish-7.5.0/bin/varnishd/mgt/mgt_jail_solaris_tbl.h000066400000000000000000000071141457605730600227260ustar00rootroot00000000000000/*- * Copyright 2020 UPLEX - Nils Goroll Systemoptimierung * All rights reserved. * * Author: Nils Goroll * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * definition of privileges to use for the different Varnish Jail (VJ) * levels */ #define E VJS_MASK(VJS_EFFECTIVE) // as-is #define I VJS_MASK(VJS_INHERITABLE) // as-is #define P VJS_MASK(VJS_PERMITTED) // joined with effective #define L VJS_MASK(VJS_LIMIT) // joined with of all the above /* ------------------------------------------------------------ * MASTER * - only EFFECTIVE & INHERITABLE are per JAIL state * - other priv sets are shared across all MASTER_* JAIL states * * MASTER implicit rules (vjs_master_rules()) * - INHERITABLE and PERMITTED from SUBPROC* joined into PERMITTED * - implicit rules from above */ PRIV(MASTER_LOW, E , "file_write") // XXX vcl_boot PRIV(MASTER_LOW, E , "file_read") // XXX library open PRIV(MASTER_LOW, E , "net_access") PRIV(MASTER_SYSTEM, E|I , PRIV_PROC_EXEC) PRIV(MASTER_SYSTEM, E|I , PRIV_PROC_FORK) PRIV(MASTER_SYSTEM, E|I , "file_read") PRIV(MASTER_SYSTEM, E|I , "file_write") PRIV(MASTER_FILE, E , "file_read") PRIV(MASTER_FILE, E , "file_write") PRIV(MASTER_STORAGE, E , "file_read") PRIV(MASTER_STORAGE, E , "file_write") PRIV(MASTER_PRIVPORT, E , "file_write") // bind(AF_UNIX) PRIV(MASTER_PRIVPORT, E , PRIV_FILE_CHOWN) // user= PRIV(MASTER_PRIVPORT, E , PRIV_FILE_OWNER) // mode= PRIV(MASTER_PRIVPORT, E , "net_access") PRIV(MASTER_PRIVPORT, E , PRIV_NET_PRIVADDR) PRIV(MASTER_KILL, E , PRIV_PROC_OWNER) /* ------------------------------------------------------------ * SUBPROC */ PRIV(SUBPROC_VCC, E , PRIV_PROC_SETID) // waived after setuid PRIV(SUBPROC_VCC, E , "file_read") PRIV(SUBPROC_VCC, E , "file_write") PRIV(SUBPROC_CC, E , PRIV_PROC_SETID) // waived after setuid PRIV(SUBPROC_CC, E|I , PRIV_PROC_EXEC) PRIV(SUBPROC_CC, E|I , PRIV_PROC_FORK) PRIV(SUBPROC_CC, E|I , "file_read") PRIV(SUBPROC_CC, E|I , "file_write") PRIV(SUBPROC_VCLLOAD, E , PRIV_PROC_SETID) // waived after setuid PRIV(SUBPROC_VCLLOAD, E , "file_read") PRIV(SUBPROC_WORKER, E , PRIV_PROC_SETID) // waived after setuid PRIV(SUBPROC_WORKER, E , "net_access") PRIV(SUBPROC_WORKER, E , "file_read") PRIV(SUBPROC_WORKER, E , "file_write") PRIV(SUBPROC_WORKER, P , PRIV_PROC_INFO) // vmod_unix #undef E #undef I #undef P #undef L #undef PRIV varnish-7.5.0/bin/varnishd/mgt/mgt_jail_unix.c000066400000000000000000000163151457605730600213720ustar00rootroot00000000000000/*- * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. 
* * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Jailing processes the UNIX way, using setuid(2) etc. */ #include "config.h" #include #include #include #include #include #include #include #include #include "mgt/mgt.h" #include "common/heritage.h" #ifdef __linux__ #include #endif static gid_t vju_mgt_gid; static uid_t vju_uid; static gid_t vju_gid; static const char *vju_user; static uid_t vju_wrkuid; static gid_t vju_wrkgid; static const char *vju_wrkuser; static gid_t vju_cc_gid; static int vju_cc_gid_set; #ifndef VARNISH_USER #define VARNISH_USER "varnish" #endif #ifndef VCACHE_USER #define VCACHE_USER "vcache" #endif #ifndef NGID #define NGID 2000 #endif static int vju_getuid(const char *arg) { struct passwd *pw; pw = getpwnam(arg); if (pw != NULL) { vju_user = strdup(arg); AN(vju_user); vju_uid = pw->pw_uid; vju_gid = pw->pw_gid; } endpwent(); return (pw == NULL ? -1 : 0); } static int vju_getwrkuid(const char *arg) { struct passwd *pw; pw = getpwnam(arg); if (pw != NULL) { vju_wrkuser = strdup(arg); AN(vju_wrkuser); vju_wrkuid = pw->pw_uid; vju_wrkgid = pw->pw_gid; } endpwent(); return (pw == NULL ? -1 : 0); } static int vju_getccgid(const char *arg) { struct group *gr; gr = getgrnam(arg); if (gr != NULL) { vju_cc_gid_set = 1; vju_cc_gid = gr->gr_gid; } endgrent(); return (gr == NULL ? 
-1 : 0); } /********************************************************************** */ static int v_matchproto_(jail_init_f) vju_init(char **args) { if (args == NULL) { /* Autoconfig */ if (geteuid() != 0) return (1); if (vju_getuid(VARNISH_USER)) return (1); } else { if (geteuid() != 0) ARGV_ERR("Unix Jail: Must be root.\n"); for (;*args != NULL; args++) { if (!strncmp(*args, "user=", 5)) { if (vju_getuid((*args) + 5)) ARGV_ERR( "Unix jail: %s user not found.\n", (*args) + 5); continue; } if (!strncmp(*args, "workuser=", 9)) { if (vju_getwrkuid((*args) + 9)) ARGV_ERR( "Unix jail: %s user not found.\n", (*args) + 9); continue; } if (!strncmp(*args, "ccgroup=", 8)) { if (vju_getccgid((*args) + 8)) ARGV_ERR( "Unix jail: %s group not found.\n", (*args) + 8); continue; } ARGV_ERR("Unix jail: unknown sub-argument '%s'\n", *args); } if (vju_user == NULL && vju_getuid(VARNISH_USER)) ARGV_ERR("Unix jail: %s user not found.\n", VARNISH_USER); } AN(vju_user); vju_mgt_gid = getgid(); if (vju_wrkuser == NULL && vju_getwrkuid(VCACHE_USER)) { vju_wrkuid = vju_uid; vju_wrkgid = vju_gid; } if (vju_wrkuser != NULL && vju_wrkgid != vju_gid) ARGV_ERR("Unix jail: user %s and %s have " "different login groups\n", vju_user, vju_wrkuser); /* Do an explicit JAIL_MASTER_LOW */ AZ(setegid(vju_gid)); AZ(seteuid(vju_uid)); return (0); } static void v_matchproto_(jail_master_f) vju_master(enum jail_master_e jme) { ASSERT_JAIL_MASTER(jme); if (jme == JAIL_MASTER_LOW) { AZ(setegid(vju_gid)); AZ(seteuid(vju_uid)); } else { AZ(seteuid(0)); AZ(setegid(vju_mgt_gid)); } } static void v_matchproto_(jail_subproc_f) vju_subproc(enum jail_subproc_e jse) { int i; gid_t gid_list[NGID]; ASSERT_JAIL_SUBPROC(jse); AZ(seteuid(0)); if (vju_wrkuser != NULL && (jse == JAIL_SUBPROC_VCLLOAD || jse == JAIL_SUBPROC_WORKER)) { AZ(setgid(vju_wrkgid)); AZ(initgroups(vju_wrkuser, vju_wrkgid)); } else { AZ(setgid(vju_gid)); AZ(initgroups(vju_user, vju_gid)); } if (jse == JAIL_SUBPROC_CC && vju_cc_gid_set) { /* Add the optional extra group for the C-compiler access */ i = getgroups(NGID, gid_list); assert(i >= 0); gid_list[i++] = vju_cc_gid; AZ(setgroups(i, gid_list)); } if (vju_wrkuser != NULL && (jse == JAIL_SUBPROC_VCLLOAD || jse == JAIL_SUBPROC_WORKER)) { AZ(setuid(vju_wrkuid)); } else { AZ(setuid(vju_uid)); } #ifdef __linux__ /* * On linux mucking about with uid/gid disables core-dumps, * reenable them again. */ if (prctl(PR_SET_DUMPABLE, 1) != 0) { MGT_Complain(C_INFO, "Could not set dumpable bit. 
Core dumps turned off"); } #endif } static int v_matchproto_(jail_make_dir_f) vju_make_subdir(const char *dname, const char *what, struct vsb *vsb) { int e; AN(dname); AN(what); AZ(seteuid(0)); if (mkdir(dname, 0755) < 0 && errno != EEXIST) { e = errno; if (vsb != NULL) { VSB_printf(vsb, "Cannot create %s directory '%s': %s\n", what, dname, VAS_errtxt(e)); } else { MGT_Complain(C_ERR, "Cannot create %s directory '%s': %s", what, dname, VAS_errtxt(e)); } return (1); } AZ(chown(dname, vju_uid, vju_gid)); AZ(seteuid(vju_uid)); return (0); } static int v_matchproto_(jail_make_dir_f) vju_make_workdir(const char *dname, const char *what, struct vsb *vsb) { AN(dname); AZ(what); AZ(vsb); AZ(seteuid(0)); if (mkdir(dname, 0755) < 0 && errno != EEXIST) { MGT_Complain(C_ERR, "Cannot create working directory '%s': %s", dname, VAS_errtxt(errno)); return (1); } //lint -e{570} AZ(chown(dname, -1, vju_gid)); AZ(seteuid(vju_uid)); return (0); } static void v_matchproto_(jail_fixfd_f) vju_fixfd(int fd, enum jail_fixfd_e what) { /* Called under JAIL_MASTER_FILE */ switch (what) { case JAIL_FIXFD_FILE: AZ(fchmod(fd, 0600)); AZ(fchown(fd, vju_wrkuid, vju_wrkgid)); break; case JAIL_FIXFD_VSMMGT: AZ(fchmod(fd, 0750)); AZ(fchown(fd, vju_uid, vju_gid)); break; case JAIL_FIXFD_VSMWRK: AZ(fchmod(fd, 0750)); AZ(fchown(fd, vju_wrkuid, vju_wrkgid)); break; default: WRONG("Ain't Fixin'"); } } const struct jail_tech jail_tech_unix = { .magic = JAIL_TECH_MAGIC, .name = "unix", .init = vju_init, .master = vju_master, .make_subdir = vju_make_subdir, .make_workdir = vju_make_workdir, .fixfd = vju_fixfd, .subproc = vju_subproc, }; varnish-7.5.0/bin/varnishd/mgt/mgt_main.c000066400000000000000000000554701457605730600203410ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2017 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * The management process and CLI handling */ #include "config.h" #include #include #include #include #include #include #include #include #include #include #include #include "mgt/mgt.h" #include "common/heritage.h" #include "hash/hash_slinger.h" #include "libvcc.h" #include "vcli_serve.h" #include "vct.h" #include "vev.h" #include "vfil.h" #include "vin.h" #include "vpf.h" #include "vrnd.h" #include "vsha256.h" #include "vsub.h" #include "vtim.h" #include "waiter/mgt_waiter.h" #include "vsa.h" #include "vus.h" struct heritage heritage; unsigned d_flag = 0; struct vev_root *mgt_evb; int exit_status = 0; struct vsb *vident; struct VSC_mgt *VSC_C_mgt; static int I_fd = -1; static char *workdir; static struct vfil_path *vcl_path = NULL; static const char opt_spec[] = "?a:b:CdE:f:Fh:i:I:j:l:M:n:P:p:r:S:s:T:t:VW:x:"; /*--------------------------------------------------------------------*/ static void usage(void) { char *p; #define FMT_NONL " %-28s # %s" #define FMT FMT_NONL "\n" printf( "Usage: varnishd [options]\n"); printf("\nBasic options:\n"); printf(FMT, "-a [=]address[:port][,proto]", "HTTP listen address and port"); printf(FMT, " [,user=][,group=]", "Can be specified multiple times."); printf(FMT, " [,mode=]", " default: \":80,HTTP\""); printf(FMT, "", "Proto can be \"PROXY\" or \"HTTP\" (default)"); printf(FMT, "", "user, group and mode set permissions for"); printf(FMT, "", " a Unix domain socket."); printf(FMT, "-b none", "No backend"); printf(FMT, "-b [addr[:port]|path]", "Backend address and port"); printf(FMT, "", " or socket file path"); printf(FMT, "", " default: \":80\""); printf(FMT, "-f vclfile", "VCL program"); printf(FMT, "", "Can be specified multiple times."); printf(FMT, "-n dir", "Working directory"); p = VIN_n_Arg(NULL); AN(p); printf(FMT_NONL " default: %s\n", "", "", p); free(p); printf("\n-b can be used only once, and not together with -f\n"); printf("\nDocumentation options:\n"); printf(FMT, "-?", "Prints this usage message"); printf(FMT, "-x parameter", "Parameter documentation"); printf(FMT, "-x vsl", "VSL record documentation"); printf(FMT, "-x cli", "CLI command documentation"); printf(FMT, "-x builtin", "Builtin VCL program"); printf(FMT, "-x optstring", "List of getopt options"); printf("\nOperations options:\n"); printf(FMT, "-F", "Run in foreground"); printf(FMT, "-T address[:port]", "CLI address"); printf(FMT, "", "Can be specified multiple times."); printf(FMT, "-M address:port", "Reverse CLI destination"); printf(FMT, "", "Can be specified multiple times."); printf(FMT, "-P file", "PID file"); printf(FMT, "-i identity", "Identity of varnish instance"); printf(FMT, "-I clifile", "Initialization CLI commands"); printf(FMT, "-E extension", "Load extension"); printf("\nTuning options:\n"); printf(FMT, "-t TTL", "Default TTL"); printf(FMT, "-p param=value", "set parameter"); printf(FMT, "", "Can be specified multiple times."); printf(FMT, "-s [name=]kind[,options]", "Storage specification"); printf(FMT, "", "Can be specified multiple times."); #ifdef HAVE_UMEM_H printf(FMT, "", " -s default (=umem)"); printf(FMT, "", " -s umem"); #else printf(FMT, "", " -s default (=malloc)"); #endif printf(FMT, "", " -s malloc"); printf(FMT, "", " -s file"); printf(FMT, "-l vsl", "Size of shared memory log"); printf(FMT, "", " vsl: space for VSL records [80m]"); printf("\nSecurity options:\n"); printf(FMT, "-r param[,param...]", "Set parameters read-only from CLI"); printf(FMT, "", "Can be specified multiple times."); printf(FMT, "-S secret-file", "Secret file for CLI 
authentication"); printf(FMT, "-j jail[,options]", "Jail specification"); #ifdef HAVE_SETPPRIV printf(FMT, "", " -j solaris"); #endif printf(FMT, "", " -j unix"); printf(FMT, "", " -j none"); printf("\nAdvanced/Dev/Debug options:\n"); printf(FMT, "-d", "debug mode"); printf(FMT, "", "Stay in foreground, CLI on stdin."); printf(FMT, "-C", "Output VCL code compiled to C language"); printf(FMT, "-V", "version"); printf(FMT, "-h kind[,options]", "Hash specification"); printf(FMT, "-W waiter", "Waiter implementation"); } /*--------------------------------------------------------------------*/ struct arg_list { unsigned magic; #define ARG_LIST_MAGIC 0x7b0d1bc4 char arg[2]; char *val; VTAILQ_ENTRY(arg_list) list; void *priv; }; static VTAILQ_HEAD(, arg_list) arglist = VTAILQ_HEAD_INITIALIZER(arglist); static struct arg_list * arg_list_add(const char arg, const char *val) { struct arg_list *alp; ALLOC_OBJ(alp, ARG_LIST_MAGIC); AN(alp); alp->arg[0] = arg; alp->arg[1] = '\0'; REPLACE(alp->val, val); VTAILQ_INSERT_TAIL(&arglist, alp, list); return (alp); } static unsigned arg_list_count(const char *arg) { unsigned retval = 0; struct arg_list *alp; VTAILQ_FOREACH(alp, &arglist, list) { if (!strcmp(alp->arg, arg)) retval++; } return (retval); } /*--------------------------------------------------------------------*/ static void cli_check(const struct cli *cli) { if (cli->result == CLIS_OK || cli->result == CLIS_TRUNCATED) { AZ(VSB_finish(cli->sb)); if (VSB_len(cli->sb) > 0) fprintf(stderr, "Warnings:\n%s\n", VSB_data(cli->sb)); VSB_clear(cli->sb); return; } AZ(VSB_finish(cli->sb)); fprintf(stderr, "Error:\n%s\n", VSB_data(cli->sb)); exit(2); } /*-------------------------------------------------------------------- * This function is called when the CLI on stdin is closed. */ static int v_matchproto_(mgt_cli_close_f) mgt_stdin_close(void *priv) { (void)priv; return (-42); } /*-------------------------------------------------------------------- * Autogenerate a -S file using strong random bits from the kernel. 
*/ static void mgt_secret_atexit(void) { /* Only master process */ if (getpid() != heritage.mgt_pid) return; VJ_master(JAIL_MASTER_FILE); (void)unlink("_.secret"); VJ_master(JAIL_MASTER_LOW); } static const char * make_secret(const char *dirname) { char *fn; int fdo; int i; unsigned char b; assert(asprintf(&fn, "%s/_.secret", dirname) > 0); VJ_master(JAIL_MASTER_FILE); fdo = open(fn, O_RDWR|O_CREAT|O_TRUNC, 0640); if (fdo < 0) ARGV_ERR("Cannot create secret-file in %s (%s)\n", dirname, VAS_errtxt(errno)); for (i = 0; i < 256; i++) { AZ(VRND_RandomCrypto(&b, 1)); assert(1 == write(fdo, &b, 1)); } closefd(&fdo); VJ_master(JAIL_MASTER_LOW); AZ(atexit(mgt_secret_atexit)); return (fn); } static void mgt_Cflag_atexit(void) { /* Only master process */ if (getpid() != heritage.mgt_pid) return; if (arg_list_count("E")) { vext_cleanup(1); VJ_rmdir("vext_cache"); } VJ_rmdir("vmod_cache"); (void)chdir("/"); VJ_rmdir(workdir); } /*--------------------------------------------------------------------*/ static void mgt_tests(void) { assert(VTIM_parse("Sun, 06 Nov 1994 08:49:37 GMT") == 784111777); assert(VTIM_parse("Sunday, 06-Nov-94 08:49:37 GMT") == 784111777); assert(VTIM_parse("Sun Nov 6 08:49:37 1994") == 784111777); /* Check that our VSHA256 works */ VSHA256_Test(); } static void mgt_initialize(struct cli *cli) { static unsigned clilim = 32768; /* for ASSERT_MGT() */ heritage.mgt_pid = getpid(); /* Create a cli for convenience in otherwise CLI functions */ INIT_OBJ(cli, CLI_MAGIC); cli[0].sb = VSB_new_auto(); AN(cli[0].sb); cli[0].result = CLIS_OK; cli[0].limit = &clilim; mgt_cli_init_cls(); // CLI commands can be registered MCF_InitParams(cli); VCC_VCL_Range(&heritage.min_vcl_version, &heritage.max_vcl_version); cli_check(cli); } static void mgt_x_arg(const char *x_arg) { if (!strcmp(x_arg, "parameter")) MCF_DumpRstParam(); else if (!strcmp(x_arg, "vsl")) mgt_DumpRstVsl(); else if (!strcmp(x_arg, "cli")) mgt_DumpRstCli(); else if (!strcmp(x_arg, "builtin")) mgt_DumpBuiltin(); else if (!strcmp(x_arg, "optstring")) (void)printf("%s\n", opt_spec); else ARGV_ERR("Invalid -x argument\n"); } /*--------------------------------------------------------------------*/ #define ERIC_MAGIC 0x2246988a /* Eric is not random */ static int mgt_eric(void) { int eric_pipes[2]; unsigned u; ssize_t sz; AZ(pipe(eric_pipes)); switch (fork()) { case -1: fprintf(stderr, "Fork() failed: %s\n", VAS_errtxt(errno)); exit(-1); case 0: closefd(&eric_pipes[0]); assert(setsid() > 1); VFIL_null_fd(STDIN_FILENO); return (eric_pipes[1]); default: break; } closefd(&eric_pipes[1]); sz = read(eric_pipes[0], &u, sizeof u); if (sz == sizeof u && u == ERIC_MAGIC) exit(0); else if (sz == sizeof u && u != 0) exit(u); else exit(-1); } static void mgt_eric_im_done(int eric_fd, unsigned u) { if (u == 0) u = ERIC_MAGIC; VFIL_null_fd(STDIN_FILENO); VFIL_null_fd(STDOUT_FILENO); VFIL_null_fd(STDERR_FILENO); assert(write(eric_fd, &u, sizeof u) == sizeof u); closefd(&eric_fd); } /*--------------------------------------------------------------------*/ static int v_matchproto_(vev_cb_f) mgt_sigint(const struct vev *e, int what) { (void)e; (void)what; MGT_Complain(C_ERR, "Manager got %s", e->name); (void)fflush(stdout); if (MCH_Running()) MCH_Stop_Child(); return (-42); } /*--------------------------------------------------------------------*/ static int v_matchproto_(vev_cb_f) mgt_uptime(const struct vev *e, int what) { static double mgt_uptime_t0 = 0; (void)e; (void)what; AN(VSC_C_mgt); if (mgt_uptime_t0 == 0) mgt_uptime_t0 = VTIM_real(); 
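	/*
	 * The corresponding event is registered with a 1.0 second timeout
	 * (see main() below), so this callback runs about once per second
	 * and exports whole seconds of manager uptime via the VSC counter.
	 */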
VSC_C_mgt->uptime = (uint64_t)(VTIM_real() - mgt_uptime_t0); return (0); } /*--------------------------------------------------------------------*/ static int v_matchproto_(mgt_cli_close_f) mgt_I_close(void *priv) { (void)priv; fprintf(stderr, "END of -I file processing\n"); I_fd = -1; return (0); } /*--------------------------------------------------------------------*/ struct f_arg { unsigned magic; #define F_ARG_MAGIC 0x840649a8 char *farg; char *src; }; static struct f_arg * mgt_f_read(const char *fn) { struct f_arg *fa; char *f, *fnp; ALLOC_OBJ(fa, F_ARG_MAGIC); AN(fa); REPLACE(fa->farg, fn); VFIL_setpath(&vcl_path, mgt_vcl_path); if (VFIL_searchpath(vcl_path, NULL, &f, fn, &fnp) || f == NULL) { ARGV_ERR("Cannot read -f file '%s' (%s)\n", fnp != NULL ? fnp : fn, VAS_errtxt(errno)); } free(fa->farg); fa->farg = fnp; fa->src = f; return (fa); } static void mgt_b_conv(const char *b_arg) { struct f_arg *fa; struct vsb *vsb; ALLOC_OBJ(fa, F_ARG_MAGIC); AN(fa); REPLACE(fa->farg, "<-b argument>"); vsb = VSB_new_auto(); AN(vsb); VSB_cat(vsb, "vcl 4.1;\n"); VSB_cat(vsb, "backend default "); if (!strcasecmp(b_arg, "none")) VSB_cat(vsb, "none;\n"); else if (VUS_is(b_arg)) VSB_printf(vsb, "{\n .path = \"%s\";\n}\n", b_arg); else VSB_printf(vsb, "{\n .host = \"%s\";\n}\n", b_arg); AZ(VSB_finish(vsb)); fa->src = strdup(VSB_data(vsb)); AN(fa->src); VSB_destroy(&vsb); AZ(arg_list_count("f")); arg_list_add('f', "")->priv = fa; } static int mgt_process_f_arg(struct cli *cli, unsigned C_flag, void **fap) { int retval = 0; struct f_arg *fa; const char *name = NULL; TAKE_OBJ_NOTNULL(fa, fap, F_ARG_MAGIC); if (arg_list_count("f") == 1) name = "boot"; mgt_vcl_startup(cli, fa->src, name, fa->farg, C_flag); if (C_flag) { if (cli->result != CLIS_OK && cli->result != CLIS_TRUNCATED) retval = 2; AZ(VSB_finish(cli->sb)); fprintf(stderr, "%s\n", VSB_data(cli->sb)); VSB_clear(cli->sb); } else { cli_check(cli); } free(fa->farg); free(fa->src); FREE_OBJ(fa); return (retval); } static const char * create_bogo_n_arg(void) { struct vsb *vsb; char *p; vsb = VSB_new_auto(); AN(vsb); if (getenv("TMPDIR") != NULL) VSB_printf(vsb, "%s", getenv("TMPDIR")); else VSB_cat(vsb, "/tmp"); VSB_cat(vsb, "/varnishd_C_XXXXXXX"); AZ(VSB_finish(vsb)); p = strdup(VSB_data(vsb)); AN(p); VSB_destroy(&vsb); AN(mkdtemp(p)); AZ(chmod(p, 0750)); return (p); } static struct vpf_fh * create_pid_file(pid_t *ppid, const char *fmt, ...) 
{ struct vsb *vsb; va_list ap; struct vpf_fh *pfh; va_start(ap, fmt); vsb = VSB_new_auto(); AN(vsb); VSB_vprintf(vsb, fmt, ap); AZ(VSB_finish(vsb)); VJ_master(JAIL_MASTER_FILE); pfh = VPF_Open(VSB_data(vsb), 0644, ppid); if (pfh == NULL && errno == EEXIST) ARGV_ERR( "Varnishd is already running (pid=%jd) (pidfile=%s)\n", (intmax_t)*ppid, VSB_data(vsb)); if (pfh == NULL) ARGV_ERR("Could not open pid-file (%s): %s\n", VSB_data(vsb), VAS_errtxt(errno)); VJ_master(JAIL_MASTER_LOW); VSB_destroy(&vsb); va_end(ap); return (pfh); } int main(int argc, char * const *argv) { int o, eric_fd = -1; unsigned C_flag = 0; unsigned F_flag = 0; const char *b_arg = NULL; const char *i_arg = NULL; const char *j_arg = NULL; const char *h_arg = "critbit"; const char *n_arg = NULL; const char *S_arg = NULL; const char *s_arg = "default,100m"; const char *W_arg = NULL; const char *c; char *p; struct cli cli[1]; const char *err; unsigned u; struct sigaction sac; struct vev *e; pid_t pid; struct arg_list *alp; int first_arg = 1; if (argc == 2 && !strcmp(argv[1], "--optstring")) { printf("%s\n", opt_spec); exit(0); } heritage.argc = argc; heritage.argv = argv; setbuf(stdout, NULL); setbuf(stderr, NULL); mgt_tests(); mgt_initialize(cli); for (; (o = getopt(argc, argv, opt_spec)) != -1; first_arg = 0) { switch (o) { case 'V': if (!first_arg) ARGV_ERR("-V must be the first argument\n"); if (argc != 2) ARGV_ERR("Too many arguments for -V\n"); VCS_Message("varnishd"); exit(0); case 'x': if (!first_arg) ARGV_ERR("-x must be the first argument\n"); if (argc != 3) ARGV_ERR("Too many arguments for -x\n"); mgt_x_arg(optarg); exit(0); case 'b': b_arg = optarg; break; case 'C': C_flag = 1; break; case 'd': d_flag++; break; case 'F': F_flag = 1; break; case 'j': j_arg = optarg; break; case 'h': h_arg = optarg; break; case 'i': i_arg = optarg; break; case 'l': arg_list_add('p', "vsl_space")->priv = optarg; break; case 'n': n_arg = optarg; break; case 'S': S_arg = optarg; break; case 't': arg_list_add('p', "default_ttl")->priv = optarg; break; case 'W': W_arg = optarg; break; case 'a': case 'E': case 'f': case 'I': case 'p': case 'P': case 'M': case 'r': case 's': case 'T': (void)arg_list_add(o, optarg); break; default: usage(); exit(2); } } if (argc != optind) ARGV_ERR("Too many arguments (%s...)\n", argv[optind]); if (b_arg != NULL && arg_list_count("f")) ARGV_ERR("Only one of -b or -f can be specified\n"); if (d_flag && F_flag) ARGV_ERR("Only one of -d or -F can be specified\n"); if (C_flag && b_arg == NULL && !arg_list_count("f")) ARGV_ERR("-C needs either -b or -f \n"); if (d_flag && C_flag) ARGV_ERR("-d makes no sense with -C\n"); if (F_flag && C_flag) ARGV_ERR("-F makes no sense with -C\n"); if (!d_flag && b_arg == NULL && !arg_list_count("f")) ARGV_ERR("Neither -b nor -f given. (use -f '' to override)\n"); if (arg_list_count("I") > 1) ARGV_ERR("\tOnly one -I allowed\n"); if (d_flag || F_flag) complain_to_stderr = 1; if (!arg_list_count("T")) (void)arg_list_add('T', "localhost:0"); /* * Start out by closing all unwanted file descriptors we might * have inherited from sloppy process control daemons. 
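	 * VSUB_closefrom() below closes every descriptor above stderr;
	 * MCH_TrackHighFd() then keeps track of the highest descriptor the
	 * manager itself still has open (stderr now, the daemonization pipe
	 * shortly after).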
*/ VSUB_closefrom(STDERR_FILENO + 1); MCH_TrackHighFd(STDERR_FILENO); /* * Have Eric Daemonize us if need be */ if (!C_flag && !d_flag && !F_flag) { eric_fd = mgt_eric(); MCH_TrackHighFd(eric_fd); heritage.mgt_pid = getpid(); } VRND_SeedAll(); vident = mgt_BuildVident(); /* Various initializations */ VTAILQ_INIT(&heritage.socks); mgt_evb = VEV_New(); AN(mgt_evb); /* Initialize transport protocols */ XPORT_Init(); VJ_Init(j_arg); /* Initialize the bogo-IP VSA */ VSA_Init(); if (b_arg != NULL) mgt_b_conv(b_arg); /* Process delayed arguments */ VTAILQ_FOREACH(alp, &arglist, list) { switch(alp->arg[0]) { case 'a': MAC_Arg(alp->val); break; case 'f': if (*alp->val != '\0') alp->priv = mgt_f_read(alp->val); break; case 'E': vext_argument(alp->val); break; case 'I': VJ_master(JAIL_MASTER_FILE); I_fd = open(alp->val, O_RDONLY); if (I_fd < 0) ARGV_ERR("\tCan't open %s: %s\n", alp->val, VAS_errtxt(errno)); VJ_master(JAIL_MASTER_LOW); break; case 'p': if (alp->priv) { MCF_ParamSet(cli, alp->val, alp->priv); } else { p = strchr(alp->val, '='); if (p == NULL) ARGV_ERR("\t-p lacks '='\n"); AN(p); *p++ = '\0'; MCF_ParamSet(cli, alp->val, p); } break; case 'r': MCF_ParamProtect(cli, alp->val); break; case 's': STV_Config(alp->val); break; default: break; } cli_check(cli); } VCLS_SetLimit(mgt_cls, &mgt_param.cli_limit); assert(d_flag == 0 || F_flag == 0); if (C_flag && n_arg == NULL) n_arg = create_bogo_n_arg(); if (S_arg != NULL && !strcmp(S_arg, "none")) { fprintf(stderr, "Warning: CLI authentication disabled.\n"); } else if (S_arg != NULL) { VJ_master(JAIL_MASTER_FILE); o = open(S_arg, O_RDONLY, 0); if (o < 0) ARGV_ERR("Cannot open -S file (%s): %s\n", S_arg, VAS_errtxt(errno)); closefd(&o); VJ_master(JAIL_MASTER_LOW); } workdir = VIN_n_Arg(n_arg); AN(workdir); if (i_arg == NULL || *i_arg == '\0') i_arg = mgt_HostName(); else for (c = i_arg; *c != '\0'; c++) { if (!vct_istchar(*c)) ARGV_ERR("Invalid character '%c' for -i\n", *c); } heritage.identity = i_arg; mgt_ProcTitle("Mgt"); openlog("varnishd", LOG_PID, LOG_LOCAL0); if (VJ_make_workdir(workdir)) ARGV_ERR("Cannot create working directory (%s): %s\n", workdir, VAS_errtxt(errno)); VJ_master(JAIL_MASTER_SYSTEM); AZ(system("rm -rf vmod_cache")); AZ(system("rm -rf vext_cache")); VJ_master(JAIL_MASTER_LOW); if (VJ_make_subdir("vmod_cache", "VMOD cache", NULL)) { ARGV_ERR( "Cannot create vmod directory (%s/vmod_cache): %s\n", workdir, VAS_errtxt(errno)); } if (arg_list_count("E") && VJ_make_subdir("vext_cache", "VMOD cache", NULL)) { ARGV_ERR( "Cannot create vmod directory (%s/vext_cache): %s\n", workdir, VAS_errtxt(errno)); } if (C_flag) AZ(atexit(mgt_Cflag_atexit)); /* If no -s argument specified, process default -s argument */ if (!arg_list_count("s")) STV_Config(s_arg); /* Configure CLI and Transient storage, if user did not */ STV_Config_Final(); mgt_vcl_init(); vext_copyin(vident); u = 0; VTAILQ_FOREACH(alp, &arglist, list) { if (!strcmp(alp->arg, "f") && alp->priv != NULL) u |= mgt_process_f_arg(cli, C_flag, &alp->priv); } if (C_flag) exit(u); VTAILQ_FOREACH(alp, &arglist, list) { if (!strcmp(alp->arg, "P")) alp->priv = create_pid_file(&pid, "%s", alp->val); } /* Implict -P argument */ alp = arg_list_add('P', NULL); alp->priv = create_pid_file(&pid, "%s/_.pid", workdir); if (VTAILQ_EMPTY(&heritage.socks)) MAC_Arg(":80\0"); // XXX: extra NUL for FlexeLint assert(!VTAILQ_EMPTY(&heritage.socks)); HSH_config(h_arg); Wait_config(W_arg); mgt_SHM_Init(); mgt_SHM_static_alloc(i_arg, strlen(i_arg) + 1L, "Arg", "-i"); VSC_C_mgt = VSC_mgt_New(NULL, NULL, ""); 
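	/*
	 * Second pass over the delayed arguments: -M connects the reverse
	 * CLI, -T opens CLI listen sockets (unless set to "none"), and the
	 * pid files created for -P above are written out.
	 */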
VTAILQ_FOREACH(alp, &arglist, list) { if (!strcmp(alp->arg, "M")) mgt_cli_master(alp->val); else if (!strcmp(alp->arg, "T") && strcmp(alp->val, "none")) mgt_cli_telnet(alp->val); else if (!strcmp(alp->arg, "P")) VPF_Write(alp->priv); } AZ(VSB_finish(vident)); if (S_arg == NULL) S_arg = make_secret(workdir); AN(S_arg); MGT_Complain(C_DEBUG, "Version: %s", VCS_String("V")); MGT_Complain(C_DEBUG, "Platform: %s", VSB_data(vident) + 1); if (d_flag) mgt_cli_setup(0, 1, 1, "debug", mgt_stdin_close, NULL); if (strcmp(S_arg, "none")) mgt_cli_secret(S_arg); memset(&sac, 0, sizeof sac); sac.sa_handler = SIG_IGN; sac.sa_flags = SA_RESTART; AZ(sigaction(SIGPIPE, &sac, NULL)); AZ(sigaction(SIGHUP, &sac, NULL)); MCH_Init(); if (I_fd >= 0) { fprintf(stderr, "BEGIN of -I file processing\n"); /* We must dup stderr, because VCLS closes the output fd */ mgt_cli_setup(I_fd, dup(2), 1, "-I file", mgt_I_close, stderr); while (I_fd >= 0) { o = VEV_Once(mgt_evb); if (o != 1) MGT_Complain(C_ERR, "VEV_Once() = %d", o); } } assert(I_fd == -1); err = mgt_has_vcl(); if (!d_flag && err != NULL && !arg_list_count("f")) MGT_Complain(C_ERR, "%s", err); if (err == NULL && !d_flag) u = MCH_Start_Child(); else u = 0; if (eric_fd >= 0) mgt_eric_im_done(eric_fd, u); if (u) exit(u); /* Failure is no longer an option */ if (F_flag) VFIL_null_fd(STDIN_FILENO); e = VEV_Alloc(); AN(e); e->callback = mgt_uptime; e->timeout = 1.0; e->name = "mgt_uptime"; AZ(VEV_Start(mgt_evb, e)); e = VEV_Alloc(); AN(e); e->sig = SIGTERM; e->callback = mgt_sigint; e->name = "SIGTERM"; AZ(VEV_Start(mgt_evb, e)); e = VEV_Alloc(); AN(e); e->sig = SIGINT; e->callback = mgt_sigint; e->name = "SIGINT"; AZ(VEV_Start(mgt_evb, e)); o = VEV_Loop(mgt_evb); if (o != 0 && o != -42) MGT_Complain(C_ERR, "VEV_Loop() = %d", o); MGT_Complain(C_INFO, "manager stopping child"); MCH_Stop_Child(); MGT_Complain(C_INFO, "manager dies"); mgt_cli_close_all(); VEV_Destroy(&mgt_evb); VJ_master(JAIL_MASTER_SYSTEM); /*lint -e(730)*/ vext_cleanup(! MGT_DO_DEBUG(DBG_VMOD_SO_KEEP)); (void)rmdir("vext_cache"); VJ_master(JAIL_MASTER_LOW); VTAILQ_FOREACH(alp, &arglist, list) { if (!strcmp(alp->arg, "P")) VPF_Remove(alp->priv); } exit(exit_status); } varnish-7.5.0/bin/varnishd/mgt/mgt_param.c000066400000000000000000000543131457605730600205100ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include #include #include #include #include #include #include "mgt/mgt.h" #include "common/heritage.h" #include "mgt/mgt_param.h" #include "vav.h" #include "vcli_serve.h" struct plist { unsigned magic; #define PLIST_MAGIC 0xbfc3ea16 VTAILQ_ENTRY(plist) list; struct parspec *spec; }; static VTAILQ_HEAD(, plist) phead = VTAILQ_HEAD_INITIALIZER(phead); struct params mgt_param; static const int margin1 = 8; static int margin2 = 0; static const int wrap_at = 72; static const int tab0 = 3; /*--------------------------------------------------------------------*/ static const char TYPE_TIMEOUT_TEXT[] = "\n\n" "NB: This parameter can be disabled with the value \"never\"."; static const char OBJ_STICKY_TEXT[] = "\n\n" "NB: This parameter is evaluated only when objects are created. " "To change it for all objects, restart or ban everything."; static const char DELAYED_EFFECT_TEXT[] = "\n\n" "NB: This parameter may take quite some time to take (full) effect."; static const char MUST_RESTART_TEXT[] = "\n\n" "NB: This parameter will not take any effect until the " "child process has been restarted."; static const char MUST_RELOAD_TEXT[] = "\n\n" "NB: This parameter will not take any effect until the " "VCL programs have been reloaded."; static const char EXPERIMENTAL_TEXT[] = "\n\n" "NB: We do not know yet if it is a good idea to change " "this parameter, or if the default value is even sensible. " "Caution is advised, and feedback is most welcome."; static const char WIZARD_TEXT[] = "\n\n" "NB: Do not change this parameter, unless a developer tell " "you to do so."; static const char PROTECTED_TEXT[] = "\n\n" "NB: This parameter is protected and can not be changed."; static const char ONLY_ROOT_TEXT[] = "\n\n" "NB: This parameter only works if varnishd is run as root."; static const char NOT_IMPLEMENTED_TEXT[] = "\n\n" "NB: This parameter depends on a feature which is not available" " on this platform."; static const char PLATFORM_DEPENDENT_TEXT[] = "\n\n" "NB: This parameter depends on a feature which is not available" " on all platforms."; static const char BUILD_OPTIONS_TEXT[] = "\n\n" "NB: The actual default value for this parameter depends on the" " Varnish build environment and options."; /*--------------------------------------------------------------------*/ static struct parspec * mcf_findpar(const char *name) { struct plist *pl; AN(name); VTAILQ_FOREACH(pl, &phead, list) if (!strcmp(pl->spec->name, name)) return (pl->spec); return (NULL); } static void mcf_addpar(struct parspec *ps) { struct plist *pl, *pl2; int i; ALLOC_OBJ(pl, PLIST_MAGIC); AN(pl); pl->spec = ps; VTAILQ_FOREACH(pl2, &phead, list) { i = strcmp(pl2->spec->name, pl->spec->name); if (i == 0) { fprintf(stderr, "Duplicate param: %s\n", ps->name); exit(4); } else if (i > 0) { VTAILQ_INSERT_BEFORE(pl2, pl, list); return; } } VTAILQ_INSERT_TAIL(&phead, pl, list); } /*-------------------------------------------------------------------- * Wrap the text nicely. 
* Lines are allowed to contain two TABS and we render that as a table * taking the width of the first column into account. */ static void mcf_wrap_line(struct cli *cli, const char *b, const char *e, int tabs, int m0) { int n, hadtabs = 0; const char *w; n = m0; VCLI_Out(cli, "%*s", n, ""); while (b < e) { if (!isspace(*b)) { VCLI_Out(cli, "%c", *b); b++; n++; } else if (*b == '\t') { assert(tabs); assert(hadtabs < 2); do { VCLI_Out(cli, " "); n++; } while ((n % tabs) != (m0 + tab0) % tabs); b++; hadtabs++; } else { assert (*b == ' '); for (w = b + 1; w < e; w++) if (isspace(*w)) break; if (n + (w - b) < wrap_at) { VCLI_Out(cli, "%.*s", (int)(w - b), b); n += (w - b); b = w; } else { assert(hadtabs == 0 || hadtabs == 2); VCLI_Out(cli, "\n"); mcf_wrap_line(cli, b + 1, e, 0, hadtabs ? m0 + tab0 + tabs : m0); return; } } } assert(b == e); } static void mcf_wrap(struct cli *cli, const char *text) { const char *p, *q, *r; int tw = 0; if (strchr(text, '\t') != NULL) { for (p = text; *p != '\0'; ) { q = strstr(p, "\n\t"); if (q == NULL) break; q += 2; r = strchr(q, '\t'); if (r == NULL) { fprintf(stderr, "LINE with just one TAB: <%s>\n", text); exit(4); } if (r - q > tw) tw = r - q; p = q; } tw += 2; if (tw < 20) tw = 20; } for (p = text; *p != '\0'; ) { if (*p == '\n') { VCLI_Out(cli, "\n"); p++; continue; } q = strchr(p, '\n'); if (q == NULL) q = strchr(p, '\0'); mcf_wrap_line(cli, p, q, tw, margin1); p = q; } } /*--------------------------------------------------------------------*/ static void v_matchproto_(cli_func_t) mcf_param_show(struct cli *cli, const char * const *av, void *priv) { struct plist *pl; const struct parspec *pp, *pa; int n, lfmt = 0, chg = 0; struct vsb *vsb; const char *show = NULL; (void)priv; for (n = 2; av[n] != NULL; n++) { if (strcmp(av[n], "-l") == 0) { lfmt = 1; continue; } if (strcmp(av[n], "changed") == 0) { chg = 1; continue; } if (show != NULL) { VCLI_SetResult(cli, CLIS_TOOMANY); VCLI_Out(cli, "Too many parameters"); return; } show = av[n]; lfmt = 1; } vsb = VSB_new_auto(); AN(vsb); n = 0; VTAILQ_FOREACH(pl, &phead, list) { pp = pl->spec; if (lfmt && show != NULL && strcmp(pp->name, show)) continue; if (pp->func == tweak_alias && (show == NULL || strcmp(pp->name, show))) continue; n++; VSB_clear(vsb); if (pp->func(vsb, pp, NULL)) VCLI_SetResult(cli, CLIS_PARAM); AZ(VSB_finish(vsb)); if (chg && pp->def != NULL && !strcmp(pp->def, VSB_data(vsb))) continue; if (pp->flags & NOT_IMPLEMENTED) { if (lfmt) { VCLI_Out(cli, "%s\n", pp->name); VCLI_Out(cli, "%-*sNot available", margin1, " "); } else { VCLI_Out(cli, "%-*s-", margin2, pp->name); } } else { if (lfmt) { VCLI_Out(cli, "%s\n", pp->name); VCLI_Out(cli, "%-*sValue is: ", margin1, " "); } else { VCLI_Out(cli, "%-*s", margin2, pp->name); } VCLI_Out(cli, "%s", VSB_data(vsb)); if (pp->units != NULL && *pp->units != '\0') VCLI_Out(cli, " [%s]", pp->units); if (pp->def != NULL && !strcmp(pp->def, VSB_data(vsb))) VCLI_Out(cli, " (default)"); } VCLI_Out(cli, "\n"); if (lfmt && pp->func == tweak_alias) { pa = TRUST_ME(pp->priv); VCLI_Out(cli, "%-*sAlias of: %s\n", margin1, " ", pa->name); } if (lfmt && pp->flags & NOT_IMPLEMENTED) { VCLI_Out(cli, "\n"); mcf_wrap(cli, NOT_IMPLEMENTED_TEXT); VCLI_Out(cli, "\n\n"); } else if (lfmt) { if (pp->def != NULL && strcmp(pp->def, VSB_data(vsb))) VCLI_Out(cli, "%-*sDefault is: %s\n", margin1, "", pp->def); if (pp->min != NULL) VCLI_Out(cli, "%-*sMinimum is: %s\n", margin1, "", pp->min); if (pp->max != NULL) VCLI_Out(cli, "%-*sMaximum is: %s\n", margin1, "", pp->max); VCLI_Out(cli, 
"\n"); mcf_wrap(cli, pp->descr); if (pp->func == tweak_timeout) mcf_wrap(cli, TYPE_TIMEOUT_TEXT); if (pp->flags & OBJ_STICKY) mcf_wrap(cli, OBJ_STICKY_TEXT); if (pp->flags & DELAYED_EFFECT) mcf_wrap(cli, DELAYED_EFFECT_TEXT); if (pp->flags & EXPERIMENTAL) mcf_wrap(cli, EXPERIMENTAL_TEXT); if (pp->flags & MUST_RELOAD) mcf_wrap(cli, MUST_RELOAD_TEXT); if (pp->flags & MUST_RESTART) mcf_wrap(cli, MUST_RESTART_TEXT); if (pp->flags & WIZARD) mcf_wrap(cli, WIZARD_TEXT); if (pp->flags & PROTECTED) mcf_wrap(cli, PROTECTED_TEXT); if (pp->flags & ONLY_ROOT) mcf_wrap(cli, ONLY_ROOT_TEXT); if (pp->flags & BUILD_OPTIONS) mcf_wrap(cli, BUILD_OPTIONS_TEXT); VCLI_Out(cli, "\n\n"); } } if (show != NULL && n == 0) { VCLI_SetResult(cli, CLIS_PARAM); VCLI_Out(cli, "Unknown parameter \"%s\".", show); } VSB_destroy(&vsb); } static inline void mcf_json_key_valstr(struct cli *cli, const char *key, const char *val) { VCLI_Out(cli, "\"%s\": ", key); VCLI_JSON_str(cli, val); VCLI_Out(cli, ",\n"); } static void v_matchproto_(cli_func_t) mcf_param_show_json(struct cli *cli, const char * const *av, void *priv) { int n, comma = 0, chg = 0; struct plist *pl; const struct parspec *pp, *pa; struct vsb *vsb, *def; const char *show = NULL, *sep; (void)priv; for (int i = 2; av[i] != NULL; i++) { if (strcmp(av[i], "-l") == 0) { VCLI_SetResult(cli, CLIS_PARAM); VCLI_Out(cli, "-l not permitted with param.show -j"); return; } if (strcmp(av[i], "changed") == 0) { chg = 1; continue; } if (strcmp(av[i], "-j") == 0) continue; if (show != NULL) { VCLI_SetResult(cli, CLIS_TOOMANY); VCLI_Out(cli, "Too many parameters"); return; } show = av[i]; } vsb = VSB_new_auto(); AN(vsb); def = VSB_new_auto(); AN(def); n = 0; VCLI_JSON_begin(cli, 2, av); VCLI_Out(cli, ",\n"); VTAILQ_FOREACH(pl, &phead, list) { pp = pl->spec; if (show != NULL && strcmp(pp->name, show) != 0) continue; if (pp->func == tweak_alias && (show == NULL || strcmp(pp->name, show))) continue; n++; VSB_clear(vsb); if (pp->func(vsb, pp, JSON_FMT)) VCLI_SetResult(cli, CLIS_PARAM); AZ(VSB_finish(vsb)); VSB_clear(def); if (pp->func(def, pp, NULL)) VCLI_SetResult(cli, CLIS_PARAM); AZ(VSB_finish(def)); if (chg && pp->def != NULL && !strcmp(pp->def, VSB_data(def))) continue; VCLI_Out(cli, "%s", comma ? 
",\n" : ""); comma++; VCLI_Out(cli, "{\n"); VSB_indent(cli->sb, 2); mcf_json_key_valstr(cli, "name", pp->name); if (pp->func == tweak_alias) { pa = TRUST_ME(pp->priv); mcf_json_key_valstr(cli, "alias", pa->name); } if (pp->flags & NOT_IMPLEMENTED) { VCLI_Out(cli, "\"implemented\": false\n"); VSB_indent(cli->sb, -2); VCLI_Out(cli, "}"); continue; } VCLI_Out(cli, "\"implemented\": true,\n"); VCLI_Out(cli, "\"value\": %s,\n", VSB_data(vsb)); if (pp->units != NULL && *pp->units != '\0') mcf_json_key_valstr(cli, "units", pp->units); if (pp->def != NULL) mcf_json_key_valstr(cli, "default", pp->def); if (pp->min != NULL) mcf_json_key_valstr(cli, "minimum", pp->min); if (pp->max != NULL) mcf_json_key_valstr(cli, "maximum", pp->max); mcf_json_key_valstr(cli, "description", pp->descr); VCLI_Out(cli, "\"flags\": ["); VSB_indent(cli->sb, 2); sep = ""; #define flag_out(flag, string) do { \ if (pp->flags & flag) { \ VCLI_Out(cli, "%s\n", sep); \ VCLI_Out(cli, "\"%s\"", #string); \ sep = ","; \ } \ } while(0) flag_out(OBJ_STICKY, obj_sticky); flag_out(DELAYED_EFFECT, delayed_effect); flag_out(EXPERIMENTAL, experimental); flag_out(MUST_RELOAD, must_reload); flag_out(MUST_RESTART, must_restart); flag_out(WIZARD, wizard); flag_out(PROTECTED, protected); flag_out(ONLY_ROOT, only_root); flag_out(BUILD_OPTIONS, build_options); #undef flag_out if (pp->flags) VCLI_Out(cli, "\n"); VSB_indent(cli->sb, -2); VCLI_Out(cli, "]\n"); VSB_indent(cli->sb, -2); VCLI_Out(cli, "}"); } VCLI_JSON_end(cli); if (show != NULL && n == 0) { VSB_clear(cli->sb); VCLI_SetResult(cli, CLIS_PARAM); VCLI_Out(cli, "Unknown parameter \"%s\".", show); } VSB_destroy(&vsb); VSB_destroy(&def); } /*-------------------------------------------------------------------- * Mark parameters as protected */ void MCF_ParamProtect(struct cli *cli, const char *args) { char **av; struct parspec *pp; int i; av = VAV_Parse(args, NULL, ARGV_COMMA); if (av[0] != NULL) { VCLI_Out(cli, "Parse error: %s", av[0]); VCLI_SetResult(cli, CLIS_PARAM); VAV_Free(av); return; } for (i = 1; av[i] != NULL; i++) { pp = mcf_findpar(av[i]); if (pp == NULL) { VCLI_Out(cli, "Unknown parameter %s", av[i]); VCLI_SetResult(cli, CLIS_PARAM); VAV_Free(av); return; } pp->flags |= PROTECTED; } VAV_Free(av); } /*--------------------------------------------------------------------*/ void MCF_ParamSet(struct cli *cli, const char *param, const char *val) { const struct parspec *pp; pp = mcf_findpar(param); if (pp == NULL) { VCLI_SetResult(cli, CLIS_PARAM); VCLI_Out(cli, "Unknown parameter \"%s\".", param); return; } if (pp->flags & NOT_IMPLEMENTED) { VCLI_SetResult(cli, CLIS_CANT); VCLI_Out(cli, "parameter \"%s\" is not available on this platform.", param ); return; } if (pp->flags & PROTECTED) { VCLI_SetResult(cli, CLIS_AUTH); VCLI_Out(cli, "parameter \"%s\" is protected.", param); return; } if (!val) val = pp->def; if (pp->func(cli->sb, pp, val)) VCLI_SetResult(cli, CLIS_PARAM); if (cli->result == CLIS_OK && heritage.param != NULL) *heritage.param = mgt_param; if (cli->result != CLIS_OK) { VCLI_Out(cli, "\n(attempting to set param '%s' to '%s')", pp->name, val); } else if (MCH_Running() && pp->flags & MUST_RESTART) { VCLI_Out(cli, "\nChange will take effect when child is restarted"); } else if (pp->flags & MUST_RELOAD) { VCLI_Out(cli, "\nChange will take effect when VCL script is reloaded"); } } /*--------------------------------------------------------------------*/ static void v_matchproto_(cli_func_t) mcf_param_set(struct cli *cli, const char * const *av, void *priv) { (void)priv; 
MCF_ParamSet(cli, av[2], av[3]); } static void v_matchproto_(cli_func_t) mcf_param_set_json(struct cli *cli, const char * const *av, void *priv) { const char *const avs[] = { av[0], av[1], av[2], av[3], NULL }; MCF_ParamSet(cli, av[3], av[4]); if (cli->result == CLIS_OK) mcf_param_show_json(cli, avs, priv); } /*--------------------------------------------------------------------*/ static void v_matchproto_(cli_func_t) mcf_param_reset(struct cli *cli, const char * const *av, void *priv) { (void)priv; MCF_ParamSet(cli, av[2], NULL); } /*-------------------------------------------------------------------- * Initialize parameters and sort by name. */ static struct parspec mgt_parspec[] = { #define PARAM_ALL #define PARAM_PRE { #define PARAM(typ, fld, nm, ...) #nm, __VA_ARGS__ #define PARAM_POST }, #include "tbl/params.h" { NULL, NULL, NULL } }; static void mcf_init_params(void) { struct parspec *pp; const char *s; for (pp = mgt_parspec; pp->name != NULL; pp++) { AN(pp->func); s = strchr(pp->descr, '\0'); if (isspace(s[-1])) { fprintf(stderr, "Param->descr has trailing space: %s\n", pp->name); exit(4); } mcf_addpar(pp); margin2 = vmax_t(int, margin2, strlen(pp->name) + 1); } } /*-------------------------------------------------------------------- * Wash a min/max/default value */ static void mcf_dyn_vsb(enum mcf_which_e which, struct parspec *pp, struct vsb *vsb) { switch (which) { case MCF_DEFAULT: REPLACE(pp->dyn_def, VSB_data(vsb)); pp->def = pp->dyn_def; break; case MCF_MINIMUM: REPLACE(pp->dyn_min, VSB_data(vsb)); pp->min = pp->dyn_min; break; case MCF_MAXIMUM: REPLACE(pp->dyn_max, VSB_data(vsb)); pp->max = pp->dyn_max; break; default: WRONG("bad 'which'"); } } static void mcf_wash_param(struct cli *cli, struct parspec *pp, enum mcf_which_e which, const char *name, struct vsb *vsb) { const char *val; int err; switch (which) { case MCF_DEFAULT: val = pp->def; break; case MCF_MINIMUM: val = pp->min; break; case MCF_MAXIMUM: val = pp->max; break; default: WRONG("bad 'which'"); } AN(val); if (pp->func == tweak_alias) { assert(which == MCF_DEFAULT); pp->priv = mcf_findpar(pp->def); pp->def = NULL; return; } VSB_clear(vsb); VSB_printf(vsb, "FAILED to set %s for param %s: %s\n", name, pp->name, val); err = pp->func(vsb, pp, val); AZ(VSB_finish(vsb)); if (err) { VCLI_Out(cli, "%s\n", VSB_data(vsb)); VCLI_SetResult(cli, CLIS_CANT); return; } VSB_clear(vsb); err = pp->func(vsb, pp, NULL); AZ(err); AZ(VSB_finish(vsb)); if (strcmp(val, VSB_data(vsb))) mcf_dyn_vsb(which, pp, vsb); } /*--------------------------------------------------------------------*/ static struct cli_proto cli_params[] = { { CLICMD_PARAM_SHOW, "", mcf_param_show, mcf_param_show_json }, { CLICMD_PARAM_SET, "", mcf_param_set, mcf_param_set_json }, { CLICMD_PARAM_RESET, "", mcf_param_reset, mcf_param_set_json }, { NULL } }; /*-------------------------------------------------------------------- * Configure the parameters */ void MCF_InitParams(struct cli *cli) { struct plist *pl; struct parspec *pp; struct vsb *vsb; ssize_t def, low; mcf_init_params(); MCF_TcpParams(); def = 80 * 1024; if (sizeof(void *) < 8) { /*lint !e506 !e774 */ /* * Adjust default parameters for 32 bit systems to conserve * VM space. * * Reflect changes in doc/sphinx/reference/varnishd.rst ! 
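	 *
	 * The same MCF_ParamConf() mechanism is used again below to derive
	 * the thread_pool_stack minimum (and clamp its default) from
	 * sysconf(_SC_THREAD_STACK_MIN).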
*/ MCF_ParamConf(MCF_DEFAULT, "workspace_client", "24k"); MCF_ParamConf(MCF_DEFAULT, "workspace_backend", "20k"); MCF_ParamConf(MCF_DEFAULT, "http_resp_size", "8k"); MCF_ParamConf(MCF_DEFAULT, "http_req_size", "12k"); MCF_ParamConf(MCF_DEFAULT, "gzip_buffer", "4k"); MCF_ParamConf(MCF_DEFAULT, "vsl_buffer", "4k"); MCF_ParamConf(MCF_MAXIMUM, "vsl_space", "1G"); def = 64 * 1024; } low = sysconf(_SC_THREAD_STACK_MIN); MCF_ParamConf(MCF_MINIMUM, "thread_pool_stack", "%jdb", (intmax_t)low); #if defined(ENABLE_SANITIZER) || defined(ENABLE_COVERAGE) def = 192 * 1024; #endif if (def < low) def = low; MCF_ParamConf(MCF_DEFAULT, "thread_pool_stack", "%jdb", (intmax_t)def); #if !defined(MAX_THREAD_POOLS) # define MAX_THREAD_POOLS 32 #endif MCF_ParamConf(MCF_MAXIMUM, "thread_pools", "%d", MAX_THREAD_POOLS); #if !defined(HAVE_ACCEPT_FILTERS) || defined(__linux) MCF_ParamConf(MCF_DEFAULT, "accept_filter", "off"); #endif VCLS_AddFunc(mgt_cls, MCF_AUTH, cli_params); vsb = VSB_new_auto(); AN(vsb); VTAILQ_FOREACH(pl, &phead, list) { pp = pl->spec; if (pp->flags & NOT_IMPLEMENTED) continue; if (pp->min != NULL) mcf_wash_param(cli, pp, MCF_MINIMUM, "minimum", vsb); if (pp->max != NULL) mcf_wash_param(cli, pp, MCF_MAXIMUM, "maximum", vsb); AN(pp->def); mcf_wash_param(cli, pp, MCF_DEFAULT, "default", vsb); } VSB_destroy(&vsb); AN(mgt_cc_cmd); REPLACE(mgt_cc_cmd_def, mgt_cc_cmd); } /*--------------------------------------------------------------------*/ void MCF_ParamConf(enum mcf_which_e which, const char * const param, const char *fmt, ...) { struct parspec *pp; struct vsb *vsb; va_list ap; pp = mcf_findpar(param); AN(pp); vsb = VSB_new_auto(); AN(vsb); va_start(ap, fmt); VSB_vprintf(vsb, fmt, ap); va_end(ap); AZ(VSB_finish(vsb)); mcf_dyn_vsb(which, pp, vsb); VSB_destroy(&vsb); } /*--------------------------------------------------------------------*/ void MCF_DumpRstParam(void) { struct plist *pl; const struct parspec *pp; const char *p, *q, *t1, *t2; unsigned flags; size_t z; printf("\n.. The following is the autogenerated " "output from varnishd -x parameter\n\n"); VTAILQ_FOREACH(pl, &phead, list) { pp = pl->spec; if (!strcmp("deprecated_dummy", pp->name)) continue; printf(".. 
_ref_param_%s:\n\n", pp->name); printf("%s\n", pp->name); for (z = 0; z < strlen(pp->name); z++) printf("~"); printf("\n"); if (pp->flags && pp->flags & PLATFORM_DEPENDENT) printf("\n%s\n\n", PLATFORM_DEPENDENT_TEXT); if (pp->flags && pp->flags & BUILD_OPTIONS) printf("\n%s\n\n", BUILD_OPTIONS_TEXT); if (pp->units != NULL && *pp->units != '\0') printf("\t* Units: %s\n", pp->units); #define MCF_DYN_REASON(lbl, nm) \ if (pp->dyn_ ## nm ## _reason != NULL) \ printf("\t* " #lbl ": %s\n", \ pp->dyn_ ## nm ## _reason); \ else if (pp->nm != NULL) \ printf("\t* " #lbl ": %s\n", pp->nm); MCF_DYN_REASON(Default, def); MCF_DYN_REASON(Minimum, min); MCF_DYN_REASON(Maximum, max); #undef MCF_DYN_REASON flags = pp->flags & ~DOCS_FLAGS; if (pp->func == tweak_timeout) flags |= TYPE_TIMEOUT; if (flags) { printf("\t* Flags: "); q = ""; if (flags & TYPE_TIMEOUT) { printf("%stimeout", q); q = ", "; } if (flags & DELAYED_EFFECT) { printf("%sdelayed", q); q = ", "; } if (flags & MUST_RESTART) { printf("%smust_restart", q); q = ", "; } if (flags & MUST_RELOAD) { printf("%smust_reload", q); q = ", "; } if (flags & EXPERIMENTAL) { printf("%sexperimental", q); q = ", "; } if (flags & WIZARD) { printf("%swizard", q); q = ", "; } if (flags & ONLY_ROOT) { printf("%sonly_root", q); q = ", "; } if (flags & OBJ_STICKY) { printf("%sobj_sticky", q); q = ", "; } printf("\n"); } printf("\n"); p = pp->descr; while (*p != '\0') { q = strchr(p, '\n'); if (q == NULL) q = strchr(p, '\0'); t1 = strchr(p, '\t'); if (t1 != NULL && t1 < q) { t2 = strchr(t1 + 1, '\t'); AN(t2); printf("\n\t*"); (void)fwrite(t1 + 1, (t2 - 1) - t1, 1, stdout); printf("*\n\t\t"); p = t2 + 1; } (void)fwrite(p, q - p, 1, stdout); p = q; if (*p == '\n') { printf("\n"); p++; } continue; } printf("\n\n"); } } varnish-7.5.0/bin/varnishd/mgt/mgt_param.h000066400000000000000000000055731457605730600205210ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ struct parspec; typedef int tweak_t(struct vsb *, const struct parspec *, const char *arg); /* Sentinel for the arg position of tweak_t to ask for JSON formatting. 
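 * Tweak functions compare the pointer itself, giving three cases, e.g.:
 *
 *	if (arg != NULL && arg != JSON_FMT)
 *		... parse and store the new value ...
 *	else if (arg == JSON_FMT)
 *		... render the current value as JSON ...
 *	else
 *		... render the current value as plain text ...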
*/ extern const char * const JSON_FMT; struct parspec { const char *name; tweak_t *func; volatile void *priv; const char *min; const char *max; const char *def; const char *units; const char *descr; unsigned flags; #define DELAYED_EFFECT (1<<0) #define EXPERIMENTAL (1<<1) #define MUST_RESTART (1<<2) #define MUST_RELOAD (1<<3) #define WIZARD (1<<4) #define PROTECTED (1<<5) #define OBJ_STICKY (1<<6) #define ONLY_ROOT (1<<7) #define NOT_IMPLEMENTED (1<<8) #define PLATFORM_DEPENDENT (1<<9) #define BUILD_OPTIONS (1<<10) #define TYPE_TIMEOUT (1<<11) #define DOCS_FLAGS (NOT_IMPLEMENTED|PLATFORM_DEPENDENT|BUILD_OPTIONS) const char *dyn_min_reason; const char *dyn_max_reason; const char *dyn_def_reason; char *dyn_min; char *dyn_max; char *dyn_def; }; tweak_t tweak_alias; tweak_t tweak_boolean; tweak_t tweak_bytes; tweak_t tweak_bytes_u; tweak_t tweak_double; tweak_t tweak_duration; tweak_t tweak_debug; tweak_t tweak_experimental; tweak_t tweak_feature; tweak_t tweak_poolparam; tweak_t tweak_storage; tweak_t tweak_string; tweak_t tweak_thread_pool_min; tweak_t tweak_thread_pool_max; tweak_t tweak_timeout; tweak_t tweak_uint; tweak_t tweak_vcc_feature; tweak_t tweak_vsl_buffer; tweak_t tweak_vsl_mask; tweak_t tweak_vsl_reclen; varnish-7.5.0/bin/varnishd/mgt/mgt_param_tcp.c000066400000000000000000000055351457605730600213600ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Parameters related to TCP keepalives are not universally available * as socket options, and probing for system-wide defaults more appropriate * than our own involves slightly too much grunt-work to be negligible * so we sequestrate that code here. 
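 *
 * tcp_keep_probes() below opens a throwaway listening socket, reads the
 * kernel's TCP_KEEPIDLE/TCP_KEEPCNT/TCP_KEEPINTVL (or TCP_KEEPALIVE)
 * values with getsockopt(), and installs them as defaults for the
 * tcp_keepalive_* parameters, falling back to 600/5/5 when a value
 * cannot be read.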
*/ #include "config.h" #include #include #include #include #include #include #include #include "mgt/mgt.h" #include "common/heritage.h" #include "vtcp.h" #if defined(HAVE_TCP_KEEP) || defined(HAVE_TCP_KEEPALIVE) static void tcp_probe(int sock, int nam, const char *param, unsigned def) { int i; socklen_t l; unsigned u; l = sizeof u; i = getsockopt(sock, IPPROTO_TCP, nam, &u, &l); if (i < 0 || u == 0) u = def; MCF_ParamConf(MCF_DEFAULT, param, "%u", u); } static void tcp_keep_probes(void) { const char *err; int s; s = VTCP_listen_on(":0", NULL, 10, &err); if (err != NULL) ARGV_ERR("Could not probe TCP keepalives: %s", err); assert(s > 0); #ifdef HAVE_TCP_KEEP tcp_probe(s, TCP_KEEPIDLE, "tcp_keepalive_time", 600); tcp_probe(s, TCP_KEEPCNT, "tcp_keepalive_probes", 5); tcp_probe(s, TCP_KEEPINTVL, "tcp_keepalive_intvl", 5); #else tcp_probe(s, TCP_KEEPALIVE, "tcp_keepalive_time", 600); #endif closefd(&s); } #endif void MCF_TcpParams(void) { #if defined(HAVE_TCP_KEEP) || defined(HAVE_TCP_KEEPALIVE) tcp_keep_probes(); #endif } varnish-7.5.0/bin/varnishd/mgt/mgt_param_tweak.c000066400000000000000000000471071457605730600217060ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * Functions for tweaking parameters * */ #include "config.h" #include #include #include #include #include #include "mgt/mgt.h" #include "mgt/mgt_param.h" #include "storage/storage.h" #include "vav.h" #include "vnum.h" #include "vsl_priv.h" const char * const JSON_FMT = (const char *)&JSON_FMT; /*-------------------------------------------------------------------- * Generic handling of double typed parameters */ typedef double parse_double_f(const char *, const char **); static double parse_decimal(const char *p, const char **err) { double v; v = SF_Parse_Decimal(&p, 0, err); if (errno == 0 && *p != '\0') { errno = EINVAL; *err = "Invalid number"; } return (v); } static int tweak_generic_double(struct vsb *vsb, const char *arg, const struct parspec *pp, parse_double_f parse, const char *fmt) { volatile double u, minv = VRT_DECIMAL_MIN, maxv = VRT_DECIMAL_MAX; volatile double *dest = pp->priv; const char *err; if (arg != NULL && arg != JSON_FMT) { if (pp->min != NULL) { minv = parse(pp->min, &err); if (errno) { VSB_printf(vsb, "Min: %s (%s)\n", err, pp->min); return (-1); } } if (pp->max != NULL) { maxv = parse(pp->max, &err); if (errno) { VSB_printf(vsb, "Max: %s (%s)\n", err, pp->max); return (-1); } } u = parse(arg, &err); if (errno) { VSB_printf(vsb, "%s (%s)\n", err, arg); return (-1); } if (u < minv) { VSB_printf(vsb, "Must be greater or equal to %s\n", pp->min); return (-1); } if (u > maxv) { VSB_printf(vsb, "Must be less than or equal to %s\n", pp->max); return (-1); } *dest = u; } else { VSB_printf(vsb, fmt, *dest); } return (0); } /*--------------------------------------------------------------------*/ static double parse_duration(const char *p, const char **err) { double v, r; v = SF_Parse_Decimal(&p, 0, err); if (*p == '\0') return (v); r = VNUM_duration_unit(v, p, NULL); if (isnan(r)) { errno = EINVAL; *err = "Invalid duration unit"; } return (r); } int v_matchproto_(tweak_t) tweak_timeout(struct vsb *vsb, const struct parspec *par, const char *arg) { volatile double *dest = par->priv; if (arg != NULL && !strcmp(arg, "never")) { *dest = INFINITY; return (0); } if (*dest == INFINITY && arg == NULL) { VSB_cat(vsb, "never"); return (0); } if (*dest == INFINITY && arg == JSON_FMT) { VSB_cat(vsb, "\"never\""); return (0); } return (tweak_generic_double(vsb, arg, par, parse_duration, "%.3f")); } int v_matchproto_(tweak_t) tweak_duration(struct vsb *vsb, const struct parspec *par, const char *arg) { return (tweak_generic_double(vsb, arg, par, parse_duration, "%.3f")); } /*--------------------------------------------------------------------*/ int v_matchproto_(tweak_t) tweak_double(struct vsb *vsb, const struct parspec *par, const char *arg) { return (tweak_generic_double(vsb, arg, par, parse_decimal, "%g")); } /*--------------------------------------------------------------------*/ static int parse_boolean(struct vsb *vsb, const char *arg) { if (!strcasecmp(arg, "off")) return (0); if (!strcasecmp(arg, "disable")) return (0); if (!strcasecmp(arg, "no")) return (0); if (!strcasecmp(arg, "false")) return (0); if (!strcasecmp(arg, "on")) return (1); if (!strcasecmp(arg, "enable")) return (1); if (!strcasecmp(arg, "yes")) return (1); if (!strcasecmp(arg, "true")) return (1); VSB_cat(vsb, "use \"on\" or \"off\"\n"); return (-1); } int v_matchproto_(tweak_t) tweak_boolean(struct vsb *vsb, const struct parspec *par, const char *arg) { volatile unsigned *dest; int val; dest = par->priv; if (arg != NULL && arg != JSON_FMT) { val = parse_boolean(vsb, arg); if (val < 0) return (-1); *dest = 
val; } else if (arg == JSON_FMT) { VSB_printf(vsb, "%s", *dest ? "true" : "false"); } else { VSB_printf(vsb, "%s", *dest ? "on" : "off"); } return (0); } /*--------------------------------------------------------------------*/ static int tweak_generic_uint(struct vsb *vsb, volatile unsigned *dest, const char *arg, const char *min, const char *max, const char *min_reason, const char *max_reason) { unsigned u, minv = 0, maxv = 0; char *p; if (arg != NULL && arg != JSON_FMT) { if (min != NULL) { p = NULL; minv = strtoul(min, &p, 0); if (*arg == '\0' || *p != '\0') { VSB_printf(vsb, "Illegal Min: %s\n", min); return (-1); } } if (max != NULL) { p = NULL; maxv = strtoul(max, &p, 0); if (*arg == '\0' || *p != '\0') { VSB_printf(vsb, "Illegal Max: %s\n", max); return (-1); } } p = NULL; if (!strcasecmp(arg, "unlimited")) u = UINT_MAX; else { u = strtoul(arg, &p, 0); if (*arg == '\0' || *p != '\0') { VSB_printf(vsb, "Not a number (%s)\n", arg); return (-1); } } if (min != NULL && u < minv) { VSB_printf(vsb, "Must be at least %s", min); if (min_reason != NULL) VSB_printf(vsb, " (%s)", min_reason); VSB_putc(vsb, '\n'); return (-1); } if (max != NULL && u > maxv) { VSB_printf(vsb, "Must be no more than %s", max); if (max_reason != NULL) VSB_printf(vsb, " (%s)", max_reason); VSB_putc(vsb, '\n'); return (-1); } *dest = u; } else if (*dest == UINT_MAX && arg != JSON_FMT) { VSB_cat(vsb, "unlimited"); } else { VSB_printf(vsb, "%u", *dest); } return (0); } /*--------------------------------------------------------------------*/ int v_matchproto_(tweak_t) tweak_uint(struct vsb *vsb, const struct parspec *par, const char *arg) { volatile unsigned *dest; dest = par->priv; return (tweak_generic_uint(vsb, dest, arg, par->min, par->max, par->dyn_min_reason, par->dyn_max_reason)); } /*--------------------------------------------------------------------*/ static void fmt_bytes(struct vsb *vsb, uintmax_t t) { const char *p; if (t == 0 || t & 0xff) { VSB_printf(vsb, "%jub", t); return; } for (p = "kMGTPEZY"; *p; p++) { if (t & 0x300) { VSB_printf(vsb, "%.2f%c", t / 1024.0, *p); return; } t /= 1024; if (t & 0x0ff) { VSB_printf(vsb, "%ju%c", t, *p); return; } } VSB_cat(vsb, "(bogus number)"); } static int tweak_generic_bytes(struct vsb *vsb, volatile ssize_t *dest, const char *arg, const char *min, const char *max) { uintmax_t r, rmin = 0, rmax = 0; const char *p; if (arg != NULL && arg != JSON_FMT) { if (min != NULL) { p = VNUM_2bytes(min, &rmin, 0); if (p != NULL) { VSB_printf(vsb, "Invalid min-val: %s\n", min); return (-1); } } if (max != NULL) { p = VNUM_2bytes(max, &rmax, 0); if (p != NULL) { VSB_printf(vsb, "Invalid max-val: %s\n", max); return (-1); } } p = VNUM_2bytes(arg, &r, 0); if (p != NULL) { VSB_cat(vsb, "Could not convert to bytes.\n"); VSB_printf(vsb, "%s\n", p); VSB_cat(vsb, " Try something like '80k' or '120M'\n"); return (-1); } if ((uintmax_t)((ssize_t)r) != r) { fmt_bytes(vsb, r); VSB_cat(vsb, " is too large for this architecture.\n"); return (-1); } if (max != NULL && r > rmax) { VSB_printf(vsb, "Must be no more than %s\n", max); VSB_cat(vsb, "\n"); return (-1); } if (min != NULL && r < rmin) { VSB_printf(vsb, "Must be at least %s\n", min); return (-1); } *dest = r; } else if (arg == JSON_FMT) { VSB_printf(vsb, "%zd", *dest); } else { fmt_bytes(vsb, *dest); } return (0); } /*--------------------------------------------------------------------*/ int v_matchproto_(tweak_t) tweak_bytes(struct vsb *vsb, const struct parspec *par, const char *arg) { volatile ssize_t *dest; dest = par->priv; return 
(tweak_generic_bytes(vsb, dest, arg, par->min, par->max)); } /*--------------------------------------------------------------------*/ int v_matchproto_(tweak_t) tweak_bytes_u(struct vsb *vsb, const struct parspec *par, const char *arg) { volatile unsigned *d1; volatile ssize_t dest; d1 = par->priv; dest = *d1; if (tweak_generic_bytes(vsb, &dest, arg, par->min, par->max)) return (-1); *d1 = dest; return (0); } /*-------------------------------------------------------------------- * vsl_buffer and vsl_reclen have dependencies. */ int v_matchproto_(tweak_t) tweak_vsl_buffer(struct vsb *vsb, const struct parspec *par, const char *arg) { volatile unsigned *d1; volatile ssize_t dest; d1 = par->priv; dest = *d1; if (tweak_generic_bytes(vsb, &dest, arg, par->min, par->max)) return (-1); *d1 = dest; MCF_ParamConf(MCF_MAXIMUM, "vsl_reclen", "%u", *d1 - 12); return (0); } int v_matchproto_(tweak_t) tweak_vsl_reclen(struct vsb *vsb, const struct parspec *par, const char *arg) { volatile unsigned *d1; volatile ssize_t dest; d1 = par->priv; dest = *d1; if (tweak_generic_bytes(vsb, &dest, arg, par->min, par->max)) return (-1); *d1 = dest; MCF_ParamConf(MCF_MINIMUM, "vsl_buffer", "%u", *d1 + 12); return (0); } /*--------------------------------------------------------------------*/ int v_matchproto_(tweak_t) tweak_string(struct vsb *vsb, const struct parspec *par, const char *arg) { char **p = TRUST_ME(par->priv); AN(p); if (arg == NULL) { VSB_quote(vsb, *p, -1, 0); } else if (arg == JSON_FMT) { VSB_putc(vsb, '"'); VSB_quote(vsb, *p, -1, VSB_QUOTE_JSON); VSB_putc(vsb, '"'); } else { REPLACE(*p, arg); } return (0); } /*--------------------------------------------------------------------*/ int v_matchproto_(tweak_t) tweak_poolparam(struct vsb *vsb, const struct parspec *par, const char *arg) { volatile struct poolparam *pp, px; struct parspec pt; char **av; int retval = 0; pp = par->priv; if (arg == JSON_FMT) { VSB_cat(vsb, "{\n"); VSB_indent(vsb, 8); VSB_printf(vsb, "\"min_pool\": %u,\n", pp->min_pool); VSB_printf(vsb, "\"max_pool\": %u,\n", pp->max_pool); VSB_printf(vsb, "\"max_age\": %g\n", pp->max_age); VSB_indent(vsb, -4); VSB_cat(vsb, "}"); } else if (arg == NULL) { VSB_printf(vsb, "%u,%u,%g", pp->min_pool, pp->max_pool, pp->max_age); } else { av = VAV_Parse(arg, NULL, ARGV_COMMA); do { if (av[0] != NULL) { VSB_printf(vsb, "Parse error: %s", av[0]); retval = -1; break; } if (av[1] == NULL || av[2] == NULL || av[3] == NULL) { VSB_cat(vsb, "Three fields required:" " min_pool, max_pool and max_age\n"); retval = -1; break; } px = *pp; retval = tweak_generic_uint(vsb, &px.min_pool, av[1], par->min, par->max, par->dyn_min_reason, par->dyn_max_reason); if (retval) break; retval = tweak_generic_uint(vsb, &px.max_pool, av[2], par->min, par->max, par->dyn_min_reason, par->dyn_max_reason); if (retval) break; pt.priv = &px.max_age; pt.min = "0"; pt.max = "1000000"; retval = tweak_generic_double(vsb, av[3], &pt, parse_decimal, "%.0f"); if (retval) break; if (px.min_pool > px.max_pool) { VSB_cat(vsb, "min_pool cannot be larger" " than max_pool\n"); retval = -1; break; } *pp = px; } while (0); VAV_Free(av); } return (retval); } /*-------------------------------------------------------------------- * Thread pool tweaks. * * The min/max values automatically update the opposites appropriate * limit, so they don't end up crossing. 
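 *
 * Concretely: setting thread_pool_min makes the new value the lower bound
 * for thread_pool_max and caps thread_pool_reserve at 95% (950/1000) of
 * it, while setting thread_pool_max makes the new value the upper bound
 * for thread_pool_min.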
*/ int v_matchproto_(tweak_t) tweak_thread_pool_min(struct vsb *vsb, const struct parspec *par, const char *arg) { if (tweak_uint(vsb, par, arg)) return (-1); MCF_ParamConf(MCF_MINIMUM, "thread_pool_max", "%u", mgt_param.wthread_min); MCF_ParamConf(MCF_MAXIMUM, "thread_pool_reserve", "%u", mgt_param.wthread_min * 950 / 1000); return (0); } int v_matchproto_(tweak_t) tweak_thread_pool_max(struct vsb *vsb, const struct parspec *par, const char *arg) { if (tweak_uint(vsb, par, arg)) return (-1); MCF_ParamConf(MCF_MAXIMUM, "thread_pool_min", "%u", mgt_param.wthread_max); return (0); } /*-------------------------------------------------------------------- * Tweak storage */ int v_matchproto_(tweak_t) tweak_storage(struct vsb *vsb, const struct parspec *par, const char *arg) { struct stevedore *stv; /* XXX: If we want to remove the MUST_RESTART flag from the * h2_rxbuf_storage parameter, we could have a mechanism here * that when the child is running calls out through CLI to change * the stevedore being used. */ if (arg == NULL || arg == JSON_FMT) return (tweak_string(vsb, par, arg)); if (!strcmp(arg, "Transient")) { /* Always allow setting to the special name * "Transient". There will always be a stevedore with this * name, but it may not have been configured at the time * this is called. */ } else { /* Only allow setting the value to a known configured * stevedore */ STV_Foreach(stv) { CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); if (!strcmp(stv->ident, arg)) break; } if (stv == NULL) { VSB_printf(vsb, "unknown storage backend '%s'", arg); return (-1); } } return (tweak_string(vsb, par, arg)); } /*-------------------------------------------------------------------- * Tweak alias */ int v_matchproto_(tweak_t) tweak_alias(struct vsb *vsb, const struct parspec *par, const char *arg) { const struct parspec *orig; struct parspec alias[1]; orig = TRUST_ME(par->priv); AN(orig); memcpy(alias, orig, sizeof *orig); alias->name = par->name; alias->priv = TRUST_ME(orig); return (alias->func(vsb, alias, arg)); } /*-------------------------------------------------------------------- * Tweak bits */ enum bit_do {BSET, BCLR, BTST}; static int bit(uint8_t *p, unsigned no, enum bit_do act) { uint8_t b; p += (no >> 3); b = (0x80 >> (no & 7)); if (act == BSET) *p |= b; else if (act == BCLR) *p &= ~b; return (*p & b); } static inline void bit_clear(uint8_t *p, unsigned l) { memset(p, 0, ((size_t)l + 7) >> 3); } /*-------------------------------------------------------------------- */ static int bit_tweak(struct vsb *vsb, uint8_t *p, unsigned l, const char *arg, const char * const *tags, const char *desc, char sign) { int i, n; unsigned j; char **av; const char *s; av = VAV_Parse(arg, &n, ARGV_COMMA); if (av[0] != NULL) { VSB_printf(vsb, "Cannot parse: %s\n", av[0]); VAV_Free(av); return (-1); } for (i = 1; av[i] != NULL; i++) { s = av[i]; if (sign == '+' && !strcmp(s, "none")) { bit_clear(p, l); continue; } if (sign == '-' && !strcmp(s, "all")) { bit_clear(p, l); continue; } if (*s != '-' && *s != '+') { VSB_printf(vsb, "Missing '+' or '-' (%s)\n", s); VAV_Free(av); return (-1); } for (j = 0; j < l; j++) { if (tags[j] != NULL && !strcasecmp(s + 1, tags[j])) break; } if (tags[j] == NULL) { VSB_printf(vsb, "Unknown %s (%s)\n", desc, s); VAV_Free(av); return (-1); } assert(j < l); if (s[0] == sign) (void)bit(p, j, BSET); else (void)bit(p, j, BCLR); } VAV_Free(av); return (0); } /*-------------------------------------------------------------------- */ static int tweak_generic_bits(struct vsb *vsb, const struct parspec 
*par, const char *arg, uint8_t *p, unsigned l, const char * const *tags, const char *desc, char sign) { unsigned j; if (arg != NULL && !strcmp(arg, "default")) { /* XXX: deprecated in favor of param.reset */ return (tweak_generic_bits(vsb, par, par->def, p, l, tags, desc, sign)); } if (arg != NULL && arg != JSON_FMT) return (bit_tweak(vsb, p, l, arg, tags, desc, sign)); if (arg == JSON_FMT) VSB_putc(vsb, '"'); VSB_cat(vsb, sign == '+' ? "none" : "all"); for (j = 0; j < l; j++) { if (bit(p, j, BTST)) VSB_printf(vsb, ",%c%s", sign, tags[j]); } if (arg == JSON_FMT) VSB_putc(vsb, '"'); return (0); } /*-------------------------------------------------------------------- * The vsl_mask parameter */ static const char * const VSL_tags[256] = { # define SLTM(foo,flags,sdesc,ldesc) [SLT_##foo] = #foo, # include "tbl/vsl_tags.h" }; int v_matchproto_(tweak_t) tweak_vsl_mask(struct vsb *vsb, const struct parspec *par, const char *arg) { return (tweak_generic_bits(vsb, par, arg, mgt_param.vsl_mask, SLT__Reserved, VSL_tags, "VSL tag", '-')); } /*-------------------------------------------------------------------- * The debug parameter */ static const char * const debug_tags[] = { # define DEBUG_BIT(U, l, d) [DBG_##U] = #l, # include "tbl/debug_bits.h" NULL }; int v_matchproto_(tweak_t) tweak_debug(struct vsb *vsb, const struct parspec *par, const char *arg) { return (tweak_generic_bits(vsb, par, arg, mgt_param.debug_bits, DBG_Reserved, debug_tags, "debug bit", '+')); } /*-------------------------------------------------------------------- * The experimental parameter */ static const char * const experimental_tags[] = { # define EXPERIMENTAL_BIT(U, l, d) [EXPERIMENT_##U] = #l, # include "tbl/experimental_bits.h" NULL }; int v_matchproto_(tweak_t) tweak_experimental(struct vsb *vsb, const struct parspec *par, const char *arg) { return (tweak_generic_bits(vsb, par, arg, mgt_param.experimental_bits, EXPERIMENT_Reserved, experimental_tags, "experimental bit", '+')); } /*-------------------------------------------------------------------- * The feature parameter */ static const char * const feature_tags[] = { # define FEATURE_BIT(U, l, d) [FEATURE_##U] = #l, # include "tbl/feature_bits.h" NULL }; int v_matchproto_(tweak_t) tweak_feature(struct vsb *vsb, const struct parspec *par, const char *arg) { return (tweak_generic_bits(vsb, par, arg, mgt_param.feature_bits, FEATURE_Reserved, feature_tags, "feature bit", '+')); } /*-------------------------------------------------------------------- * The vcc_feature parameter */ static const char * const vcc_feature_tags[] = { # define VCC_FEATURE_BIT(U, l, d) [VCC_FEATURE_##U] = #l, # include "tbl/vcc_feature_bits.h" NULL }; int v_matchproto_(tweak_t) tweak_vcc_feature(struct vsb *vsb, const struct parspec *par, const char *arg) { const struct parspec *orig; char buf[32]; int val; if (arg != NULL && arg != JSON_FMT && strcmp(par->name, "vcc_feature")) { orig = TRUST_ME(par->priv); val = parse_boolean(vsb, arg); if (val < 0) return (-1); bprintf(buf, "%c%s", val ? '+' : '-', par->name + strlen("vcc_")); return (tweak_vcc_feature(vsb, orig, buf)); } return (tweak_generic_bits(vsb, par, arg, mgt_param.vcc_feature_bits, VCC_FEATURE_Reserved, vcc_feature_tags, "vcc_feature bit", '+')); } varnish-7.5.0/bin/varnishd/mgt/mgt_shmem.c000066400000000000000000000100101457605730600205030ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. 
* * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include #include #include #include #include #include #include #include "mgt/mgt.h" #include "vsm_priv.h" #include "common/heritage.h" #include "common/vsmw.h" static struct vsmw *mgt_vsmw; /*-------------------------------------------------------------------- */ void mgt_SHM_static_alloc(const void *ptr, ssize_t size, const char *category, const char *ident) { void *p; p = VSMW_Allocf(mgt_vsmw, NULL, category, size, "%s", ident); AN(p); memcpy(p, ptr, size); } /*-------------------------------------------------------------------- * Exit handler that clears the owning pid from the SHMLOG */ static void mgt_shm_atexit(void) { /* Do not let VCC kill our VSM */ if (getpid() != heritage.mgt_pid) return; VJ_master(JAIL_MASTER_FILE); VSMW_Destroy(&mgt_vsmw); if (!MGT_DO_DEBUG(DBG_VTC_MODE)) { VJ_master(JAIL_MASTER_SYSTEM); AZ(system("rm -rf " VSM_MGT_DIRNAME)); AZ(system("rm -rf " VSM_CHILD_DIRNAME)); } VJ_master(JAIL_MASTER_LOW); } /*-------------------------------------------------------------------- * Initialize VSM subsystem */ void mgt_SHM_Init(void) { int fd; VJ_master(JAIL_MASTER_SYSTEM); AZ(system("rm -rf " VSM_MGT_DIRNAME)); VJ_master(JAIL_MASTER_FILE); AZ(mkdir(VSM_MGT_DIRNAME, 0755)); fd = open(VSM_MGT_DIRNAME, O_RDONLY); VJ_fix_fd(fd, JAIL_FIXFD_VSMMGT); VJ_master(JAIL_MASTER_LOW); mgt_vsmw = VSMW_New(fd, 0640, "_.index"); AN(mgt_vsmw); heritage.proc_vsmw = mgt_vsmw; /* Setup atexit handler */ AZ(atexit(mgt_shm_atexit)); } void mgt_SHM_ChildNew(void) { VJ_master(JAIL_MASTER_SYSTEM); AZ(system("rm -rf " VSM_CHILD_DIRNAME)); VJ_master(JAIL_MASTER_FILE); AZ(mkdir(VSM_CHILD_DIRNAME, 0750)); heritage.vsm_fd = open(VSM_CHILD_DIRNAME, O_RDONLY); assert(heritage.vsm_fd >= 0); VJ_fix_fd(heritage.vsm_fd, JAIL_FIXFD_VSMWRK); VJ_master(JAIL_MASTER_LOW); MCH_Fd_Inherit(heritage.vsm_fd, "VSMW"); heritage.param = VSMW_Allocf(mgt_vsmw, NULL, VSM_CLASS_PARAM, sizeof *heritage.param, ""); AN(heritage.param); *heritage.param = mgt_param; heritage.panic_str_len = 64 * 1024; heritage.panic_str = VSMW_Allocf(mgt_vsmw, NULL, "Panic", heritage.panic_str_len, ""); AN(heritage.panic_str); } void mgt_SHM_ChildDestroy(void) { closefd(&heritage.vsm_fd); if (!MGT_DO_DEBUG(DBG_VTC_MODE)) { 
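	/*
	 * Outside VTC mode the child's VSM directory is removed here;
	 * the rm -rf needs JAIL_MASTER_SYSTEM privileges, which are
	 * dropped back to JAIL_MASTER_LOW right after.
	 */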
VJ_master(JAIL_MASTER_SYSTEM); AZ(system("rm -rf " VSM_CHILD_DIRNAME)); VJ_master(JAIL_MASTER_LOW); } VSMW_Free(mgt_vsmw, (void**)&heritage.panic_str); VSMW_Free(mgt_vsmw, (void**)&heritage.param); } varnish-7.5.0/bin/varnishd/mgt/mgt_symtab.c000066400000000000000000000143261457605730600207070ustar00rootroot00000000000000/*- * Copyright (c) 2019 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * VCL/VMOD symbol table */ #include "config.h" #include #include #include #include #include #include #include "mgt/mgt.h" #include "mgt/mgt_vcl.h" #include "vcli_serve.h" #include "vjsn.h" /*--------------------------------------------------------------------*/ static const char * mgt_vcl_symtab_val(const struct vjsn_val *vv, const char *val) { const struct vjsn_val *jv; jv = vjsn_child(vv, val); AN(jv); assert(vjsn_is_string(jv)); AN(jv->value); return (jv->value); } static void mgt_vcl_import_vcl(struct vclprog *vp1, const struct vjsn_val *vv) { struct vclprog *vp2; CHECK_OBJ_NOTNULL(vp1, VCLPROG_MAGIC); AN(vv); vp2 = mcf_vcl_byname(mgt_vcl_symtab_val(vv, "name")); CHECK_OBJ_NOTNULL(vp2, VCLPROG_MAGIC); mgt_vcl_dep_add(vp1, vp2)->vj = vv; } static int mgt_vcl_cache_vmod(const char *nm, const char *fm, const char *to) { int fi, fo; int ret = 0; ssize_t sz; char buf[BUFSIZ]; fo = open(to, O_WRONLY | O_CREAT | O_EXCL, 0744); if (fo < 0 && errno == EEXIST) return (0); if (fo < 0) { fprintf(stderr, "While creating copy of vmod %s:\n\t%s: %s\n", nm, to, VAS_errtxt(errno)); return (1); } fi = open(fm, O_RDONLY); if (fi < 0) { fprintf(stderr, "Opening vmod %s from %s: %s\n", nm, fm, VAS_errtxt(errno)); AZ(unlink(to)); closefd(&fo); return (1); } while (1) { sz = read(fi, buf, sizeof buf); if (sz == 0) break; if (sz < 0 || sz != write(fo, buf, sz)) { fprintf(stderr, "Copying vmod %s: %s\n", nm, VAS_errtxt(errno)); AZ(unlink(to)); ret = 1; break; } } closefd(&fi); AZ(fchmod(fo, 0444)); closefd(&fo); return (ret); } static void mgt_vcl_import_vmod(struct vclprog *vp, const struct vjsn_val *vv) { struct vmodfile *vf; struct vmoddep *vd; const char *v_name; const char *v_file; const char *v_dst; const struct vjsn_val *jv; CHECK_OBJ_NOTNULL(vp, VCLPROG_MAGIC); AN(vv); jv = vjsn_child(vv, "vext"); if (vjsn_is_true(jv)) 
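		/* A vmod supplied by a VEXT is not copied or tracked here */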
return; AN(jv); v_name = mgt_vcl_symtab_val(vv, "name"); v_file = mgt_vcl_symtab_val(vv, "file"); v_dst = mgt_vcl_symtab_val(vv, "dst"); VTAILQ_FOREACH(vf, &vmodhead, list) if (!strcmp(vf->fname, v_dst)) break; if (vf == NULL) { ALLOC_OBJ(vf, VMODFILE_MAGIC); AN(vf); REPLACE(vf->fname, v_dst); VTAILQ_INIT(&vf->vcls); AZ(mgt_vcl_cache_vmod(v_name, v_file, v_dst)); VTAILQ_INSERT_TAIL(&vmodhead, vf, list); } ALLOC_OBJ(vd, VMODDEP_MAGIC); AN(vd); vd->to = vf; VTAILQ_INSERT_TAIL(&vp->vmods, vd, lfrom); VTAILQ_INSERT_TAIL(&vf->vcls, vd, lto); } void mgt_vcl_symtab(struct vclprog *vp, const char *input) { struct vjsn *vj; struct vjsn_val *v1, *v2; const char *typ, *err; CHECK_OBJ_NOTNULL(vp, VCLPROG_MAGIC); AN(input); vj = vjsn_parse(input, &err); if (err != NULL) { fprintf(stderr, "FATAL: Symtab parse error: %s\n%s\n", err, input); } AZ(err); AN(vj); vp->symtab = vj; assert(vjsn_is_array(vj->value)); VTAILQ_FOREACH(v1, &vj->value->children, list) { assert(vjsn_is_object(v1)); v2 = vjsn_child(v1, "dir"); if (v2 == NULL) continue; assert(vjsn_is_string(v2)); if (strcmp(v2->value, "import")) continue; typ = mgt_vcl_symtab_val(v1, "type"); if (!strcmp(typ, "$VMOD")) mgt_vcl_import_vmod(vp, v1); else if (!strcmp(typ, "$VCL")) mgt_vcl_import_vcl(vp, v1); else WRONG("Bad symtab import entry"); } } void mgt_vcl_symtab_clean(struct vclprog *vp) { if (vp->symtab) vjsn_delete(&vp->symtab); } /*--------------------------------------------------------------------*/ static void mcf_vcl_vjsn_dump(struct cli *cli, const struct vjsn_val *vj, int indent) { struct vjsn_val *vj1; AN(cli); AN(vj); VCLI_Out(cli, "%*s", indent, ""); if (vj->name != NULL) VCLI_Out(cli, "[\"%s\"]: ", vj->name); VCLI_Out(cli, "{%s}", vj->type); if (vj->value != NULL) VCLI_Out(cli, " <%s>", vj->value); VCLI_Out(cli, "\n"); VTAILQ_FOREACH(vj1, &vj->children, list) mcf_vcl_vjsn_dump(cli, vj1, indent + 2); } void v_matchproto_(cli_func_t) mcf_vcl_symtab(struct cli *cli, const char * const *av, void *priv) { struct vclprog *vp; struct vcldep *vd; (void)av; (void)priv; VTAILQ_FOREACH(vp, &vclhead, list) { if (mcf_is_label(vp)) VCLI_Out(cli, "Label: %s\n", vp->name); else VCLI_Out(cli, "Vcl: %s\n", vp->name); if (!VTAILQ_EMPTY(&vp->dfrom)) { VCLI_Out(cli, " imports from:\n"); VTAILQ_FOREACH(vd, &vp->dfrom, lfrom) { VCLI_Out(cli, " %s\n", vd->to->name); if (vd->vj) mcf_vcl_vjsn_dump(cli, vd->vj, 6); } } if (!VTAILQ_EMPTY(&vp->dto)) { VCLI_Out(cli, " exports to:\n"); VTAILQ_FOREACH(vd, &vp->dto, lto) { VCLI_Out(cli, " %s\n", vd->from->name); if (vd->vj) mcf_vcl_vjsn_dump(cli, vd->vj, 6); } } if (vp->symtab != NULL) { VCLI_Out(cli, " symtab:\n"); mcf_vcl_vjsn_dump(cli, vp->symtab->value, 4); } } } varnish-7.5.0/bin/varnishd/mgt/mgt_util.c000066400000000000000000000130571457605730600203650ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * The management process, various utility functions */ #include "config.h" #include #include #include #include #include #include #include #include "mgt/mgt.h" #include "common/heritage.h" #include "vav.h" #include "vct.h" int complain_to_stderr; /*--------------------------------------------------------------------*/ char * mgt_HostName(void) { char *p; char buf[1024]; AZ(gethostname(buf, sizeof buf)); p = strdup(buf); AN(p); return (p); } /*--------------------------------------------------------------------*/ void mgt_ProcTitle(const char *comp) { #ifdef HAVE_SETPROCTITLE if (strcmp(heritage.identity, "varnishd")) setproctitle("Varnish-%s -i %s", comp, heritage.identity); else setproctitle("Varnish-%s", comp); #else (void)comp; #endif } /*--------------------------------------------------------------------*/ static void mgt_sltm(const char *tag, const char *sdesc, const char *ldesc) { int i; assert(sdesc != NULL && ldesc != NULL); assert(*sdesc != '\0' || *ldesc != '\0'); printf("\n%s\n", tag); i = strlen(tag); printf("%*.*s\n\n", i, i, "------------------------------------"); if (*ldesc != '\0') printf("%s\n", ldesc); else if (*sdesc != '\0') printf("%s\n", sdesc); } /*lint -e{506} constant value boolean */ void mgt_DumpRstVsl(void) { printf( "\n.. The following is autogenerated output from " "varnishd -x vsl\n\n"); #define SLTM(tag, flags, sdesc, ldesc) mgt_sltm(#tag, sdesc, ldesc); #include "tbl/vsl_tags.h" } /*--------------------------------------------------------------------*/ struct vsb * mgt_BuildVident(void) { struct utsname uts; struct vsb *vsb; vsb = VSB_new_auto(); AN(vsb); if (!uname(&uts)) { VSB_printf(vsb, ",%s", uts.sysname); VSB_printf(vsb, ",%s", uts.release); VSB_printf(vsb, ",%s", uts.machine); } return (vsb); } /*-------------------------------------------------------------------- * 'Ello, I wish to register a complaint... */ #ifndef LOG_AUTHPRIV # define LOG_AUTHPRIV 0 #endif const char C_ERR[] = "Error:"; const char C_INFO[] = "Info:"; const char C_DEBUG[] = "Debug:"; const char C_SECURITY[] = "Security:"; const char C_CLI[] = "Cli:"; void MGT_Complain(const char *loud, const char *fmt, ...) 
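/*
 * C_CLI complaints are dropped unless syslog_cli_traffic is set; the
 * rest go to stderr (C_DEBUG only with complain_to_stderr) and, outside
 * VTC debug mode, to syslog at a severity matching the loudness class.
 */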
{ va_list ap; struct vsb *vsb; int sf; if (loud == C_CLI && !mgt_param.syslog_cli_traffic) return; vsb = VSB_new_auto(); AN(vsb); va_start(ap, fmt); VSB_vprintf(vsb, fmt, ap); va_end(ap); AZ(VSB_finish(vsb)); if (loud == C_ERR) sf = LOG_ERR; else if (loud == C_INFO) sf = LOG_INFO; else if (loud == C_DEBUG) sf = LOG_DEBUG; else if (loud == C_SECURITY) sf = LOG_WARNING | LOG_AUTHPRIV; else if (loud == C_CLI) sf = LOG_INFO; else WRONG("Wrong complaint loudness"); if (loud != C_CLI && (complain_to_stderr || loud != C_DEBUG)) fprintf(stderr, "%s %s\n", loud, VSB_data(vsb)); if (!MGT_DO_DEBUG(DBG_VTC_MODE)) syslog(sf, "%s", VSB_data(vsb)); VSB_destroy(&vsb); } /*--------------------------------------------------------------------*/ const void * MGT_Pick(const struct choice *cp, const char *which, const char *kind) { for (; cp->name != NULL; cp++) { if (!strcmp(cp->name, which)) return (cp->ptr); } ARGV_ERR("Unknown %s method \"%s\"\n", kind, which); } /*--------------------------------------------------------------------*/ char ** MGT_NamedArg(const char *spec, const char **name, const char *what) { const char *p, *q; char *r; char **av; int l; ASSERT_MGT(); p = strchr(spec, '='); q = strchr(spec, ','); if (p == NULL || (q != NULL && q < p)) { av = VAV_Parse(spec, NULL, ARGV_COMMA); p = NULL; } else if (VCT_invalid_name(spec, p) != NULL) { ARGV_ERR("invalid %s name \"%.*s\"=[...]\n", what, (int)(p - spec), spec); } else if (p[1] == '\0') { ARGV_ERR("Empty named %s argument \"%s\"\n", what, spec); } else { av = VAV_Parse(p + 1, NULL, ARGV_COMMA); } AN(av); if (av[0] != NULL) ARGV_ERR("%s\n", av[0]); if (p == NULL) { *name = NULL; } else { l = p - spec; r = malloc(1L + l); AN(r); memcpy(r, spec, l); r[l] = '\0'; *name = r; } return (av); } varnish-7.5.0/bin/varnishd/mgt/mgt_vcc.c000066400000000000000000000250011457605730600201530ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * VCL compiler stuff */ #include "config.h" #include #include #include #include #include #include #include #include "mgt/mgt.h" #include "mgt/mgt_vcl.h" #include "common/heritage.h" #include "storage/storage.h" #include "libvcc.h" #include "vcli_serve.h" #include "vfil.h" #include "vsub.h" #include "vtim.h" struct vcc_priv { unsigned magic; #define VCC_PRIV_MAGIC 0x70080cb8 const char *vclsrc; const char *vclsrcfile; struct vsb *dir; struct vsb *csrcfile; struct vsb *libfile; struct vsb *symfile; }; enum vcc_fini_e { VCC_SUCCESS, VCC_FAILED, }; char *mgt_cc_cmd; char *mgt_cc_cmd_def; char *mgt_cc_warn; const char *mgt_vcl_path; const char *mgt_vmod_path; #define VGC_SRC "vgc.c" #define VGC_LIB "vgc.so" #define VGC_SYM "vgc.sym" /*--------------------------------------------------------------------*/ void mgt_DumpBuiltin(void) { printf("%s\n", builtin_vcl); } /*-------------------------------------------------------------------- * Invoke system VCC compiler in a sub-process */ static void vcc_vext_iter_func(const char *filename, void *priv) { struct vsb *sb; /* VCC runs in the per-VCL subdir */ sb = VSB_new_auto(); AN(sb); VSB_cat(sb, "../"); VSB_cat(sb, filename); AZ(VSB_finish(sb)); VCC_VEXT(priv, VSB_data(sb)); VSB_destroy(&sb); } static void v_noreturn_ v_matchproto_(vsub_func_f) run_vcc(void *priv) { struct vsb *sb = NULL; struct vclprog *vpg; struct vcc_priv *vp; struct vcc *vcc; struct stevedore *stv; int i; VJ_subproc(JAIL_SUBPROC_VCC); CAST_OBJ_NOTNULL(vp, priv, VCC_PRIV_MAGIC); AZ(chdir(VSB_data(vp->dir))); vcc = VCC_New(); AN(vcc); VCC_Builtin_VCL(vcc, builtin_vcl); VCC_VCL_path(vcc, mgt_vcl_path); VCC_VMOD_path(vcc, mgt_vmod_path); #define VCC_FEATURE_BIT(U, l, d) \ VCC_Opt_ ## l(vcc, MGT_VCC_FEATURE(VCC_FEATURE_ ## U)); #include "tbl/vcc_feature_bits.h" vext_iter(vcc_vext_iter_func, vcc); STV_Foreach(stv) VCC_Predef(vcc, "VCL_STEVEDORE", stv->ident); VTAILQ_FOREACH(vpg, &vclhead, list) if (mcf_is_label(vpg)) VCC_Predef(vcc, "VCL_VCL", vpg->name); i = VCC_Compile(vcc, &sb, vp->vclsrc, vp->vclsrcfile, VGC_SRC, VGC_SYM); if (VSB_len(sb)) printf("%s", VSB_data(sb)); VSB_destroy(&sb); exit(i == 0 ? 
0 : 2); } /*-------------------------------------------------------------------- * Expand the cc_command argument */ static const char * cc_expand(struct vsb *sb, const char *cc_cmd, char exp) { char buf[PATH_MAX]; const char *p; int pct; AN(sb); AN(cc_cmd); for (p = cc_cmd, pct = 0; *p; ++p) { if (pct) { switch (*p) { case 's': VSB_cat(sb, VGC_SRC); break; case 'o': VSB_cat(sb, VGC_LIB); break; case 'w': VSB_cat(sb, mgt_cc_warn); break; case 'd': VSB_cat(sb, mgt_cc_cmd_def); break; case 'D': if (exp == pct) return ("recursive expansion"); AZ(cc_expand(sb, mgt_cc_cmd_def, pct)); break; case 'n': AN(getcwd(buf, sizeof buf)); VSB_cat(sb, buf); break; case '%': VSB_putc(sb, '%'); break; default: VSB_putc(sb, '%'); VSB_putc(sb, *p); break; } pct = 0; } else if (*p == '%') { pct = 1; } else { VSB_putc(sb, *p); } } if (pct) VSB_putc(sb, '%'); return (NULL); } /*-------------------------------------------------------------------- * Invoke system C compiler in a sub-process */ static void v_matchproto_(vsub_func_f) run_cc(void *priv) { struct vcc_priv *vp; struct vsb *sb; const char *err; VJ_subproc(JAIL_SUBPROC_CC); CAST_OBJ_NOTNULL(vp, priv, VCC_PRIV_MAGIC); sb = VSB_new_auto(); AN(sb); err = cc_expand(sb, mgt_cc_cmd, '\0'); if (err != NULL) { VSB_destroy(&sb); fprintf(stderr, "cc_command: %s\n", err); exit(1); } AZ(VSB_finish(sb)); AZ(chdir(VSB_data(vp->dir))); (void)umask(027); (void)execl("/bin/sh", "/bin/sh", "-c", VSB_data(sb), (char*)0); VSB_destroy(&sb); // For flexelint } /*-------------------------------------------------------------------- * Attempt to open compiled VCL in a sub-process */ static void v_noreturn_ v_matchproto_(vsub_func_f) run_dlopen(void *priv) { struct vcc_priv *vp; VJ_subproc(JAIL_SUBPROC_VCLLOAD); CAST_OBJ_NOTNULL(vp, priv, VCC_PRIV_MAGIC); if (VCL_TestLoad(VSB_data(vp->libfile))) exit(1); exit(0); } /*-------------------------------------------------------------------- * Touch a filename and make it available to privsep-privs */ static int mgt_vcc_touchfile(const char *fn, struct vsb *sb) { int i; i = open(fn, O_WRONLY|O_CREAT|O_TRUNC, 0640); if (i < 0) { VSB_printf(sb, "Failed to create %s: %s\n", fn, VAS_errtxt(errno)); return (2); } closefd(&i); return (0); } /*-------------------------------------------------------------------- * Compile a VCL program, return shared object, errors in sb. 
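 * The work is split over three jailed sub-processes: run_vcc() turns the
 * VCL into C source, run_cc() builds the shared object from it, and
 * run_dlopen() checks that the result can actually be loaded.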
*/ static unsigned mgt_vcc_compile(struct vcc_priv *vp, struct vsb *sb, int C_flag) { char *csrc; unsigned subs; AN(sb); VSB_clear(sb); if (mgt_vcc_touchfile(VSB_data(vp->csrcfile), sb)) return (2); if (mgt_vcc_touchfile(VSB_data(vp->libfile), sb)) return (2); VJ_master(JAIL_MASTER_SYSTEM); subs = VSUB_run(sb, run_vcc, vp, "VCC-compiler", -1); VJ_master(JAIL_MASTER_LOW); if (subs) return (subs); if (C_flag) { csrc = VFIL_readfile(NULL, VSB_data(vp->csrcfile), NULL); AN(csrc); VSB_cat(sb, csrc); free(csrc); VSB_cat(sb, "/* EXTERNAL SYMBOL TABLE\n"); csrc = VFIL_readfile(NULL, VSB_data(vp->symfile), NULL); AN(csrc); VSB_cat(sb, csrc); VSB_cat(sb, "*/\n"); free(csrc); } VJ_master(JAIL_MASTER_SYSTEM); subs = VSUB_run(sb, run_cc, vp, "C-compiler", 10); VJ_master(JAIL_MASTER_LOW); if (subs) return (subs); VJ_master(JAIL_MASTER_SYSTEM); subs = VSUB_run(sb, run_dlopen, vp, "dlopen", 10); VJ_master(JAIL_MASTER_LOW); return (subs); } /*--------------------------------------------------------------------*/ static void mgt_vcc_init_vp(struct vcc_priv *vp) { INIT_OBJ(vp, VCC_PRIV_MAGIC); vp->csrcfile = VSB_new_auto(); AN(vp->csrcfile); vp->libfile = VSB_new_auto(); AN(vp->libfile); vp->symfile = VSB_new_auto(); AN(vp->symfile); vp->dir = VSB_new_auto(); AN(vp->dir); } static void mgt_vcc_fini_vp(struct vcc_priv *vp, enum vcc_fini_e vcc_status) { int ignore_enoent = (vcc_status == VCC_FAILED); if (!MGT_DO_DEBUG(DBG_VCL_KEEP)) { VJ_unlink(VSB_data(vp->csrcfile), ignore_enoent); VJ_unlink(VSB_data(vp->symfile), ignore_enoent); if (vcc_status != VCC_SUCCESS) { VJ_unlink(VSB_data(vp->libfile), ignore_enoent); VJ_rmdir(VSB_data(vp->dir)); } } VSB_destroy(&vp->csrcfile); VSB_destroy(&vp->libfile); VSB_destroy(&vp->symfile); VSB_destroy(&vp->dir); } char * mgt_VccCompile(struct cli *cli, struct vclprog *vcl, const char *vclname, const char *vclsrc, const char *vclsrcfile, int C_flag) { struct vcc_priv vp[1]; struct vsb *sb; unsigned status; char *p; AN(cli); sb = VSB_new_auto(); AN(sb); mgt_vcc_init_vp(vp); vp->vclsrc = vclsrc; vp->vclsrcfile = vclsrcfile; /* * The subdirectory must have a unique name to 100% certain evade * the refcounting semantics of dlopen(3). * * Bad implementations of dlopen(3) think the shlib you are opening * is the same, if the filename is the same as one already opened. * * Sensible implementations do a stat(2) and requires st_ino and * st_dev to also match. * * A correct implementation would run on filesystems which tickle * st_gen, and also insist that be the identical, before declaring * a match. * * Since no correct implementations are known to exist, we are subject * to really interesting races if you do something like: * * (running on 'boot' vcl) * vcl.load foo /foo.vcl * vcl.use foo * few/slow requests * vcl.use boot * vcl.discard foo * vcl.load foo /foo.vcl // dlopen(3) says "same-same" * vcl.use foo * * Because discard of the first 'foo' lingers on non-zero reference * count, and when it finally runs, it trashes the second 'foo' because * dlopen(3) decided they were really the same thing. * * The Best way to reproduce this is to have regexps in the VCL. 
*/ VSB_printf(vp->dir, "vcl_%s.%.6f", vclname, VTIM_real()); AZ(VSB_finish(vp->dir)); VSB_printf(vp->csrcfile, "%s/%s", VSB_data(vp->dir), VGC_SRC); AZ(VSB_finish(vp->csrcfile)); VSB_printf(vp->libfile, "%s/%s", VSB_data(vp->dir), VGC_LIB); AZ(VSB_finish(vp->libfile)); VSB_printf(vp->symfile, "%s/%s", VSB_data(vp->dir), VGC_SYM); AZ(VSB_finish(vp->symfile)); if (VJ_make_subdir(VSB_data(vp->dir), "VCL", cli->sb)) { mgt_vcc_fini_vp(vp, VCC_FAILED); VSB_destroy(&sb); VCLI_Out(cli, "VCL compilation failed"); VCLI_SetResult(cli, CLIS_PARAM); return (NULL); } status = mgt_vcc_compile(vp, sb, C_flag); AZ(VSB_finish(sb)); if (VSB_len(sb) > 0) VCLI_Out(cli, "%s", VSB_data(sb)); VSB_destroy(&sb); if (status || C_flag) { mgt_vcc_fini_vp(vp, VCC_FAILED); if (status) { VCLI_Out(cli, "VCL compilation failed"); VCLI_SetResult(cli, CLIS_PARAM); } return (NULL); } p = VFIL_readfile(NULL, VSB_data(vp->symfile), NULL); AN(p); mgt_vcl_symtab(vcl, p); REPLACE(p, VSB_data(vp->libfile)); mgt_vcc_fini_vp(vp, VCC_SUCCESS); return (p); } varnish-7.5.0/bin/varnishd/mgt/mgt_vcl.c000066400000000000000000000626361457605730600202030ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * VCL management stuff */ #include "config.h" #include #include #include #include #include #include "mgt/mgt.h" #include "mgt/mgt_vcl.h" #include "common/heritage.h" #include "vcli_serve.h" #include "vct.h" #include "vev.h" #include "vte.h" #include "vtim.h" struct vclstate { const char *name; }; #define VCL_STATE(sym, str) \ static const struct vclstate VCL_STATE_ ## sym[1] = {{ str }}; #include "tbl/vcl_states.h" static const struct vclstate VCL_STATE_LABEL[1] = {{ "label" }}; static unsigned vcl_count; struct vclproghead vclhead = VTAILQ_HEAD_INITIALIZER(vclhead); static struct vclproghead discardhead = VTAILQ_HEAD_INITIALIZER(discardhead); struct vmodfilehead vmodhead = VTAILQ_HEAD_INITIALIZER(vmodhead); static struct vclprog *mgt_vcl_active; static struct vev *e_poker; static int mgt_vcl_setstate(struct cli *, struct vclprog *, const struct vclstate *); static int mgt_vcl_settemp(struct cli *, struct vclprog *, unsigned); static int mgt_vcl_askchild(struct cli *, struct vclprog *, unsigned); static void mgt_vcl_set_cooldown(struct vclprog *, vtim_mono); /*--------------------------------------------------------------------*/ static const struct vclstate * mcf_vcl_parse_state(struct cli *cli, const char *s) { if (s != NULL) { #define VCL_STATE(sym, str) \ if (!strcmp(s, str)) \ return (VCL_STATE_ ## sym); #include "tbl/vcl_states.h" } VCLI_Out(cli, "State must be one of auto, cold or warm."); VCLI_SetResult(cli, CLIS_PARAM); return (NULL); } struct vclprog * mcf_vcl_byname(const char *name) { struct vclprog *vp; VTAILQ_FOREACH(vp, &vclhead, list) if (!strcmp(name, vp->name)) return (vp); return (NULL); } static int mcf_invalid_vclname(struct cli *cli, const char *name) { const char *bad; AN(name); bad = VCT_invalid_name(name, NULL); if (bad != NULL) { VCLI_SetResult(cli, CLIS_PARAM); VCLI_Out(cli, "Illegal character in VCL name "); if (*bad > 0x20 && *bad < 0x7f) VCLI_Out(cli, "('%c')", *bad); else VCLI_Out(cli, "(0x%02x)", *bad & 0xff); return (-1); } return (0); } static struct vclprog * mcf_find_vcl(struct cli *cli, const char *name) { struct vclprog *vp; if (mcf_invalid_vclname(cli, name)) return (NULL); vp = mcf_vcl_byname(name); if (vp == NULL) { VCLI_SetResult(cli, CLIS_PARAM); VCLI_Out(cli, "No VCL named %s known\n", name); } return (vp); } static int mcf_find_no_vcl(struct cli *cli, const char *name) { if (mcf_invalid_vclname(cli, name)) return (0); if (mcf_vcl_byname(name) != NULL) { VCLI_SetResult(cli, CLIS_PARAM); VCLI_Out(cli, "Already a VCL named %s", name); return (0); } return (1); } int mcf_is_label(const struct vclprog *vp) { return (vp->state == VCL_STATE_LABEL); } /*--------------------------------------------------------------------*/ struct vcldep * mgt_vcl_dep_add(struct vclprog *vp_from, struct vclprog *vp_to) { struct vcldep *vd; CHECK_OBJ_NOTNULL(vp_from, VCLPROG_MAGIC); CHECK_OBJ_NOTNULL(vp_to, VCLPROG_MAGIC); assert(vp_to->state != VCL_STATE_COLD); ALLOC_OBJ(vd, VCLDEP_MAGIC); AN(vd); mgt_vcl_set_cooldown(vp_from, -1); mgt_vcl_set_cooldown(vp_to, -1); vd->from = vp_from; VTAILQ_INSERT_TAIL(&vp_from->dfrom, vd, lfrom); vd->to = vp_to; VTAILQ_INSERT_TAIL(&vp_to->dto, vd, lto); vp_to->nto++; return (vd); } static void mgt_vcl_dep_del(struct vcldep *vd) { CHECK_OBJ_NOTNULL(vd, VCLDEP_MAGIC); VTAILQ_REMOVE(&vd->from->dfrom, vd, lfrom); VTAILQ_REMOVE(&vd->to->dto, vd, lto); vd->to->nto--; if (vd->to->nto == 0) mgt_vcl_set_cooldown(vd->to, VTIM_mono()); FREE_OBJ(vd); } /*--------------------------------------------------------------------*/ static struct vclprog 
* mgt_vcl_add(const char *name, const struct vclstate *state) { struct vclprog *vp; assert(state == VCL_STATE_WARM || state == VCL_STATE_COLD || state == VCL_STATE_AUTO || state == VCL_STATE_LABEL); ALLOC_OBJ(vp, VCLPROG_MAGIC); XXXAN(vp); REPLACE(vp->name, name); VTAILQ_INIT(&vp->dfrom); VTAILQ_INIT(&vp->dto); VTAILQ_INIT(&vp->vmods); vp->state = state; if (vp->state != VCL_STATE_COLD) vp->warm = 1; VTAILQ_INSERT_TAIL(&vclhead, vp, list); if (vp->state != VCL_STATE_LABEL) vcl_count++; return (vp); } static void mgt_vcl_del(struct vclprog *vp) { char *p; struct vmoddep *vd; struct vmodfile *vf; struct vcldep *dep; CHECK_OBJ_NOTNULL(vp, VCLPROG_MAGIC); assert(VTAILQ_EMPTY(&vp->dto)); mgt_vcl_symtab_clean(vp); while ((dep = VTAILQ_FIRST(&vp->dfrom)) != NULL) { assert(dep->from == vp); mgt_vcl_dep_del(dep); } VTAILQ_REMOVE(&vclhead, vp, list); if (vp->state != VCL_STATE_LABEL) vcl_count--; if (vp->fname != NULL) { if (!MGT_DO_DEBUG(DBG_VCL_KEEP)) AZ(unlink(vp->fname)); p = strrchr(vp->fname, '/'); AN(p); *p = '\0'; VJ_master(JAIL_MASTER_FILE); /* * This will fail if any files are dropped next to the library * without us knowing. This happens for instance with GCOV. * Assume developers know how to clean up after themselves * (or alternatively: How to run out of disk space). */ (void)rmdir(vp->fname); VJ_master(JAIL_MASTER_LOW); free(vp->fname); } while (!VTAILQ_EMPTY(&vp->vmods)) { vd = VTAILQ_FIRST(&vp->vmods); CHECK_OBJ(vd, VMODDEP_MAGIC); vf = vd->to; CHECK_OBJ(vf, VMODFILE_MAGIC); VTAILQ_REMOVE(&vp->vmods, vd, lfrom); VTAILQ_REMOVE(&vf->vcls, vd, lto); FREE_OBJ(vd); if (VTAILQ_EMPTY(&vf->vcls)) { if (!MGT_DO_DEBUG(DBG_VMOD_SO_KEEP)) AZ(unlink(vf->fname)); VTAILQ_REMOVE(&vmodhead, vf, list); free(vf->fname); FREE_OBJ(vf); } } free(vp->name); FREE_OBJ(vp); } const char * mgt_has_vcl(void) { if (VTAILQ_EMPTY(&vclhead)) return ("No VCL loaded"); if (mgt_vcl_active == NULL) return ("No active VCL"); CHECK_OBJ_NOTNULL(mgt_vcl_active, VCLPROG_MAGIC); AN(mgt_vcl_active->warm); return (NULL); } /* * go_cold * * -1: leave alone * 0: timer not started - not currently used * >0: when timer started */ static void mgt_vcl_set_cooldown(struct vclprog *vp, vtim_mono now) { CHECK_OBJ_NOTNULL(vp, VCLPROG_MAGIC); if (vp == mgt_vcl_active || vp->state != VCL_STATE_AUTO || vp->warm == 0 || !VTAILQ_EMPTY(&vp->dto) || !VTAILQ_EMPTY(&vp->dfrom)) vp->go_cold = -1; else vp->go_cold = now; } static int mgt_vcl_settemp(struct cli *cli, struct vclprog *vp, unsigned warm) { int i; CHECK_OBJ_NOTNULL(vp, VCLPROG_MAGIC); if (warm == vp->warm) return (0); if (vp->state == VCL_STATE_AUTO || vp->state == VCL_STATE_LABEL) { mgt_vcl_set_cooldown(vp, -1); i = mgt_vcl_askchild(cli, vp, warm); mgt_vcl_set_cooldown(vp, VTIM_mono()); } else { i = mgt_vcl_setstate(cli, vp, warm ? 
VCL_STATE_WARM : VCL_STATE_COLD); } return (i); } static int mgt_vcl_requirewarm(struct cli *cli, struct vclprog *vp) { if (vp->state == VCL_STATE_COLD) { VCLI_SetResult(cli, CLIS_CANT); VCLI_Out(cli, "VCL '%s' is cold - set to auto or warm first", vp->name); return (1); } return (mgt_vcl_settemp(cli, vp, 1)); } static int mgt_vcl_askchild(struct cli *cli, struct vclprog *vp, unsigned warm) { unsigned status; char *p; int i; CHECK_OBJ_NOTNULL(vp, VCLPROG_MAGIC); if (!MCH_Running()) { vp->warm = warm; return (0); } i = mgt_cli_askchild(&status, &p, "vcl.state %s %d%s\n", vp->name, warm, vp->state->name); if (i && cli != NULL) { VCLI_SetResult(cli, status); VCLI_Out(cli, "%s", p); } else if (i) { MGT_Complain(C_ERR, "Please file ticket: VCL poker problem: " "'vcl.state %s %d%s' -> %03d '%s'", vp->name, warm, vp->state->name, i, p); } else { /* Success, update mgt's VCL state to reflect child's state */ vp->warm = warm; } free(p); return (i); } static int mgt_vcl_setstate(struct cli *cli, struct vclprog *vp, const struct vclstate *vs) { unsigned warm; int i; const struct vclstate *os; CHECK_OBJ_NOTNULL(vp, VCLPROG_MAGIC); assert(vs != VCL_STATE_LABEL); if (mcf_is_label(vp)) { AN(vp->warm); /* do not touch labels */ return (0); } if (vp->state == vs) return (0); os = vp->state; vp->state = vs; if (vp == mgt_vcl_active) { assert (vs == VCL_STATE_WARM || vs == VCL_STATE_AUTO); AN(vp->warm); warm = 1; } else if (vs == VCL_STATE_AUTO) { warm = vp->warm; } else { warm = (vs == VCL_STATE_WARM ? 1 : 0); } i = mgt_vcl_askchild(cli, vp, warm); if (i == 0) mgt_vcl_set_cooldown(vp, VTIM_mono()); else vp->state = os; return (i); } /*--------------------------------------------------------------------*/ static struct vclprog * mgt_new_vcl(struct cli *cli, const char *vclname, const char *vclsrc, const char *vclsrcfile, const char *state, int C_flag) { unsigned status; char *lib, *p; struct vclprog *vp; const struct vclstate *vs; AN(cli); if (vcl_count >= mgt_param.max_vcl && mgt_param.max_vcl_handling == 2) { VCLI_Out(cli, "Too many (%d) VCLs already loaded\n", vcl_count); VCLI_Out(cli, "(See max_vcl and max_vcl_handling parameters)"); VCLI_SetResult(cli, CLIS_CANT); return (NULL); } if (state == NULL) vs = VCL_STATE_AUTO; else vs = mcf_vcl_parse_state(cli, state); if (vs == NULL) return (NULL); vp = mgt_vcl_add(vclname, vs); lib = mgt_VccCompile(cli, vp, vclname, vclsrc, vclsrcfile, C_flag); if (lib == NULL) { mgt_vcl_del(vp); return (NULL); } AZ(C_flag); vp->fname = lib; if ((cli->result == CLIS_OK || cli->result == CLIS_TRUNCATED) && vcl_count > mgt_param.max_vcl && mgt_param.max_vcl_handling == 1) { VCLI_Out(cli, "%d VCLs loaded\n", vcl_count); VCLI_Out(cli, "Remember to vcl.discard the old/unused VCLs.\n"); VCLI_Out(cli, "(See max_vcl and max_vcl_handling parameters)"); } if (!MCH_Running()) return (vp); if (mgt_cli_askchild(&status, &p, "vcl.load %s %s %d%s\n", vp->name, vp->fname, vp->warm, vp->state->name)) { mgt_vcl_del(vp); VCLI_Out(cli, "%s", p); VCLI_SetResult(cli, status); free(p); return (NULL); } free(p); mgt_vcl_set_cooldown(vp, VTIM_mono()); return (vp); } /*--------------------------------------------------------------------*/ void mgt_vcl_startup(struct cli *cli, const char *vclsrc, const char *vclname, const char *origin, int C_flag) { char buf[20]; static int n = 0; struct vclprog *vp; AZ(MCH_Running()); AN(vclsrc); AN(origin); if (vclname == NULL) { bprintf(buf, "boot%d", n++); vclname = buf; } vp = mgt_new_vcl(cli, vclname, vclsrc, origin, NULL, C_flag); if (vp != NULL) { /* Last 
startup VCL becomes the automatically selected * active VCL. */ AN(vp->warm); mgt_vcl_active = vp; } } /*--------------------------------------------------------------------*/ int mgt_push_vcls(struct cli *cli, unsigned *status, char **p) { struct vclprog *vp; struct vcldep *vd; int done; AN(mgt_vcl_active); /* The VCL has not been loaded yet, it cannot fail */ (void)cli; VTAILQ_FOREACH(vp, &vclhead, list) vp->loaded = 0; do { done = 1; VTAILQ_FOREACH(vp, &vclhead, list) { if (vp->loaded) continue; VTAILQ_FOREACH(vd, &vp->dfrom, lfrom) if (!vd->to->loaded) break; if (vd != NULL) { done = 0; continue; } if (mcf_is_label(vp)) { vd = VTAILQ_FIRST(&vp->dfrom); AN(vd); if (mgt_cli_askchild(status, p, "vcl.label %s %s\n", vp->name, vd->to->name)) return (1); } else { if (mgt_cli_askchild(status, p, "vcl.load \"%s\" %s %d%s\n", vp->name, vp->fname, vp->warm, vp->state->name)) return (1); } vp->loaded = 1; free(*p); *p = NULL; } } while (!done); if (mgt_cli_askchild(status, p, "vcl.use \"%s\"\n", mgt_vcl_active->name)) { return (1); } free(*p); *p = NULL; return (0); } /*--------------------------------------------------------------------*/ static void v_matchproto_(cli_func_t) mcf_vcl_inline(struct cli *cli, const char * const *av, void *priv) { struct vclprog *vp; (void)priv; if (!mcf_find_no_vcl(cli, av[2])) return; vp = mgt_new_vcl(cli, av[2], av[3], "", av[4], 0); if (vp != NULL && !MCH_Running()) VCLI_Out(cli, "VCL compiled.\n"); } static void v_matchproto_(cli_func_t) mcf_vcl_load(struct cli *cli, const char * const *av, void *priv) { struct vclprog *vp; (void)priv; if (!mcf_find_no_vcl(cli, av[2])) return; vp = mgt_new_vcl(cli, av[2], NULL, av[3], av[4], 0); if (vp != NULL && !MCH_Running()) VCLI_Out(cli, "VCL compiled.\n"); } static void v_matchproto_(cli_func_t) mcf_vcl_state(struct cli *cli, const char * const *av, void *priv) { const struct vclstate *state; struct vclprog *vp; (void)priv; vp = mcf_find_vcl(cli, av[2]); if (vp == NULL) return; if (mcf_is_label(vp)) { VCLI_Out(cli, "Labels are always warm"); VCLI_SetResult(cli, CLIS_PARAM); return; } state = mcf_vcl_parse_state(cli, av[3]); if (state == NULL) return; if (state == VCL_STATE_COLD) { if (!VTAILQ_EMPTY(&vp->dto)) { assert(vp->state != VCL_STATE_COLD); VCLI_Out(cli, "A labeled VCL cannot be set cold"); VCLI_SetResult(cli, CLIS_CANT); return; } if (vp == mgt_vcl_active) { VCLI_Out(cli, "Cannot set the active VCL cold."); VCLI_SetResult(cli, CLIS_CANT); return; } } (void)mgt_vcl_setstate(cli, vp, state); } static void v_matchproto_(cli_func_t) mcf_vcl_use(struct cli *cli, const char * const *av, void *priv) { unsigned status; char *p = NULL; struct vclprog *vp, *vp2; vtim_mono now; (void)priv; vp = mcf_find_vcl(cli, av[2]); if (vp == NULL) return; if (vp == mgt_vcl_active) return; if (mgt_vcl_requirewarm(cli, vp)) return; if (MCH_Running() && mgt_cli_askchild(&status, &p, "vcl.use %s\n", av[2])) { VCLI_SetResult(cli, status); VCLI_Out(cli, "%s", p); } else { VCLI_Out(cli, "VCL '%s' now active", av[2]); vp2 = mgt_vcl_active; mgt_vcl_active = vp; now = VTIM_mono(); mgt_vcl_set_cooldown(vp, now); if (vp2 != NULL) mgt_vcl_set_cooldown(vp2, now); } free(p); } static void mgt_vcl_discard(struct cli *cli, struct vclprog *vp) { char *p = NULL; unsigned status; AN(vp); AN(vp->discard); assert(vp != mgt_vcl_active); while (!VTAILQ_EMPTY(&vp->dto)) mgt_vcl_discard(cli, VTAILQ_FIRST(&vp->dto)->from); if (mcf_is_label(vp)) { AN(vp->warm); vp->warm = 0; } else { (void)mgt_vcl_setstate(cli, vp, VCL_STATE_COLD); } if (MCH_Running()) { 
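		/*
		 * The VCL was made cold (or the label's warm flag cleared)
		 * above, so the child must agree it is no longer warm; if
		 * the CLI call fails, only CLIS_OK or CLIS_COMMS (child
		 * gone) is acceptable.
		 */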
AZ(vp->warm); if (mgt_cli_askchild(&status, &p, "vcl.discard %s\n", vp->name)) assert(status == CLIS_OK || status == CLIS_COMMS); free(p); } VTAILQ_REMOVE(&discardhead, vp, discard_list); mgt_vcl_del(vp); } static int mgt_vcl_discard_mark(struct cli *cli, const char *glob) { struct vclprog *vp; unsigned marked = 0; VTAILQ_FOREACH(vp, &vclhead, list) { if (fnmatch(glob, vp->name, 0)) continue; if (vp == mgt_vcl_active) { VCLI_SetResult(cli, CLIS_CANT); VCLI_Out(cli, "Cannot discard active VCL program %s\n", vp->name); return (-1); } if (!vp->discard) VTAILQ_INSERT_TAIL(&discardhead, vp, discard_list); vp->discard = 1; marked++; } if (marked == 0) { VCLI_SetResult(cli, CLIS_PARAM); VCLI_Out(cli, "No VCL name matches %s\n", glob); } return (marked); } static void mgt_vcl_discard_depfail(struct cli *cli, struct vclprog *vp) { struct vcldep *vd; int n; AN(cli); AN(vp); assert(!VTAILQ_EMPTY(&vp->dto)); VCLI_SetResult(cli, CLIS_CANT); AN(vp->warm); if (!mcf_is_label(vp)) VCLI_Out(cli, "Cannot discard labeled VCL program %s:\n", vp->name); else VCLI_Out(cli, "Cannot discard VCL label %s, other VCLs depend on it:\n", vp->name); n = 0; VTAILQ_FOREACH(vd, &vp->dto, lto) { if (n++ == 5) { VCLI_Out(cli, "\t[...]"); break; } VCLI_Out(cli, "\t%s\n", vd->from->name); } } static int mgt_vcl_discard_depcheck(struct cli *cli) { struct vclprog *vp; struct vcldep *vd; VTAILQ_FOREACH(vp, &discardhead, discard_list) { VTAILQ_FOREACH(vd, &vp->dto, lto) if (!vd->from->discard) { mgt_vcl_discard_depfail(cli, vp); return (-1); } } return (0); } static void mgt_vcl_discard_clear(void) { struct vclprog *vp, *vp2; VTAILQ_FOREACH_SAFE(vp, &discardhead, discard_list, vp2) { AN(vp->discard); vp->discard = 0; VTAILQ_REMOVE(&discardhead, vp, discard_list); } } static void v_matchproto_(cli_func_t) mcf_vcl_discard(struct cli *cli, const char * const *av, void *priv) { const struct vclprog *vp; (void)priv; assert(VTAILQ_EMPTY(&discardhead)); VTAILQ_FOREACH(vp, &vclhead, list) AZ(vp->discard); for (av += 2; *av != NULL; av++) { if (mgt_vcl_discard_mark(cli, *av) <= 0) { mgt_vcl_discard_clear(); break; } } if (mgt_vcl_discard_depcheck(cli) != 0) mgt_vcl_discard_clear(); while (!VTAILQ_EMPTY(&discardhead)) mgt_vcl_discard(cli, VTAILQ_FIRST(&discardhead)); } static void v_matchproto_(cli_func_t) mcf_vcl_list(struct cli *cli, const char * const *av, void *priv) { unsigned status; char *p; struct vclprog *vp; struct vcldep *vd; const struct vclstate *vs; struct vte *vte; /* NB: Shall generate same output as vcl_cli_list() */ (void)av; (void)priv; if (MCH_Running()) { if (!mgt_cli_askchild(&status, &p, "vcl.list\n")) { VCLI_SetResult(cli, status); VCLI_Out(cli, "%s", p); } free(p); } else { vte = VTE_new(7, 80); AN(vte); VTAILQ_FOREACH(vp, &vclhead, list) { VTE_printf(vte, "%s", vp == mgt_vcl_active ? "active" : "available"); vs = vp->warm ? VCL_STATE_WARM : VCL_STATE_COLD; VTE_printf(vte, "\t%s\t%s", vp->state->name, vs->name); VTE_printf(vte, "\t-\t%s", vp->name); if (mcf_is_label(vp)) { vd = VTAILQ_FIRST(&vp->dfrom); AN(vd); VTE_printf(vte, "\t->\t%s", vd->to->name); if (vp->nto > 0) VTE_printf(vte, " (%d return(vcl)%s)", vp->nto, vp->nto > 1 ? "'s" : ""); } else if (vp->nto > 0) { VTE_printf(vte, "\t<-\t(%d label%s)", vp->nto, vp->nto > 1 ? 
"s" : ""); } VTE_cat(vte, "\n"); } AZ(VTE_finish(vte)); AZ(VTE_format(vte, VCLI_VTE_format, cli)); VTE_destroy(&vte); } } static void v_matchproto_(cli_func_t) mcf_vcl_list_json(struct cli *cli, const char * const *av, void *priv) { unsigned status; char *p; struct vclprog *vp; struct vcldep *vd; const struct vclstate *vs; /* NB: Shall generate same output as vcl_cli_list() */ (void)priv; if (MCH_Running()) { if (!mgt_cli_askchild(&status, &p, "vcl.list -j\n")) { VCLI_SetResult(cli, status); VCLI_Out(cli, "%s", p); } free(p); } else { VCLI_JSON_begin(cli, 2, av); VTAILQ_FOREACH(vp, &vclhead, list) { VCLI_Out(cli, ",\n"); VCLI_Out(cli, "{\n"); VSB_indent(cli->sb, 2); VCLI_Out(cli, "\"status\": \"%s\",\n", vp == mgt_vcl_active ? "active" : "available"); VCLI_Out(cli, "\"state\": \"%s\",\n", vp->state->name); vs = vp->warm ? VCL_STATE_WARM : VCL_STATE_COLD; VCLI_Out(cli, "\"temperature\": \"%s\",\n", vs->name); VCLI_Out(cli, "\"name\": \"%s\"", vp->name); if (mcf_is_label(vp)) { vd = VTAILQ_FIRST(&vp->dfrom); AN(vd); VCLI_Out(cli, ",\n"); VCLI_Out(cli, "\"label\": {\n"); VSB_indent(cli->sb, 2); VCLI_Out(cli, "\"name\": \"%s\"", vd->to->name); if (vp->nto > 0) VCLI_Out(cli, ",\n\"refs\": %d", vp->nto); VSB_indent(cli->sb, -2); VCLI_Out(cli, "\n"); VCLI_Out(cli, "}"); } else if (vp->nto > 0) { VCLI_Out(cli, ",\n"); VCLI_Out(cli, "\"labels\": %d", vp->nto); } VSB_indent(cli->sb, -2); VCLI_Out(cli, "\n}"); } VCLI_JSON_end(cli); } } static int mcf_vcl_check_dag(struct cli *cli, struct vclprog *tree, struct vclprog *target) { struct vcldep *vd; if (target == tree) return (1); VTAILQ_FOREACH(vd, &tree->dfrom, lfrom) { if (!mcf_vcl_check_dag(cli, vd->to, target)) continue; if (mcf_is_label(tree)) VCLI_Out(cli, "Label %s points to vcl %s\n", tree->name, vd->to->name); else VCLI_Out(cli, "Vcl %s uses Label %s\n", tree->name, vd->to->name); return (1); } return (0); } static void v_matchproto_(cli_func_t) mcf_vcl_label(struct cli *cli, const char * const *av, void *priv) { struct vclprog *vpl; struct vclprog *vpt; unsigned status; char *p; int i; (void)priv; if (mcf_invalid_vclname(cli, av[2])) return; vpl = mcf_vcl_byname(av[2]); if (vpl != NULL && !mcf_is_label(vpl)) { VCLI_SetResult(cli, CLIS_PARAM); VCLI_Out(cli, "%s is not a label", vpl->name); return; } vpt = mcf_find_vcl(cli, av[3]); if (vpt == NULL) return; if (mcf_is_label(vpt)) { VCLI_SetResult(cli, CLIS_CANT); VCLI_Out(cli, "VCL labels cannot point to labels"); return; } if (vpl != NULL) { if (VTAILQ_FIRST(&vpl->dfrom)->to != vpt && mcf_vcl_check_dag(cli, vpt, vpl)) { VCLI_Out(cli, "Pointing label %s at %s would create a loop", vpl->name, vpt->name); VCLI_SetResult(cli, CLIS_PARAM); return; } } if (mgt_vcl_requirewarm(cli, vpt)) return; if (vpl != NULL) { /* potentially fail before we delete the dependency */ if (mgt_vcl_requirewarm(cli, vpl)) return; mgt_vcl_dep_del(VTAILQ_FIRST(&vpl->dfrom)); AN(VTAILQ_EMPTY(&vpl->dfrom)); } else { vpl = mgt_vcl_add(av[2], VCL_STATE_LABEL); } AN(vpl); if (mgt_vcl_requirewarm(cli, vpl)) return; (void)mgt_vcl_dep_add(vpl, vpt); if (!MCH_Running()) return; i = mgt_cli_askchild(&status, &p, "vcl.label %s %s\n", av[2], av[3]); if (i) { VCLI_SetResult(cli, status); VCLI_Out(cli, "%s", p); } free(p); } static void v_matchproto_(cli_func_t) mcf_vcl_deps(struct cli *cli, const char * const *av, void *priv) { struct vclprog *vp; struct vcldep *vd; struct vte *vte; (void)av; (void)priv; vte = VTE_new(2, 80); AN(vte); VTAILQ_FOREACH(vp, &vclhead, list) { if (VTAILQ_EMPTY(&vp->dfrom)) { VTE_printf(vte, "%s\n", 
vp->name); continue; } VTAILQ_FOREACH(vd, &vp->dfrom, lfrom) VTE_printf(vte, "%s\t%s\n", vp->name, vd->to->name); } AZ(VTE_finish(vte)); AZ(VTE_format(vte, VCLI_VTE_format, cli)); VTE_destroy(&vte); } static void v_matchproto_(cli_func_t) mcf_vcl_deps_json(struct cli *cli, const char * const *av, void *priv) { struct vclprog *vp; struct vcldep *vd; const char *sepd, *sepa; (void)priv; VCLI_JSON_begin(cli, 1, av); VTAILQ_FOREACH(vp, &vclhead, list) { VCLI_Out(cli, ",\n"); VCLI_Out(cli, "{\n"); VSB_indent(cli->sb, 2); VCLI_Out(cli, "\"name\": \"%s\",\n", vp->name); VCLI_Out(cli, "\"deps\": ["); VSB_indent(cli->sb, 2); sepd = ""; sepa = ""; VTAILQ_FOREACH(vd, &vp->dfrom, lfrom) { VCLI_Out(cli, "%s\n", sepd); VCLI_Out(cli, "\"%s\"", vd->to->name); sepd = ","; sepa = "\n"; } VSB_indent(cli->sb, -2); VCLI_Out(cli, "%s", sepa); VCLI_Out(cli, "]\n"); VSB_indent(cli->sb, -2); VCLI_Out(cli, "}"); } VCLI_JSON_end(cli); } /*--------------------------------------------------------------------*/ static int v_matchproto_(vev_cb_f) mgt_vcl_poker(const struct vev *e, int what) { struct vclprog *vp; vtim_mono now; (void)e; (void)what; e_poker->timeout = mgt_param.vcl_cooldown * .45; now = VTIM_mono(); VTAILQ_FOREACH(vp, &vclhead, list) { if (vp->go_cold == 0) mgt_vcl_set_cooldown(vp, now); else if (vp->go_cold > 0 && vp->go_cold + mgt_param.vcl_cooldown < now) (void)mgt_vcl_settemp(NULL, vp, 0); } return (0); } /*--------------------------------------------------------------------*/ static struct cli_proto cli_vcl[] = { { CLICMD_VCL_LOAD, "", mcf_vcl_load }, { CLICMD_VCL_INLINE, "", mcf_vcl_inline }, { CLICMD_VCL_USE, "", mcf_vcl_use }, { CLICMD_VCL_STATE, "", mcf_vcl_state }, { CLICMD_VCL_DISCARD, "", mcf_vcl_discard }, { CLICMD_VCL_LIST, "", mcf_vcl_list, mcf_vcl_list_json }, { CLICMD_VCL_DEPS, "", mcf_vcl_deps, mcf_vcl_deps_json }, { CLICMD_VCL_LABEL, "", mcf_vcl_label }, { CLICMD_DEBUG_VCL_SYMTAB, "d", mcf_vcl_symtab }, { NULL } }; /*--------------------------------------------------------------------*/ static void mgt_vcl_atexit(void) { struct vclprog *vp, *vp2; if (getpid() != heritage.mgt_pid) return; mgt_vcl_active = NULL; while (!VTAILQ_EMPTY(&vclhead)) VTAILQ_FOREACH_SAFE(vp, &vclhead, list, vp2) if (VTAILQ_EMPTY(&vp->dto)) mgt_vcl_del(vp); } void mgt_vcl_init(void) { e_poker = VEV_Alloc(); AN(e_poker); e_poker->timeout = 3; // random, prime e_poker->callback = mgt_vcl_poker; e_poker->name = "vcl poker"; AZ(VEV_Start(mgt_evb, e_poker)); AZ(atexit(mgt_vcl_atexit)); VCLS_AddFunc(mgt_cls, MCF_AUTH, cli_vcl); } varnish-7.5.0/bin/varnishd/mgt/mgt_vcl.h000066400000000000000000000055421457605730600202010ustar00rootroot00000000000000/*- * Copyright (c) 2019 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * VCL/VMOD symbol table */ struct vclprog; struct vmodfile; struct vjsn_val; struct vclstate; struct vmoddep { unsigned magic; #define VMODDEP_MAGIC 0xc1490542 VTAILQ_ENTRY(vmoddep) lfrom; struct vmodfile *to; VTAILQ_ENTRY(vmoddep) lto; }; struct vcldep { unsigned magic; #define VCLDEP_MAGIC 0xa9a17dc2 struct vclprog *from; VTAILQ_ENTRY(vcldep) lfrom; struct vclprog *to; VTAILQ_ENTRY(vcldep) lto; const struct vjsn_val *vj; }; struct vclprog { unsigned magic; #define VCLPROG_MAGIC 0x9ac09fea VTAILQ_ENTRY(vclprog) list; char *name; char *fname; unsigned warm; const struct vclstate *state; double go_cold; struct vjsn *symtab; VTAILQ_HEAD(, vcldep) dfrom; VTAILQ_HEAD(, vcldep) dto; int nto; int loaded; VTAILQ_HEAD(, vmoddep) vmods; unsigned discard; VTAILQ_ENTRY(vclprog) discard_list; }; struct vmodfile { unsigned magic; #define VMODFILE_MAGIC 0xffa1a0d5 char *fname; VTAILQ_ENTRY(vmodfile) list; VTAILQ_HEAD(, vmoddep) vcls; }; extern VTAILQ_HEAD(vclproghead, vclprog) vclhead; extern VTAILQ_HEAD(vmodfilehead, vmodfile) vmodhead; struct vclprog *mcf_vcl_byname(const char *name); struct vcldep *mgt_vcl_dep_add(struct vclprog *vp_from, struct vclprog *vp_to); int mcf_is_label(const struct vclprog *vp); void mgt_vcl_symtab_clean(struct vclprog *vp); void mgt_vcl_symtab(struct vclprog *vp, const char *input); void mcf_vcl_symtab(struct cli *cli, const char * const *av, void *priv); varnish-7.5.0/bin/varnishd/proxy/000077500000000000000000000000001457605730600167615ustar00rootroot00000000000000varnish-7.5.0/bin/varnishd/proxy/cache_proxy.h000066400000000000000000000037561457605730600214510ustar00rootroot00000000000000/*- * Copyright (c) 2018 GANDI SAS * All rights reserved. * * Author: Emmanuel Hocdet * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #define PP2_TYPE_ALPN 0x01 #define PP2_TYPE_AUTHORITY 0x02 #define PP2_TYPE_CRC32C 0x03 #define PP2_TYPE_NOOP 0x04 #define PP2_TYPE_SSL 0x20 #define PP2_SUBTYPE_SSL_VERSION 0x21 #define PP2_SUBTYPE_SSL_CN 0x22 #define PP2_SUBTYPE_SSL_CIPHER 0x23 #define PP2_SUBTYPE_SSL_SIG_ALG 0x24 #define PP2_SUBTYPE_SSL_KEY_ALG 0x25 #define PP2_SUBTYPE_SSL_MAX 0x25 int VPX_tlv(const struct req *req, int tlv, const void **dst, int *len); void VPX_Format_Proxy(struct vsb *, int, const struct suckaddr *, const struct suckaddr *, const char *); varnish-7.5.0/bin/varnishd/proxy/cache_proxy_proto.c000066400000000000000000000473631457605730600226710ustar00rootroot00000000000000/*- * Copyright (c) 2015 Varnish Software AS * Copyright (c) 2018 GANDI SAS * All rights reserved. * * Authors: Poul-Henning Kamp * Emmanuel Hocdet * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include #include #include "cache/cache_varnishd.h" #include "cache/cache_transport.h" #include "proxy/cache_proxy.h" #include "vend.h" #include "vsa.h" #include "vss.h" #include "vtcp.h" // max. PROXY payload length (excl. sig) - XXX parameter? 
#define VPX_MAX_LEN 1024 struct vpx_tlv { unsigned magic; #define VPX_TLV_MAGIC 0xdeb9a4a5 unsigned len; char tlv[]; }; static inline int vpx_ws_err(const struct req *req) { VSL(SLT_Error, req->sp->vxid, "insufficient workspace"); return (-1); } /********************************************************************** * PROXY 1 protocol */ static const char vpx1_sig[] = {'P', 'R', 'O', 'X', 'Y'}; static int vpx_proto1(const struct worker *wrk, const struct req *req) { const char *fld[5]; int i; char *p, *q; struct suckaddr *sa; ssize_t sz; int pfam = -1; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->sp, SESS_MAGIC); q = memchr(req->htc->rxbuf_b, '\r', req->htc->rxbuf_e - req->htc->rxbuf_b); if (q == NULL) return (-1); *q++ = '\0'; /* Nuke the CRLF */ if (q == req->htc->rxbuf_e || *q != '\n') return (-1); *q++ = '\0'; /* Split the fields */ p = req->htc->rxbuf_b; for (i = 0; i < 5; i++) { p = strchr(p, ' '); if (p == NULL) { VSL(SLT_ProxyGarbage, req->sp->vxid, "PROXY1: Too few fields"); return (-1); } *p++ = '\0'; fld[i] = p; } if (strchr(p, ' ')) { VSL(SLT_ProxyGarbage, req->sp->vxid, "PROXY1: Too many fields"); return (-1); } if (!strcmp(fld[0], "TCP4")) pfam = PF_INET; else if (!strcmp(fld[0], "TCP6")) pfam = PF_INET6; else { VSL(SLT_ProxyGarbage, req->sp->vxid, "PROXY1: Wrong TCP[46] field"); return (-1); } if (! SES_Reserve_client_addr(req->sp, &sa, &sz)) return (vpx_ws_err(req)); assert (sz == vsa_suckaddr_len); if (VSS_ResolveOne(sa, fld[1], fld[3], pfam, SOCK_STREAM, AI_NUMERICHOST | AI_NUMERICSERV) == NULL) { VSL(SLT_ProxyGarbage, req->sp->vxid, "PROXY1: Cannot resolve source address"); return (-1); } if (! SES_Set_String_Attr(req->sp, SA_CLIENT_IP, fld[1])) return (vpx_ws_err(req)); if (! SES_Set_String_Attr(req->sp, SA_CLIENT_PORT, fld[3])) return (vpx_ws_err(req)); if (! 
SES_Reserve_server_addr(req->sp, &sa, &sz)) return (vpx_ws_err(req)); assert (sz == vsa_suckaddr_len); if (VSS_ResolveOne(sa, fld[2], fld[4], pfam, SOCK_STREAM, AI_NUMERICHOST | AI_NUMERICSERV) == NULL) { VSL(SLT_ProxyGarbage, req->sp->vxid, "PROXY1: Cannot resolve destination address"); return (-1); } VSL(SLT_Proxy, req->sp->vxid, "1 %s %s %s %s", fld[1], fld[3], fld[2], fld[4]); HTC_RxPipeline(req->htc, q); return (0); } /********************************************************************** * PROXY 2 protocol */ static const char vpx2_sig[] = { '\r', '\n', '\r', '\n', '\0', '\r', '\n', 'Q', 'U', 'I', 'T', '\n', }; static const uint32_t crctable[256] = { 0x00000000L, 0xF26B8303L, 0xE13B70F7L, 0x1350F3F4L, 0xC79A971FL, 0x35F1141CL, 0x26A1E7E8L, 0xD4CA64EBL, 0x8AD958CFL, 0x78B2DBCCL, 0x6BE22838L, 0x9989AB3BL, 0x4D43CFD0L, 0xBF284CD3L, 0xAC78BF27L, 0x5E133C24L, 0x105EC76FL, 0xE235446CL, 0xF165B798L, 0x030E349BL, 0xD7C45070L, 0x25AFD373L, 0x36FF2087L, 0xC494A384L, 0x9A879FA0L, 0x68EC1CA3L, 0x7BBCEF57L, 0x89D76C54L, 0x5D1D08BFL, 0xAF768BBCL, 0xBC267848L, 0x4E4DFB4BL, 0x20BD8EDEL, 0xD2D60DDDL, 0xC186FE29L, 0x33ED7D2AL, 0xE72719C1L, 0x154C9AC2L, 0x061C6936L, 0xF477EA35L, 0xAA64D611L, 0x580F5512L, 0x4B5FA6E6L, 0xB93425E5L, 0x6DFE410EL, 0x9F95C20DL, 0x8CC531F9L, 0x7EAEB2FAL, 0x30E349B1L, 0xC288CAB2L, 0xD1D83946L, 0x23B3BA45L, 0xF779DEAEL, 0x05125DADL, 0x1642AE59L, 0xE4292D5AL, 0xBA3A117EL, 0x4851927DL, 0x5B016189L, 0xA96AE28AL, 0x7DA08661L, 0x8FCB0562L, 0x9C9BF696L, 0x6EF07595L, 0x417B1DBCL, 0xB3109EBFL, 0xA0406D4BL, 0x522BEE48L, 0x86E18AA3L, 0x748A09A0L, 0x67DAFA54L, 0x95B17957L, 0xCBA24573L, 0x39C9C670L, 0x2A993584L, 0xD8F2B687L, 0x0C38D26CL, 0xFE53516FL, 0xED03A29BL, 0x1F682198L, 0x5125DAD3L, 0xA34E59D0L, 0xB01EAA24L, 0x42752927L, 0x96BF4DCCL, 0x64D4CECFL, 0x77843D3BL, 0x85EFBE38L, 0xDBFC821CL, 0x2997011FL, 0x3AC7F2EBL, 0xC8AC71E8L, 0x1C661503L, 0xEE0D9600L, 0xFD5D65F4L, 0x0F36E6F7L, 0x61C69362L, 0x93AD1061L, 0x80FDE395L, 0x72966096L, 0xA65C047DL, 0x5437877EL, 0x4767748AL, 0xB50CF789L, 0xEB1FCBADL, 0x197448AEL, 0x0A24BB5AL, 0xF84F3859L, 0x2C855CB2L, 0xDEEEDFB1L, 0xCDBE2C45L, 0x3FD5AF46L, 0x7198540DL, 0x83F3D70EL, 0x90A324FAL, 0x62C8A7F9L, 0xB602C312L, 0x44694011L, 0x5739B3E5L, 0xA55230E6L, 0xFB410CC2L, 0x092A8FC1L, 0x1A7A7C35L, 0xE811FF36L, 0x3CDB9BDDL, 0xCEB018DEL, 0xDDE0EB2AL, 0x2F8B6829L, 0x82F63B78L, 0x709DB87BL, 0x63CD4B8FL, 0x91A6C88CL, 0x456CAC67L, 0xB7072F64L, 0xA457DC90L, 0x563C5F93L, 0x082F63B7L, 0xFA44E0B4L, 0xE9141340L, 0x1B7F9043L, 0xCFB5F4A8L, 0x3DDE77ABL, 0x2E8E845FL, 0xDCE5075CL, 0x92A8FC17L, 0x60C37F14L, 0x73938CE0L, 0x81F80FE3L, 0x55326B08L, 0xA759E80BL, 0xB4091BFFL, 0x466298FCL, 0x1871A4D8L, 0xEA1A27DBL, 0xF94AD42FL, 0x0B21572CL, 0xDFEB33C7L, 0x2D80B0C4L, 0x3ED04330L, 0xCCBBC033L, 0xA24BB5A6L, 0x502036A5L, 0x4370C551L, 0xB11B4652L, 0x65D122B9L, 0x97BAA1BAL, 0x84EA524EL, 0x7681D14DL, 0x2892ED69L, 0xDAF96E6AL, 0xC9A99D9EL, 0x3BC21E9DL, 0xEF087A76L, 0x1D63F975L, 0x0E330A81L, 0xFC588982L, 0xB21572C9L, 0x407EF1CAL, 0x532E023EL, 0xA145813DL, 0x758FE5D6L, 0x87E466D5L, 0x94B49521L, 0x66DF1622L, 0x38CC2A06L, 0xCAA7A905L, 0xD9F75AF1L, 0x2B9CD9F2L, 0xFF56BD19L, 0x0D3D3E1AL, 0x1E6DCDEEL, 0xEC064EEDL, 0xC38D26C4L, 0x31E6A5C7L, 0x22B65633L, 0xD0DDD530L, 0x0417B1DBL, 0xF67C32D8L, 0xE52CC12CL, 0x1747422FL, 0x49547E0BL, 0xBB3FFD08L, 0xA86F0EFCL, 0x5A048DFFL, 0x8ECEE914L, 0x7CA56A17L, 0x6FF599E3L, 0x9D9E1AE0L, 0xD3D3E1ABL, 0x21B862A8L, 0x32E8915CL, 0xC083125FL, 0x144976B4L, 0xE622F5B7L, 0xF5720643L, 0x07198540L, 0x590AB964L, 0xAB613A67L, 0xB831C993L, 0x4A5A4A90L, 0x9E902E7BL, 0x6CFBAD78L, 
0x7FAB5E8CL, 0x8DC0DD8FL, 0xE330A81AL, 0x115B2B19L, 0x020BD8EDL, 0xF0605BEEL, 0x24AA3F05L, 0xD6C1BC06L, 0xC5914FF2L, 0x37FACCF1L, 0x69E9F0D5L, 0x9B8273D6L, 0x88D28022L, 0x7AB90321L, 0xAE7367CAL, 0x5C18E4C9L, 0x4F48173DL, 0xBD23943EL, 0xF36E6F75L, 0x0105EC76L, 0x12551F82L, 0xE03E9C81L, 0x34F4F86AL, 0xC69F7B69L, 0xD5CF889DL, 0x27A40B9EL, 0x79B737BAL, 0x8BDCB4B9L, 0x988C474DL, 0x6AE7C44EL, 0xBE2DA0A5L, 0x4C4623A6L, 0x5F16D052L, 0xAD7D5351L }; static uint32_t crc32c(const uint8_t *buf, int len) { uint32_t crc = 0xffffffff; while (len-- > 0) { crc = (crc >> 8) ^ crctable[(crc ^ (*buf++)) & 0xff]; } return (crc ^ 0xffffffff); } struct vpx_tlv_iter { uint8_t t; void *p; uint16_t l; const char *e; unsigned char *_p; uint16_t _l; }; static void vpx_tlv_iter0(struct vpx_tlv_iter *vpi, void *p, unsigned l) { AN(p); assert(l < 65536); memset(vpi, 0, sizeof *vpi); vpi->_p = p; vpi->_l = l; } static int vpx_tlv_itern(struct vpx_tlv_iter *vpi) { if (vpi->_l == 0 || vpi->e != NULL) return (0); if (vpi->_l < 3) { vpi->e = "Dribble bytes"; return (0); } vpi->t = *vpi->_p; vpi->l = vbe16dec(vpi->_p + 1); if (vpi->l + 3 > vpi->_l) { vpi->e = "Length Error"; return (0); } vpi->p = vpi->_p + 3; vpi->_p += 3 + vpi->l; vpi->_l -= 3 + vpi->l; return (1); } #define VPX_TLV_FOREACH(ptr, len, itv) \ for (vpx_tlv_iter0(itv, ptr, len); \ (vpi->e == NULL) && vpx_tlv_itern(itv);) int VPX_tlv(const struct req *req, int typ, const void **dst, int *len) { struct vpx_tlv *tlv; struct vpx_tlv_iter vpi[1], vpi2[1]; uintptr_t *up; CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->sp, SESS_MAGIC); AN(dst); AN(len); *dst = NULL; *len = 0; if (SES_Get_proxy_tlv(req->sp, &up) != 0 || *up == 0) return (-1); CAST_OBJ_NOTNULL(tlv, (void*)(*up), VPX_TLV_MAGIC); VPX_TLV_FOREACH(tlv->tlv, tlv->len, vpi) { if (vpi->t == typ) { *dst = vpi->p; *len = vpi->l; return (0); } if (vpi->t != PP2_TYPE_SSL) continue; VPX_TLV_FOREACH((char*)vpi->p + 5, vpi->l - 5, vpi2) { if (vpi2->t == typ) { *dst = vpi2->p; *len = vpi2->l; return (0); } } } return (-1); } static int vpx_proto2(const struct worker *wrk, const struct req *req) { uintptr_t *up; uint16_t tlv_len; const uint8_t *p, *ap, *pp; char *d, *tlv_start; sa_family_t pfam = 0xff; struct suckaddr *sa = NULL; ssize_t sz; char ha[VTCP_ADDRBUFSIZE]; char pa[VTCP_PORTBUFSIZE]; char hb[VTCP_ADDRBUFSIZE]; char pb[VTCP_PORTBUFSIZE]; struct vpx_tlv_iter vpi[1], vpi2[1]; struct vpx_tlv *tlv; uint16_t l; unsigned hdr_len, flen, alen; unsigned const plen = 2, aoff = 16; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(req, REQ_MAGIC); CHECK_OBJ_NOTNULL(req->sp, SESS_MAGIC); assert(req->htc->rxbuf_e - req->htc->rxbuf_b >= 16L); l = vbe16dec(req->htc->rxbuf_b + 14); assert(l <= VPX_MAX_LEN); // vpx_complete() hdr_len = l + 16L; assert(req->htc->rxbuf_e >= req->htc->rxbuf_b + hdr_len); HTC_RxPipeline(req->htc, req->htc->rxbuf_b + hdr_len); p = (const void *)req->htc->rxbuf_b; d = req->htc->rxbuf_b + 16L; /* Version @12 top half */ if ((p[12] >> 4) != 2) { VSL(SLT_ProxyGarbage, req->sp->vxid, "PROXY2: bad version (%d)", p[12] >> 4); return (-1); } /* Command @12 bottom half */ switch (p[12] & 0x0f) { case 0x0: VSL(SLT_Proxy, req->sp->vxid, "2 local local local local"); return (0); case 0x1: /* Proxied connection */ break; default: VSL(SLT_ProxyGarbage, req->sp->vxid, "PROXY2: bad command (%d)", p[12] & 0x0f); return (-1); } /* Address family & protocol @13 */ switch (p[13]) { case 0x00: /* UNSPEC|UNSPEC, ignore proxy header */ VSL(SLT_ProxyGarbage, req->sp->vxid, "PROXY2: Ignoring UNSPEC|UNSPEC 
addresses"); return (0); case 0x11: /* IPv4|TCP */ pfam = AF_INET; alen = 4; break; case 0x21: /* IPv6|TCP */ pfam = AF_INET6; alen = 16; break; default: /* Ignore proxy header */ VSL(SLT_ProxyGarbage, req->sp->vxid, "PROXY2: Ignoring unsupported protocol (0x%02x)", p[13]); return (0); } flen = 2 * alen + 2 * plen; if (l < flen) { VSL(SLT_ProxyGarbage, req->sp->vxid, "PROXY2: Ignoring short %s addresses (%u)", pfam == AF_INET ? "IPv4" : "IPv6", l); return (0); } l -= flen; d += flen; ap = p + aoff; pp = ap + 2 * alen; /* src/client */ if (! SES_Reserve_client_addr(req->sp, &sa, &sz)) return (vpx_ws_err(req)); assert(sz == vsa_suckaddr_len); AN(VSA_BuildFAP(sa, pfam, ap, alen, pp, plen)); VTCP_name(sa, hb, sizeof hb, pb, sizeof pb); ap += alen; pp += plen; /* dst/server */ if (! SES_Reserve_server_addr(req->sp, &sa, &sz)) return (vpx_ws_err(req)); assert(sz == vsa_suckaddr_len); AN(VSA_BuildFAP(sa, pfam, ap, alen, pp, plen)); VTCP_name(sa, ha, sizeof ha, pa, sizeof pa); if (! SES_Set_String_Attr(req->sp, SA_CLIENT_IP, hb)) return (vpx_ws_err(req)); if (! SES_Set_String_Attr(req->sp, SA_CLIENT_PORT, pb)) return (vpx_ws_err(req)); VSL(SLT_Proxy, req->sp->vxid, "2 %s %s %s %s", hb, pb, ha, pa); tlv_start = d; tlv_len = l; VPX_TLV_FOREACH(d, l, vpi) { if (vpi->t == PP2_TYPE_SSL) { if (vpi->l < 5) { vpi->e = "Length Error"; break; } VPX_TLV_FOREACH((char*)vpi->p + 5, vpi->l - 5, vpi2) { } vpi->e = vpi2->e; } else if (vpi->t == PP2_TYPE_CRC32C) { uint32_t n_crc32c = vbe32dec(vpi->p); vbe32enc(vpi->p, 0); if (crc32c(p, hdr_len) != n_crc32c) { VSL(SLT_ProxyGarbage, req->sp->vxid, "PROXY2: CRC error"); return (-1); } } } if (vpi->e != NULL) { VSL(SLT_ProxyGarbage, req->sp->vxid, "PROXY2: TLV %s", vpi->e); return (-1); } tlv = WS_Alloc(req->sp->ws, sizeof *tlv + tlv_len); if (tlv == NULL) return (vpx_ws_err(req)); INIT_OBJ(tlv, VPX_TLV_MAGIC); tlv->len = tlv_len; memcpy(tlv->tlv, tlv_start, tlv_len); if (! 
SES_Reserve_proxy_tlv(req->sp, &up, &sz)) return (vpx_ws_err(req)); assert(sz == sizeof up); *up = (uintptr_t)tlv; return (0); } /********************************************************************** * HTC_Rx completion detector */ static enum htc_status_e v_matchproto_(htc_complete_f) vpx_complete(struct http_conn *htc) { size_t z, l; uint16_t j; char *p, *q; CHECK_OBJ_NOTNULL(htc, HTTP_CONN_MAGIC); AN(WS_Reservation(htc->ws)); assert(pdiff(htc->rxbuf_b, htc->rxbuf_e) <= WS_ReservationSize(htc->ws)); l = htc->rxbuf_e - htc->rxbuf_b; p = htc->rxbuf_b; j = 0x3; for (z = 0; z < l; z++) { if (z < sizeof vpx1_sig && p[z] != vpx1_sig[z]) j &= ~1; if (z < sizeof vpx2_sig && p[z] != vpx2_sig[z]) j &= ~2; if (j == 0) return (HTC_S_JUNK); if (j == 1 && z == sizeof vpx1_sig) { q = memchr(p + z, '\n', htc->rxbuf_e - (p + z)); if (q != NULL && (q - htc->rxbuf_b) > 107) return (HTC_S_OVERFLOW); if (q == NULL) return (HTC_S_MORE); return (HTC_S_COMPLETE); } if (j == 2 && z == sizeof vpx2_sig) { if (l < 16) return (HTC_S_MORE); j = vbe16dec(p + 14); if (j > VPX_MAX_LEN) return (HTC_S_OVERFLOW); if (l < 16L + j) return (HTC_S_MORE); return (HTC_S_COMPLETE); } } return (HTC_S_MORE); } static void v_matchproto_(task_func_t) vpx_new_session(struct worker *wrk, void *arg) { struct req *req; struct sess *sp; enum htc_status_e hs; char *p; int i; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CAST_OBJ_NOTNULL(req, arg, REQ_MAGIC); sp = req->sp; CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); /* Per specification */ assert(sizeof vpx1_sig == 5); assert(sizeof vpx2_sig == 12); HTC_RxInit(req->htc, req->ws); hs = HTC_RxStuff(req->htc, vpx_complete, NULL, NULL, NAN, sp->t_idle + cache_param->timeout_idle, NAN, VPX_MAX_LEN); if (hs != HTC_S_COMPLETE) { Req_Release(req); SES_DeleteHS(sp, hs, NAN); return; } p = req->htc->rxbuf_b; if (p[0] == vpx1_sig[0]) i = vpx_proto1(wrk, req); else if (p[0] == vpx2_sig[0]) i = vpx_proto2(wrk, req); else WRONG("proxy sig mismatch"); if (i) { Req_Release(req); SES_Delete(sp, SC_RX_JUNK, NAN); return; } SES_SetTransport(wrk, sp, req, &HTTP1_transport); } struct transport PROXY_transport = { .name = "PROXY", .proto_ident = "PROXY", .magic = TRANSPORT_MAGIC, .new_session = vpx_new_session, }; static void vpx_enc_addr(struct vsb *vsb, int proto, const struct suckaddr *s) { const struct sockaddr_in *sin4; const struct sockaddr_in6 *sin6; socklen_t sl; if (proto == PF_INET6) { sin6 = VSA_Get_Sockaddr(s, &sl); //lint !e826 AN(sin6); assert(sl >= sizeof *sin6); VSB_bcat(vsb, &sin6->sin6_addr, sizeof(sin6->sin6_addr)); } else { sin4 = VSA_Get_Sockaddr(s, &sl); //lint !e826 AN(sin4); assert(sl >= sizeof *sin4); VSB_bcat(vsb, &sin4->sin_addr, sizeof(sin4->sin_addr)); } } static void vpx_enc_port(struct vsb *vsb, const struct suckaddr *s) { uint8_t b[2]; vbe16enc(b, (uint16_t)VSA_Port(s)); VSB_bcat(vsb, b, sizeof(b)); } static void vpx_enc_authority(struct vsb *vsb, const char *authority, size_t l_authority) { uint16_t l; AN(vsb); if (l_authority == 0) return; AN(authority); AN(*authority); VSB_putc(vsb, PP2_TYPE_AUTHORITY); vbe16enc(&l, l_authority); VSB_bcat(vsb, &l, sizeof(l)); VSB_cat(vsb, authority); } /* short path for stringified addresses from session attributes */ static void vpx_format_proxy_v1(struct vsb *vsb, int proto, const char *cip, const char *cport, const char *sip, const char *sport) { AN(vsb); AN(cip); AN(cport); AN(sip); AN(sport); VSB_bcat(vsb, vpx1_sig, sizeof(vpx1_sig)); if (proto == PF_INET6) VSB_cat(vsb, " TCP6 "); else if (proto == PF_INET) VSB_cat(vsb, " TCP4 "); else WRONG("Wrong proxy 
v1 proto"); VSB_printf(vsb, "%s %s %s %s\r\n", cip, sip, cport, sport); AZ(VSB_finish(vsb)); } static void vpx_format_proxy_v2(struct vsb *vsb, int proto, const struct suckaddr *sac, const struct suckaddr *sas, const char *authority) { size_t l_authority = 0; uint16_t l_tlv = 0, l; AN(vsb); AN(sac); AN(sas); if (authority != NULL && *authority != '\0') { l_authority = strlen(authority); /* 3 bytes in the TLV before the authority string */ assert(3 + l_authority <= UINT16_MAX); l_tlv = 3 + l_authority; } VSB_bcat(vsb, vpx2_sig, sizeof(vpx2_sig)); VSB_putc(vsb, 0x21); if (proto == PF_INET6) { VSB_putc(vsb, 0x21); vbe16enc(&l, 0x24 + l_tlv); VSB_bcat(vsb, &l, sizeof(l)); } else if (proto == PF_INET) { VSB_putc(vsb, 0x11); vbe16enc(&l, 0x0c + l_tlv); VSB_bcat(vsb, &l, sizeof(l)); } else { WRONG("Wrong proxy v2 proto"); } vpx_enc_addr(vsb, proto, sac); vpx_enc_addr(vsb, proto, sas); vpx_enc_port(vsb, sac); vpx_enc_port(vsb, sas); vpx_enc_authority(vsb, authority, l_authority); AZ(VSB_finish(vsb)); } void VPX_Format_Proxy(struct vsb *vsb, int version, const struct suckaddr *sac, const struct suckaddr *sas, const char *authority) { int proto; char hac[VTCP_ADDRBUFSIZE]; char pac[VTCP_PORTBUFSIZE]; char has[VTCP_ADDRBUFSIZE]; char pas[VTCP_PORTBUFSIZE]; AN(vsb); AN(sac); AN(sas); assert(version == 1 || version == 2); proto = VSA_Get_Proto(sas); assert(proto == VSA_Get_Proto(sac)); if (version == 1) { VTCP_name(sac, hac, sizeof hac, pac, sizeof pac); VTCP_name(sas, has, sizeof has, pas, sizeof pas); vpx_format_proxy_v1(vsb, proto, hac, pac, has, pas); } else if (version == 2) { vpx_format_proxy_v2(vsb, proto, sac, sas, authority); } else WRONG("Wrong proxy version"); } #define PXY_BUFSZ \ (sizeof(vpx1_sig) + 4 /* TCPx */ + \ 2 * VTCP_ADDRBUFSIZE + 2 * VTCP_PORTBUFSIZE + \ 6 /* spaces, CRLF */ + 16 /* safety */ ) int VPX_Send_Proxy(int fd, int version, const struct sess *sp) { struct vsb vsb[1], *vsb2; struct suckaddr *sac, *sas; char ha[VTCP_ADDRBUFSIZE]; char pa[VTCP_PORTBUFSIZE]; char buf[PXY_BUFSZ]; int proto, r; CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); assert(version == 1 || version == 2); AN(VSB_init(vsb, buf, sizeof buf)); AZ(SES_Get_server_addr(sp, &sas)); AN(sas); proto = VSA_Get_Proto(sas); if (version == 1) { VTCP_name(sas, ha, sizeof ha, pa, sizeof pa); vpx_format_proxy_v1(vsb, proto, SES_Get_String_Attr(sp, SA_CLIENT_IP), SES_Get_String_Attr(sp, SA_CLIENT_PORT), ha, pa); } else if (version == 2) { AZ(SES_Get_client_addr(sp, &sac)); AN(sac); vpx_format_proxy_v2(vsb, proto, sac, sas, NULL); } else WRONG("Wrong proxy version"); r = write(fd, VSB_data(vsb), VSB_len(vsb)); VTCP_Assert(r); if (!DO_DEBUG(DBG_PROTOCOL)) return (r); vsb2 = VSB_new_auto(); AN(vsb2); VSB_quote(vsb2, VSB_data(vsb), VSB_len(vsb), version == 2 ? VSB_QUOTE_HEX : 0); AZ(VSB_finish(vsb2)); VSL(SLT_Debug, NO_VXID, "PROXY_HDR %s", VSB_data(vsb2)); VSB_destroy(&vsb2); VSB_fini(vsb); return (r); } #undef PXY_BUFSZ varnish-7.5.0/bin/varnishd/storage/000077500000000000000000000000001457605730600172445ustar00rootroot00000000000000varnish-7.5.0/bin/varnishd/storage/mgt_stevedore.c000066400000000000000000000176151457605730600222710ustar00rootroot00000000000000/*- * Copyright (c) 2007-2011 Varnish Software AS * All rights reserved. * * Author: Dag-Erling Smørgav * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * STEVEDORE: one who works at or is responsible for loading and * unloading ships in port. Example: "on the wharves, stevedores were * unloading cargo from the far corners of the world." Origin: Spanish * estibador, from estibar to pack. First Known Use: 1788 */ #include "config.h" #include #include #include #include #include "mgt/mgt.h" #include "common/heritage.h" #include "vcli_serve.h" #include "storage/storage.h" VTAILQ_HEAD(stevedore_head, stevedore); static struct stevedore_head proto_stevedores = VTAILQ_HEAD_INITIALIZER(proto_stevedores); static struct stevedore_head pre_stevedores = VTAILQ_HEAD_INITIALIZER(pre_stevedores); static struct stevedore_head stevedores = VTAILQ_HEAD_INITIALIZER(stevedores); /* Name of transient storage */ #define TRANSIENT_STORAGE "Transient" struct stevedore *stv_transient; const char *mgt_stv_h2_rxbuf; /*--------------------------------------------------------------------*/ int STV__iter(struct stevedore ** const pp) { AN(pp); CHECK_OBJ_ORNULL(*pp, STEVEDORE_MAGIC); if (*pp != NULL) *pp = VTAILQ_NEXT(*pp, list); else if (!VTAILQ_EMPTY(&stevedores)) *pp = VTAILQ_FIRST(&stevedores); else *pp = VTAILQ_FIRST(&pre_stevedores); return (*pp != NULL); } /*--------------------------------------------------------------------*/ static void v_matchproto_(cli_func_t) stv_cli_list(struct cli *cli, const char * const *av, void *priv) { struct stevedore *stv; ASSERT_MGT(); (void)av; (void)priv; VCLI_Out(cli, "Storage devices:\n"); STV_Foreach(stv) VCLI_Out(cli, "\tstorage.%s = %s\n", stv->ident, stv->name); } static void v_matchproto_(cli_func_t) stv_cli_list_json(struct cli *cli, const char * const *av, void *priv) { struct stevedore *stv; int n = 0; (void)priv; ASSERT_MGT(); VCLI_JSON_begin(cli, 2, av); VCLI_Out(cli, ",\n"); STV_Foreach(stv) { VCLI_Out(cli, "%s", n ? 
",\n" : ""); n++; VCLI_Out(cli, "{\n"); VSB_indent(cli->sb, 2); VCLI_Out(cli, "\"name\": "); VCLI_JSON_str(cli, stv->ident); VCLI_Out(cli, ",\n"); VCLI_Out(cli, "\"storage\": "); VCLI_JSON_str(cli, stv->name); VSB_indent(cli->sb, -2); VCLI_Out(cli, "\n}"); } VCLI_JSON_end(cli); } /*--------------------------------------------------------------------*/ static struct cli_proto cli_stv[] = { { CLICMD_STORAGE_LIST, "", stv_cli_list, stv_cli_list_json }, { NULL} }; /*-------------------------------------------------------------------- */ #ifdef WITH_PERSISTENT_STORAGE static void v_noreturn_ v_matchproto_(storage_init_f) smp_fake_init(struct stevedore *parent, int ac, char * const *av) { (void)parent; (void)ac; (void)av; ARGV_ERR( "-spersistent has been deprecated, please see:\n" " https://www.varnish-cache.org/docs/trunk/phk/persistent.html\n" "for details.\n" ); } static const struct stevedore smp_fake_stevedore = { .magic = STEVEDORE_MAGIC, .name = "persistent", .init = smp_fake_init, }; #endif /*-------------------------------------------------------------------- * Register a stevedore implementation by name. * VEXTs get to do this first, and since the list is searched front to * back a VEXT stevedore which inadvisably wants to steal "default" or * the name of another stevedore implementation can do so. */ void STV_Register(const struct stevedore *cstv, const char *altname) { struct stevedore *stv; CHECK_OBJ_NOTNULL(cstv, STEVEDORE_MAGIC); ALLOC_OBJ(stv, STEVEDORE_MAGIC); AN(stv); *stv = *cstv; if (altname != NULL) stv->ident = altname; else stv->ident = stv->name; VTAILQ_INSERT_TAIL(&proto_stevedores, stv, list); } static void STV_Register_The_Usual_Suspects(void) { STV_Register(&smf_stevedore, NULL); STV_Register(&sma_stevedore, NULL); STV_Register(&smd_stevedore, NULL); #ifdef WITH_PERSISTENT_STORAGE STV_Register(&smp_stevedore, NULL); STV_Register(&smp_fake_stevedore, NULL); #endif #if defined(HAVE_UMEM_H) STV_Register(&smu_stevedore, NULL); STV_Register(&smu_stevedore, "default"); #else STV_Register(&sma_stevedore, "default"); #endif } /*-------------------------------------------------------------------- * Parse a stevedore argument on the form: * [ name '=' ] strategy [ ',' arg ] * */ void STV_Config(const char *spec) { char **av, buf[8]; const char *ident; struct stevedore *stv; static unsigned seq = 0; av = MGT_NamedArg(spec, &ident, "-s"); AN(av); if (av[1] == NULL) ARGV_ERR("-s argument lacks strategy {malloc, file, ...}\n"); /* Append strategy to ident string */ VSB_printf(vident, ",-s%s", av[1]); if (ident == NULL) { bprintf(buf, "s%u", seq++); ident = strdup(buf); } VTAILQ_FOREACH(stv, &pre_stevedores, list) if (!strcmp(stv->ident, ident)) ARGV_ERR("(-s %s) '%s' is already defined\n", spec, ident); ALLOC_OBJ(stv, STEVEDORE_MAGIC); AN(stv); stv->av = av; stv->ident = ident; stv->name = av[1]; VTAILQ_INSERT_TAIL(&pre_stevedores, stv, list); } /*--------------------------------------------------------------------*/ void STV_Config_Final(void) { struct stevedore *stv; ASSERT_MGT(); VCLS_AddFunc(mgt_cls, MCF_AUTH, cli_stv); STV_Foreach(stv) if (!strcmp(stv->ident, TRANSIENT_STORAGE)) return; STV_Config(TRANSIENT_STORAGE "=default"); } /*-------------------------------------------------------------------- * Initialize configured stevedores in the worker process */ void STV_Init(void) { char **av; const char *ident; struct stevedore *stv; const struct stevedore *stv2; int ac; STV_Register_The_Usual_Suspects(); while (!VTAILQ_EMPTY(&pre_stevedores)) { stv = 
VTAILQ_FIRST(&pre_stevedores); VTAILQ_REMOVE(&pre_stevedores, stv, list); CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); AN(stv->av); av = stv->av; AN(stv->ident); ident = stv->ident; for (ac = 0; av[ac + 2] != NULL; ac++) continue; VTAILQ_FOREACH(stv2, &proto_stevedores, list) if (!strcmp(stv2->ident, av[1])) break; if (stv2 == NULL) ARGV_ERR("Unknown stevedore method \"%s\"\n", av[1]); CHECK_OBJ_NOTNULL(stv2, STEVEDORE_MAGIC); *stv = *stv2; AN(stv->name); av += 2; stv->ident = ident; stv->av = av; if (stv->init != NULL) stv->init(stv, ac, av); else if (ac != 0) ARGV_ERR("(-s %s) too many arguments\n", stv->name); AN(stv->allocobj); AN(stv->methods); if (!strcmp(stv->ident, TRANSIENT_STORAGE)) { AZ(stv_transient); stv_transient = stv; } else VTAILQ_INSERT_TAIL(&stevedores, stv, list); /* NB: Do not free av, stevedore gets to keep it */ } AN(stv_transient); VTAILQ_INSERT_TAIL(&stevedores, stv_transient, list); } varnish-7.5.0/bin/varnishd/storage/mgt_storage_persistent.c000066400000000000000000000146071457605730600242130ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Persistent storage method * * XXX: Before we start the client or maybe after it stops, we should give the * XXX: stevedores a chance to examine their storage for consistency. * * XXX: Do we ever free the LRU-lists ? */ #include "config.h" #include "cache/cache_varnishd.h" #include "common/heritage.h" #include #include #include #include "storage/storage.h" #include "storage/storage_simple.h" #include "vsha256.h" #include "storage/storage_persistent.h" #ifdef HAVE_SYS_PERSONALITY_H #include #endif #ifndef MAP_NOCORE #ifdef MAP_CONCEAL #define MAP_NOCORE MAP_CONCEAL /* XXX OpenBSD */ #else #define MAP_NOCORE 0 /* XXX Linux */ #endif #endif #ifndef MAP_NOSYNC #define MAP_NOSYNC 0 /* XXX Linux */ #endif /*-------------------------------------------------------------------- * Calculate cleaner metrics from silo dimensions */ static void smp_metrics(struct smp_sc *sc) { /* * We do not want to loose too big chunks of the silos * content when we are forced to clean a segment. * * For now insist that a segment covers no more than 1% of the silo. 
* * XXX: This should possibly depend on the size of the silo so * XXX: trivially small silos do not run into trouble along * XXX: the lines of "one object per segment". */ sc->min_nseg = 10; sc->max_segl = smp_stuff_len(sc, SMP_SPC_STUFF) / sc->min_nseg; fprintf(stderr, "min_nseg = %u, max_segl = %ju\n", sc->min_nseg, (uintmax_t)sc->max_segl); /* * The number of segments are limited by the size of the segment * table(s) and from that follows the minimum size of a segmement. */ sc->max_nseg = smp_stuff_len(sc, SMP_SEG1_STUFF) / sc->min_nseg; sc->min_segl = smp_stuff_len(sc, SMP_SPC_STUFF) / sc->max_nseg; while (sc->min_segl < sizeof(struct object)) { sc->max_nseg /= 2; sc->min_segl = smp_stuff_len(sc, SMP_SPC_STUFF) / sc->max_nseg; } fprintf(stderr, "max_nseg = %u, min_segl = %ju\n", sc->max_nseg, (uintmax_t)sc->min_segl); /* * Set our initial aim point at the exponential average of the * two extremes. * * XXX: This is a pretty arbitrary choice, but having no idea * XXX: object count, size distribution or ttl pattern at this * XXX: point, we have to do something. */ sc->aim_nseg = (unsigned) exp((log(sc->min_nseg) + log(sc->max_nseg))*.5); sc->aim_segl = smp_stuff_len(sc, SMP_SPC_STUFF) / sc->aim_nseg; fprintf(stderr, "aim_nseg = %u, aim_segl = %ju\n", sc->aim_nseg, (uintmax_t)sc->aim_segl); /* * How much space in the free reserve pool ? */ sc->free_reserve = sc->aim_segl * 10; fprintf(stderr, "free_reserve = %ju\n", (uintmax_t)sc->free_reserve); } /*-------------------------------------------------------------------- * Set up persistent storage silo in the master process. */ void v_matchproto_(storage_init_f) smp_mgt_init(struct stevedore *parent, int ac, char * const *av) { struct smp_sc *sc; void *target; int i, mmap_flags; AZ(av[ac]); #ifdef HAVE_SYS_PERSONALITY_H i = personality(0xffffffff); /* Fetch old personality. */ if (!(i & ADDR_NO_RANDOMIZE)) { i = personality(i | ADDR_NO_RANDOMIZE); if (i < 0) fprintf(stderr, "WARNING: Could not disable ASLR\n"); else fprintf(stderr, "NB: Disabled ASLR for Persistent\n"); } #endif /* Necessary alignment. 
See also smp_object::__filler__ */ assert(sizeof(struct smp_object) % 8 == 0); #define SIZOF(foo) fprintf(stderr, \ "sizeof(%s) = %zu = 0x%zx\n", #foo, sizeof(foo), sizeof(foo)); SIZOF(struct smp_ident); SIZOF(struct smp_sign); SIZOF(struct smp_segptr); SIZOF(struct smp_object); #undef SIZOF /* See comments in storage_persistent.h */ assert(sizeof(struct smp_ident) == SMP_IDENT_SIZE); /* Allocate softc */ ALLOC_OBJ(sc, SMP_SC_MAGIC); XXXAN(sc); sc->parent = parent; sc->fd = -1; VTAILQ_INIT(&sc->segments); /* Argument processing */ if (ac != 2) ARGV_ERR("(-spersistent) wrong number of arguments\n"); i = STV_GetFile(av[0], &sc->fd, &sc->filename, "-spersistent"); if (i == 2) ARGV_ERR("(-spersistent) need filename (not directory)\n"); sc->align = sizeof(void*) * 2; sc->granularity = getpagesize(); sc->mediasize = STV_FileSize(sc->fd, av[1], &sc->granularity, "-spersistent"); AZ(ftruncate(sc->fd, sc->mediasize)); /* Try to determine correct mmap address */ target = NULL; mmap_flags = MAP_NOCORE | MAP_NOSYNC | MAP_SHARED; #ifdef MAP_ALIGNED_SUPER mmap_flags |= MAP_ALIGNED_SUPER; #endif sc->base = (void*)mmap(target, sc->mediasize, PROT_READ|PROT_WRITE, mmap_flags, sc->fd, 0); if (sc->base == MAP_FAILED) ARGV_ERR("(-spersistent) failed to mmap (%s) @%p\n", VAS_errtxt(errno), target); smp_def_sign(sc, &sc->idn, 0, "SILO"); sc->ident = SIGN_DATA(&sc->idn); i = smp_valid_silo(sc); if (i) { printf("Warning SILO (%s) not reloaded (reason=%d)\n", sc->filename, i); smp_newsilo(sc); } AZ(smp_valid_silo(sc)); smp_metrics(sc); parent->priv = sc; /* XXX: only for sendfile I guess... */ MCH_Fd_Inherit(sc->fd, "storage_persistent"); } varnish-7.5.0/bin/varnishd/storage/stevedore.c000066400000000000000000000166651457605730600214260ustar00rootroot00000000000000/*- * Copyright (c) 2007-2015 Varnish Software AS * All rights reserved. * * Author: Dag-Erling Smørgav * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * STEVEDORE: one who works at or is responsible for loading and * unloading ships in port. Example: "on the wharves, stevedores were * unloading cargo from the far corners of the world." Origin: Spanish * estibador, from estibar to pack. 
First Known Use: 1788 */ #include "config.h" #include "cache/cache_varnishd.h" #include #include #include "storage/storage.h" #include "vrt_obj.h" extern const char *mgt_stv_h2_rxbuf; struct stevedore *stv_h2_rxbuf = NULL; static pthread_mutex_t stv_mtx; /*-------------------------------------------------------------------- * XXX: trust pointer writes to be atomic */ const struct stevedore * STV_next(void) { static struct stevedore *stv; struct stevedore *r; PTOK(pthread_mutex_lock(&stv_mtx)); if (!STV__iter(&stv)) AN(STV__iter(&stv)); if (stv == stv_transient) { stv = NULL; AN(STV__iter(&stv)); } r = stv; PTOK(pthread_mutex_unlock(&stv_mtx)); AN(r); return (r); } /*------------------------------------------------------------------- * Allocate storage for an object, based on the header information. * XXX: If we know (a hint of) the length, we could allocate space * XXX: for the body in the same allocation while we are at it. */ int STV_NewObject(struct worker *wrk, struct objcore *oc, const struct stevedore *stv, unsigned wsl) { CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); wrk->strangelove = cache_param->nuke_limit; AN(stv->allocobj); if (stv->allocobj(wrk, stv, oc, wsl) == 0) { VSLb(wrk->vsl, SLT_Error, "Failed to create object object from %s %s", stv->name, stv->ident); return (0); } oc->oa_present = 0; wrk->stats->n_object++; VSLb(wrk->vsl, SLT_Storage, "%s %s", oc->stobj->stevedore->name, oc->stobj->stevedore->ident); return (1); } /*-------------------------------------------------------------------*/ struct stv_buffer { unsigned magic; #define STV_BUFFER_MAGIC 0xf39cb6c2 const struct stevedore *stv; size_t size; uintptr_t priv; }; struct stv_buffer * STV_AllocBuf(struct worker *wrk, const struct stevedore *stv, size_t size) { struct stv_buffer *stvbuf; uint8_t *buf; uintptr_t priv = 0; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); if (size == 0) return (NULL); if (stv->allocbuf == NULL) return (NULL); wrk->strangelove = cache_param->nuke_limit; buf = stv->allocbuf(wrk, stv, size + PRNDUP(sizeof *stvbuf), &priv); if (buf == NULL) return (NULL); assert(PAOK(buf)); stvbuf = (void *)buf; INIT_OBJ(stvbuf, STV_BUFFER_MAGIC); stvbuf->stv = stv; stvbuf->priv = priv; stvbuf->size = size; return (stvbuf); } void STV_FreeBuf(struct worker *wrk, struct stv_buffer **pstvbuf) { struct stv_buffer *stvbuf; const struct stevedore *stv; uintptr_t priv; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); TAKE_OBJ_NOTNULL(stvbuf, pstvbuf, STV_BUFFER_MAGIC); CHECK_OBJ_NOTNULL(stvbuf->stv, STEVEDORE_MAGIC); stv = stvbuf->stv; priv = stvbuf->priv; CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); ZERO_OBJ(stvbuf, sizeof *stvbuf); AN(stv->freebuf); stv->freebuf(wrk, stv, priv); } void * STV_GetBufPtr(struct stv_buffer *stvbuf, size_t *psize) { CHECK_OBJ_NOTNULL(stvbuf, STV_BUFFER_MAGIC); if (psize) *psize = stvbuf->size; return (&stvbuf[1]); } /*-------------------------------------------------------------------*/ void STV_open(void) { struct stevedore *stv; char buf[1024]; ASSERT_CLI(); PTOK(pthread_mutex_init(&stv_mtx, &mtxattr_errorcheck)); /* This string was prepared for us before the fork, and should * point to a configured stevedore. 
*/ AN(mgt_stv_h2_rxbuf); stv_h2_rxbuf = NULL; STV_Foreach(stv) { CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); bprintf(buf, "storage.%s", stv->ident); stv->vclname = strdup(buf); AN(stv->vclname); if (stv->open != NULL) stv->open(stv); if (!strcmp(stv->ident, mgt_stv_h2_rxbuf)) stv_h2_rxbuf = stv; } AN(stv_h2_rxbuf); } void STV_close(void) { struct stevedore *stv; int i; ASSERT_CLI(); for (i = 1; i >= 0; i--) { /* First send close warning */ STV_Foreach(stv) if (stv->close != NULL) stv->close(stv, i); } } /*------------------------------------------------------------------- * Notify the stevedores of BAN related events. A non-zero return * value indicates that the stevedore is unable to persist the * event. */ int STV_BanInfoDrop(const uint8_t *ban, unsigned len) { struct stevedore *stv; int r = 0; STV_Foreach(stv) if (stv->baninfo != NULL) r |= stv->baninfo(stv, BI_DROP, ban, len); return (r); } int STV_BanInfoNew(const uint8_t *ban, unsigned len) { struct stevedore *stv; int r = 0; STV_Foreach(stv) if (stv->baninfo != NULL) r |= stv->baninfo(stv, BI_NEW, ban, len); return (r); } /*------------------------------------------------------------------- * Export a complete ban list to the stevedores for persistence. * The stevedores should clear any previous ban lists and replace * them with this list. */ void STV_BanExport(const uint8_t *bans, unsigned len) { struct stevedore *stv; STV_Foreach(stv) if (stv->banexport != NULL) stv->banexport(stv, bans, len); } /*-------------------------------------------------------------------- * VRT functions for stevedores */ static const struct stevedore * stv_find(const char *nm) { struct stevedore *stv; STV_Foreach(stv) if (!strcmp(stv->ident, nm)) return (stv); return (NULL); } int VRT_Stv(const char *nm) { if (stv_find(nm) != NULL) return (1); return (0); } const char * v_matchproto_() VRT_STEVEDORE_string(VCL_STEVEDORE s) { if (s == NULL) return (NULL); CHECK_OBJ_NOTNULL(s, STEVEDORE_MAGIC); return (s->vclname); } VCL_STEVEDORE VRT_stevedore(const char *nm) { return (stv_find(nm)); } #define VRTSTVVAR(nm, vtype, ctype, dval) \ ctype \ VRT_stevedore_##nm(VCL_STEVEDORE stv) \ { \ if (stv == NULL) \ return (0); \ if (stv->var_##nm == NULL) \ return (dval); \ CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); \ return (stv->var_##nm(stv)); \ } #include "tbl/vrt_stv_var.h" varnish-7.5.0/bin/varnishd/storage/stevedore_utils.c000066400000000000000000000127741457605730600226430ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Utility functions for stevedores and storage modules */ #include "config.h" #include #include #include #include #include #include #include "mgt/mgt.h" #include "common/heritage.h" #include "storage/storage.h" #include "vnum.h" #include "vfil.h" #ifndef O_LARGEFILE #define O_LARGEFILE 0 #endif /*-------------------------------------------------------------------- * Get a storage file. * * The fn argument can be an existing file, an existing directory or * a nonexistent filename in an existing directory. * * If a directory is specified, the file will be anonymous (unlinked) * * Return: * 0 if the file was preexisting. * 1 if the file was created. * 2 if the file is anonymous. * * Uses ARGV_ERR to exit in case of trouble. */ int STV_GetFile(const char *fn, int *fdp, const char **fnp, const char *ctx) { int fd; struct stat st; int retval = 1; char buf[FILENAME_MAX]; AN(fn); AN(fnp); AN(fdp); *fnp = NULL; *fdp = -1; /* try to create a new file of this name */ VJ_master(JAIL_MASTER_STORAGE); fd = open(fn, O_RDWR | O_CREAT | O_EXCL | O_LARGEFILE, 0600); if (fd >= 0) { VJ_fix_fd(fd, JAIL_FIXFD_FILE); *fdp = fd; *fnp = fn; VJ_master(JAIL_MASTER_LOW); return (retval); } if (stat(fn, &st)) ARGV_ERR( "(%s) \"%s\" does not exist and could not be created\n", ctx, fn); if (S_ISDIR(st.st_mode)) { bprintf(buf, "%s/varnish.XXXXXX", fn); fd = mkstemp(buf); if (fd < 0) ARGV_ERR("(%s) \"%s\" mkstemp(%s) failed (%s)\n", ctx, fn, buf, VAS_errtxt(errno)); AZ(unlink(buf)); *fnp = strdup(buf); AN(*fnp); retval = 2; } else if (S_ISREG(st.st_mode)) { fd = open(fn, O_RDWR | O_LARGEFILE); if (fd < 0) ARGV_ERR("(%s) \"%s\" could not open (%s)\n", ctx, fn, VAS_errtxt(errno)); *fnp = fn; retval = 0; } else ARGV_ERR( "(%s) \"%s\" is neither file nor directory\n", ctx, fn); AZ(fstat(fd, &st)); if (!S_ISREG(st.st_mode)) ARGV_ERR("(%s) \"%s\" was not a file after opening\n", ctx, fn); *fdp = fd; VJ_fix_fd(fd, JAIL_FIXFD_FILE); VJ_master(JAIL_MASTER_LOW); return (retval); } /*-------------------------------------------------------------------- * Decide file size. * * If the size specification is empty and the file exists with non-zero * size, use that, otherwise, interpret the specification. * * Handle off_t sizes and pointer width limitations. */ uintmax_t STV_FileSize(int fd, const char *size, unsigned *granularity, const char *ctx) { uintmax_t l, fssize; unsigned bs; const char *q; int i; off_t o; struct stat st; AN(granularity); AN(ctx); AZ(fstat(fd, &st)); xxxassert(S_ISREG(st.st_mode)); AZ(VFIL_fsinfo(fd, &bs, &fssize, NULL)); /* Increase granularity if it is lower than the filesystem block size */ *granularity = vmax_t(unsigned, *granularity, bs); if ((size == NULL || *size == '\0') && st.st_size != 0) { /* * We have no size specification, but an existing file, * use its existing size. 
*/ l = st.st_size; } else if (size == NULL || *size == '\0') { ARGV_ERR("(%s) no size specified\n", ctx); } else { AN(size); q = VNUM_2bytes(size, &l, 0); if (q != NULL) ARGV_ERR("(%s) size \"%s\": %s\n", ctx, size, q); if (l < 1024*1024) ARGV_ERR("(%s) size \"%s\": too small, " "did you forget to specify M or G?\n", ctx, size); if (l > fssize) ARGV_ERR("(%s) size \"%s\": larger than file system\n", ctx, size); } /* * This trickery wouldn't be necessary if X/Open would * just add OFF_MAX to ... */ i = 0; while (1) { o = l; if (o == l && o > 0) break; l >>= 1; i++; } if (i) fprintf(stderr, "WARNING: (%s) file size reduced" " to %ju due to system \"off_t\" limitations\n", ctx, l); if (sizeof(void *) == 4 && l > INT32_MAX) { /*lint !e506 !e774 !e845 */ fprintf(stderr, "NB: Storage size limited to 2GB on 32 bit architecture,\n" "NB: otherwise we could run out of address space.\n" ); l = INT32_MAX; } /* Round down */ l -= (l % *granularity); return (l); } varnish-7.5.0/bin/varnishd/storage/storage.h000066400000000000000000000121341457605730600210620ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * This defines the backend interface between the stevedore and the * pluggable storage implementations. 
* */ struct stevedore; struct sess; struct objcore; struct worker; struct lru; struct vsl_log; struct vfp_ctx; struct obj_methods; enum baninfo { BI_NEW, BI_DROP }; /* Prototypes --------------------------------------------------------*/ typedef void storage_init_f(struct stevedore *, int ac, char * const *av); typedef void storage_open_f(struct stevedore *); typedef int storage_allocobj_f(struct worker *, const struct stevedore *, struct objcore *, unsigned); typedef void storage_close_f(const struct stevedore *, int pass); typedef int storage_baninfo_f(const struct stevedore *, enum baninfo event, const uint8_t *ban, unsigned len); typedef void storage_banexport_f(const struct stevedore *, const uint8_t *bans, unsigned len); typedef void storage_panic_f(struct vsb *vsb, const struct objcore *oc); typedef void *storage_allocbuf_f(struct worker *, const struct stevedore *, size_t size, uintptr_t *ppriv); typedef void storage_freebuf_f(struct worker *, const struct stevedore *, uintptr_t priv); struct storage; typedef struct object *sml_getobj_f(struct worker *, struct objcore *); typedef struct storage *sml_alloc_f(const struct stevedore *, size_t size); typedef void sml_free_f(struct storage *); /* Prototypes for VCL variable responders */ #define VRTSTVVAR(nm,vt,ct,def) \ typedef ct stv_var_##nm(const struct stevedore *); #include "tbl/vrt_stv_var.h" /*--------------------------------------------------------------------*/ struct stevedore { unsigned magic; #define STEVEDORE_MAGIC 0x4baf43db const char *name; /* Called in MGT process */ storage_init_f *init; /* Called in cache process * only allocobj is required, other callbacks are optional */ storage_open_f *open; storage_close_f *close; storage_allocobj_f *allocobj; storage_baninfo_f *baninfo; storage_banexport_f *banexport; storage_panic_f *panic; storage_allocbuf_f *allocbuf; storage_freebuf_f *freebuf; /* Only if SML is used */ sml_alloc_f *sml_alloc; sml_free_f *sml_free; sml_getobj_f *sml_getobj; const struct obj_methods *methods; /* Only if LRU is used */ struct lru *lru; #define VRTSTVVAR(nm, vtype, ctype, dval) stv_var_##nm *var_##nm; #include "tbl/vrt_stv_var.h" /* private fields for the stevedore */ void *priv; VTAILQ_ENTRY(stevedore) list; char **av; const char *ident; const char *vclname; }; extern struct stevedore *stv_transient; extern struct stevedore *stv_h2_rxbuf; /*--------------------------------------------------------------------*/ void STV_Register(const struct stevedore *, const char *altname); #define STV_Foreach(arg) for (arg = NULL; STV__iter(&arg);) int STV__iter(struct stevedore ** const ); /*--------------------------------------------------------------------*/ int STV_GetFile(const char *fn, int *fdp, const char **fnp, const char *ctx); uintmax_t STV_FileSize(int fd, const char *size, unsigned *granularity, const char *ctx); /*--------------------------------------------------------------------*/ struct lru *LRU_Alloc(void); void LRU_Free(struct lru **); void LRU_Add(struct objcore *, vtim_real now); void LRU_Remove(struct objcore *); int LRU_NukeOne(struct worker *, struct lru *); void LRU_Touch(struct worker *, struct objcore *, vtim_real now); /*--------------------------------------------------------------------*/ extern const struct stevedore smu_stevedore; extern const struct stevedore sma_stevedore; extern const struct stevedore smd_stevedore; extern const struct stevedore smf_stevedore; extern const struct stevedore smp_stevedore; 
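/*
 * Illustrative sketch, not part of the Varnish sources: a minimal
 * out-of-tree stevedore wired against the interface declared above.
 * The callback typedef, struct fields and STV_Register() prototype are
 * the ones from this header; the "smx_" names are invented for the
 * example, and borrowing SML_methods (declared in storage_simple.h, as
 * storage_debug.c does) is just one possible choice.  STV_Init() in
 * mgt_stevedore.c only insists on .allocobj and .methods being set, so
 * every other callback may stay NULL.
 */
#if 0	/* example only, never compiled */
#include "storage/storage_simple.h"

static int v_matchproto_(storage_allocobj_f)
smx_allocobj(struct worker *wrk, const struct stevedore *stv,
    struct objcore *oc, unsigned wsl)
{
	/* A real implementation reserves space for the object here and
	 * attaches it to oc; per STV_NewObject() in stevedore.c a zero
	 * return value is reported as an allocation failure. */
	(void)wrk; (void)stv; (void)oc; (void)wsl;
	return (0);
}

static const struct stevedore smx_stevedore = {
	.magic		= STEVEDORE_MAGIC,
	.name		= "example",
	.allocobj	= smx_allocobj,
	.methods	= &SML_methods,
};

/* Registration happens in the manager process, before the -s arguments
 * are parsed, e.g. next to STV_Register_The_Usual_Suspects(): */
static void
smx_register(void)
{
	STV_Register(&smx_stevedore, NULL);
}
#endif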
varnish-7.5.0/bin/varnishd/storage/storage_debug.c000066400000000000000000000074331457605730600222310ustar00rootroot00000000000000/*- * Copyright 2021,2023 UPLEX - Nils Goroll Systemoptimierung * All rights reserved. * * Author: Nils Goroll * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * debug helper storage based on malloc */ #include "config.h" #include "cache/cache_varnishd.h" #include "cache/cache_obj.h" #include #include #include "storage/storage.h" #include "storage/storage_simple.h" #include "vtim.h" #include "vnum.h" /* we cheat and make the open delay a static to avoid * having to wrap all callbacks to unpack the priv * pointer. Consequence: last dopen applies to all * debug stevedores */ static vtim_dur dopen = 0.0; /* returns one byte less than requested */ static int v_matchproto_(objgetspace_f) smd_lsp_getspace(struct worker *wrk, struct objcore *oc, ssize_t *sz, uint8_t **ptr) { AN(sz); if (*sz > 2) (*sz)--; return (SML_methods.objgetspace(wrk, oc, sz, ptr)); } #define dur_arg(a, s, d) \ (! strncmp((a), (s), strlen(s)) \ && (d = VNUM_duration(a + strlen(s))) != nan("")) static void smd_open(struct stevedore *stv) { sma_stevedore.open(stv); fprintf(stderr, "-sdebug open delay %fs\n", dopen); if (dopen > 0.0) VTIM_sleep(dopen); } static void v_matchproto_(storage_init_f) smd_init(struct stevedore *parent, int aac, char * const *aav) { struct obj_methods *methods; const char *ident; int i, ac = 0; size_t nac; vtim_dur d, dinit = 0.0; char **av; //lint -e429 char *a; ident = parent->ident; memcpy(parent, &sma_stevedore, sizeof *parent); parent->ident = ident; parent->name = smd_stevedore.name; methods = malloc(sizeof *methods); AN(methods); memcpy(methods, &SML_methods, sizeof *methods); parent->methods = methods; assert(aac >= 0); nac = aac; nac++; av = calloc(nac, sizeof *av); AN(av); for (i = 0; i < aac; i++) { a = aav[i]; if (a != NULL) { if (! 
strcmp(a, "lessspace")) { methods->objgetspace = smd_lsp_getspace; continue; } if (dur_arg(a, "dinit=", d)) { dinit = d; continue; } if (dur_arg(a, "dopen=", d)) { dopen = d; continue; } } av[ac] = a; ac++; } assert(ac >= 0); assert(ac < (int)nac); AZ(av[ac]); sma_stevedore.init(parent, ac, av); free(av); fprintf(stderr, "-sdebug init delay %fs\n", dinit); fprintf(stderr, "-sdebug open delay in init %fs\n", dopen); if (dinit > 0.0) { VTIM_sleep(dinit); } parent->open = smd_open; } const struct stevedore smd_stevedore = { .magic = STEVEDORE_MAGIC, .name = "debug", .init = smd_init, // other callbacks initialized in smd_init() }; varnish-7.5.0/bin/varnishd/storage/storage_file.c000066400000000000000000000274421457605730600220640ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Storage method based on mmap'ed file */ #include "config.h" #include "cache/cache_varnishd.h" #include "common/heritage.h" #include #include #include #include "storage/storage.h" #include "storage/storage_simple.h" #include "vnum.h" #include "vfil.h" #include "VSC_smf.h" #ifndef MAP_NOCORE #ifdef MAP_CONCEAL #define MAP_NOCORE MAP_CONCEAL /* XXX OpenBSD */ #else #define MAP_NOCORE 0 /* XXX Linux */ #endif #endif #ifndef MAP_NOSYNC #define MAP_NOSYNC 0 /* XXX Linux */ #endif #define MINPAGES 128 /* * Number of buckets on free-list. 
* * Last bucket is "larger than" so choose number so that the second * to last bucket matches the 128k CHUNKSIZE in cache_fetch.c when * using the a 4K minimal page size */ #define NBUCKET (128 / 4 + 1) static struct VSC_lck *lck_smf; /*--------------------------------------------------------------------*/ VTAILQ_HEAD(smfhead, smf); struct smf { unsigned magic; #define SMF_MAGIC 0x0927a8a0 struct storage s; struct smf_sc *sc; int alloc; off_t size; off_t offset; unsigned char *ptr; VTAILQ_ENTRY(smf) order; VTAILQ_ENTRY(smf) status; struct smfhead *flist; }; struct smf_sc { unsigned magic; #define SMF_SC_MAGIC 0x52962ee7 struct lock mtx; struct VSC_smf *stats; const char *filename; int fd; unsigned pagesize; uintmax_t filesize; int advice; struct smfhead order; struct smfhead free[NBUCKET]; struct smfhead used; }; /*--------------------------------------------------------------------*/ static void v_matchproto_(storage_init_f) smf_init(struct stevedore *parent, int ac, char * const *av) { const char *size, *fn, *r; struct smf_sc *sc; unsigned u; uintmax_t page_size; int advice = MADV_RANDOM; AZ(av[ac]); size = NULL; page_size = getpagesize(); if (ac > 4) ARGV_ERR("(-sfile) too many arguments\n"); if (ac < 1 || *av[0] == '\0') ARGV_ERR("(-sfile) path is mandatory\n"); fn = av[0]; if (ac > 1 && *av[1] != '\0') size = av[1]; if (ac > 2 && *av[2] != '\0') { r = VNUM_2bytes(av[2], &page_size, 0); if (r != NULL) ARGV_ERR("(-sfile) granularity \"%s\": %s\n", av[2], r); } if (ac > 3) { if (!strcmp(av[3], "normal")) advice = MADV_NORMAL; else if (!strcmp(av[3], "random")) advice = MADV_RANDOM; else if (!strcmp(av[3], "sequential")) advice = MADV_SEQUENTIAL; else ARGV_ERR("(-s file) invalid advice: \"%s\"", av[3]); } AN(fn); ALLOC_OBJ(sc, SMF_SC_MAGIC); XXXAN(sc); VTAILQ_INIT(&sc->order); for (u = 0; u < NBUCKET; u++) VTAILQ_INIT(&sc->free[u]); VTAILQ_INIT(&sc->used); sc->pagesize = page_size; sc->advice = advice; parent->priv = sc; (void)STV_GetFile(fn, &sc->fd, &sc->filename, "-sfile"); MCH_Fd_Inherit(sc->fd, "storage_file"); sc->filesize = STV_FileSize(sc->fd, size, &sc->pagesize, "-sfile"); if (VFIL_allocate(sc->fd, (off_t)sc->filesize, 0)) ARGV_ERR("(-sfile) allocation error: %s\n", VAS_errtxt(errno)); } /*-------------------------------------------------------------------- * Insert/Remove from correct freelist */ static void insfree(struct smf_sc *sc, struct smf *sp) { off_t b, ns; struct smf *sp2; AZ(sp->alloc); assert(sp->flist == NULL); Lck_AssertHeld(&sc->mtx); b = sp->size / sc->pagesize; if (b >= NBUCKET) { b = NBUCKET - 1; sc->stats->g_smf_large++; } else { sc->stats->g_smf_frag++; } sp->flist = &sc->free[b]; ns = b * sc->pagesize; VTAILQ_FOREACH(sp2, sp->flist, status) { assert(sp2->size >= ns); AZ(sp2->alloc); assert(sp2->flist == sp->flist); if (sp->offset < sp2->offset) break; } if (sp2 == NULL) VTAILQ_INSERT_TAIL(sp->flist, sp, status); else VTAILQ_INSERT_BEFORE(sp2, sp, status); } static void remfree(const struct smf_sc *sc, struct smf *sp) { size_t b; AZ(sp->alloc); assert(sp->flist != NULL); Lck_AssertHeld(&sc->mtx); b = sp->size / sc->pagesize; if (b >= NBUCKET) { b = NBUCKET - 1; sc->stats->g_smf_large--; } else { sc->stats->g_smf_frag--; } assert(sp->flist == &sc->free[b]); VTAILQ_REMOVE(sp->flist, sp, status); sp->flist = NULL; } /*-------------------------------------------------------------------- * Allocate a range from the first free range that is large enough. 
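 *
 * Buckets are scanned upwards from the one matching the request; an
 * oversized fragment is split from the front, and the remainder is
 * filed back on the appropriate free list.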
*/ static struct smf * alloc_smf(struct smf_sc *sc, off_t bytes) { struct smf *sp, *sp2; off_t b; AZ(bytes % sc->pagesize); b = bytes / sc->pagesize; if (b >= NBUCKET) b = NBUCKET - 1; sp = NULL; for (; b < NBUCKET - 1; b++) { sp = VTAILQ_FIRST(&sc->free[b]); if (sp != NULL) break; } if (sp == NULL) { VTAILQ_FOREACH(sp, &sc->free[NBUCKET -1], status) if (sp->size >= bytes) break; } if (sp == NULL) return (sp); assert(sp->size >= bytes); remfree(sc, sp); if (sp->size == bytes) { sp->alloc = 1; VTAILQ_INSERT_TAIL(&sc->used, sp, status); return (sp); } /* Split from front */ sp2 = malloc(sizeof *sp2); XXXAN(sp2); sc->stats->g_smf++; *sp2 = *sp; sp->offset += bytes; sp->ptr += bytes; sp->size -= bytes; sp2->size = bytes; sp2->alloc = 1; VTAILQ_INSERT_BEFORE(sp, sp2, order); VTAILQ_INSERT_TAIL(&sc->used, sp2, status); insfree(sc, sp); return (sp2); } /*-------------------------------------------------------------------- * Free a range. Attempt merge forward and backward, then sort into * free list according to age. */ static void free_smf(struct smf *sp) { struct smf *sp2; struct smf_sc *sc = sp->sc; CHECK_OBJ_NOTNULL(sp, SMF_MAGIC); AN(sp->alloc); assert(sp->size > 0); AZ(sp->size % sc->pagesize); VTAILQ_REMOVE(&sc->used, sp, status); sp->alloc = 0; sp2 = VTAILQ_NEXT(sp, order); if (sp2 != NULL && sp2->alloc == 0 && (sp2->ptr == sp->ptr + sp->size) && (sp2->offset == sp->offset + sp->size)) { sp->size += sp2->size; VTAILQ_REMOVE(&sc->order, sp2, order); remfree(sc, sp2); free(sp2); sc->stats->g_smf--; } sp2 = VTAILQ_PREV(sp, smfhead, order); if (sp2 != NULL && sp2->alloc == 0 && (sp->ptr == sp2->ptr + sp2->size) && (sp->offset == sp2->offset + sp2->size)) { remfree(sc, sp2); sp2->size += sp->size; VTAILQ_REMOVE(&sc->order, sp, order); free(sp); sc->stats->g_smf--; sp = sp2; } insfree(sc, sp); } /*-------------------------------------------------------------------- * Insert a newly created range as busy, then free it to do any collapses */ static void new_smf(struct smf_sc *sc, unsigned char *ptr, off_t off, size_t len) { struct smf *sp, *sp2; AZ(len % sc->pagesize); ALLOC_OBJ(sp, SMF_MAGIC); XXXAN(sp); sp->s.magic = STORAGE_MAGIC; sc->stats->g_smf++; sp->sc = sc; sp->size = len; sp->ptr = ptr; sp->offset = off; sp->alloc = 1; VTAILQ_FOREACH(sp2, &sc->order, order) { if (sp->ptr < sp2->ptr) { VTAILQ_INSERT_BEFORE(sp2, sp, order); break; } } if (sp2 == NULL) VTAILQ_INSERT_TAIL(&sc->order, sp, order); VTAILQ_INSERT_HEAD(&sc->used, sp, status); free_smf(sp); } /*--------------------------------------------------------------------*/ /* * XXX: This may be too aggressive and soak up too much address room. * XXX: On the other hand, the user, directly or implicitly asked us to * XXX: use this much storage, so we should make a decent effort. * XXX: worst case (I think), malloc will fail. 
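 *
 * smf_open_chunk() below tries to mmap the requested range in one
 * piece; when the kernel refuses, the range is split into two
 * page-aligned halves which are retried recursively.  The smallest
 * size that has failed so far is tracked in *fail, and once it drops
 * below MINPAGES pages the recursion gives up.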
*/ static void smf_open_chunk(struct smf_sc *sc, off_t sz, off_t off, off_t *fail, off_t *sum) { void *p; off_t h; AN(sz); AZ(sz % sc->pagesize); if (*fail < (off_t)sc->pagesize * MINPAGES) return; if (sz > 0 && sz < *fail && sz < SSIZE_MAX) { p = mmap(NULL, sz, PROT_READ|PROT_WRITE, MAP_NOCORE | MAP_NOSYNC | MAP_SHARED, sc->fd, off); if (p != MAP_FAILED) { (void)madvise(p, sz, sc->advice); (*sum) += sz; new_smf(sc, p, off, sz); return; } } if (sz < *fail) *fail = sz; h = sz / 2; h -= (h % sc->pagesize); smf_open_chunk(sc, h, off, fail, sum); smf_open_chunk(sc, sz - h, off + h, fail, sum); } static void v_matchproto_(storage_open_f) smf_open(struct stevedore *st) { struct smf_sc *sc; off_t fail = 1 << 30; /* XXX: where is OFF_T_MAX ? */ off_t sum = 0; ASSERT_CLI(); st->lru = LRU_Alloc(); if (lck_smf == NULL) lck_smf = Lck_CreateClass(NULL, "smf"); CAST_OBJ_NOTNULL(sc, st->priv, SMF_SC_MAGIC); sc->stats = VSC_smf_New(NULL, NULL, st->ident); Lck_New(&sc->mtx, lck_smf); Lck_Lock(&sc->mtx); smf_open_chunk(sc, sc->filesize, 0, &fail, &sum); Lck_Unlock(&sc->mtx); if (sum < MINPAGES * (off_t)getpagesize()) { ARGV_ERR( "-sfile too small for this architecture," " minimum size is %jd MB\n", (MINPAGES * (intmax_t)getpagesize()) / (1<<20) ); } printf("SMF.%s mmap'ed %ju bytes of %ju\n", st->ident, (uintmax_t)sum, sc->filesize); /* XXX */ if (sum < MINPAGES * (off_t)getpagesize()) exit(4); sc->stats->g_space += sc->filesize; } /*--------------------------------------------------------------------*/ static struct storage * v_matchproto_(sml_alloc_f) smf_alloc(const struct stevedore *st, size_t sz) { struct smf *smf; struct smf_sc *sc; off_t size; CAST_OBJ_NOTNULL(sc, st->priv, SMF_SC_MAGIC); assert(sz > 0); // XXX missing OFF_T_MAX size = (off_t)sz; size += (sc->pagesize - 1UL); size &= ~(sc->pagesize - 1UL); Lck_Lock(&sc->mtx); sc->stats->c_req++; smf = alloc_smf(sc, size); if (smf == NULL) { sc->stats->c_fail++; Lck_Unlock(&sc->mtx); return (NULL); } CHECK_OBJ_NOTNULL(smf, SMF_MAGIC); sc->stats->g_alloc++; sc->stats->c_bytes += smf->size; sc->stats->g_bytes += smf->size; sc->stats->g_space -= smf->size; Lck_Unlock(&sc->mtx); CHECK_OBJ_NOTNULL(&smf->s, STORAGE_MAGIC); /*lint !e774 */ XXXAN(smf); assert(smf->size == size); smf->s.space = size; smf->s.priv = smf; smf->s.ptr = smf->ptr; smf->s.len = 0; return (&smf->s); } /*--------------------------------------------------------------------*/ static void v_matchproto_(sml_free_f) smf_free(struct storage *s) { struct smf *smf; struct smf_sc *sc; CHECK_OBJ_NOTNULL(s, STORAGE_MAGIC); CAST_OBJ_NOTNULL(smf, s->priv, SMF_MAGIC); sc = smf->sc; Lck_Lock(&sc->mtx); sc->stats->g_alloc--; sc->stats->c_freed += smf->size; sc->stats->g_bytes -= smf->size; sc->stats->g_space += smf->size; free_smf(smf); Lck_Unlock(&sc->mtx); } /*--------------------------------------------------------------------*/ const struct stevedore smf_stevedore = { .magic = STEVEDORE_MAGIC, .name = "file", .init = smf_init, .open = smf_open, .sml_alloc = smf_alloc, .sml_free = smf_free, .allocobj = SML_allocobj, .panic = SML_panic, .methods = &SML_methods, .allocbuf = SML_AllocBuf, .freebuf = SML_FreeBuf, }; varnish-7.5.0/bin/varnishd/storage/storage_lru.c000066400000000000000000000123751457605730600217460ustar00rootroot00000000000000/*- * Copyright (c) 2007-2015 Varnish Software AS * All rights reserved. 
* * Author: Dag-Erling Smørgav * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Least-Recently-Used logic for freeing space in stevedores. */ #include "config.h" #include #include "cache/cache_varnishd.h" #include "cache/cache_objhead.h" #include "storage/storage.h" struct lru { unsigned magic; #define LRU_MAGIC 0x3fec7bb0 VTAILQ_HEAD(,objcore) lru_head; struct lock mtx; }; static struct lru * lru_get(const struct objcore *oc) { CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(oc->stobj->stevedore, STEVEDORE_MAGIC); CHECK_OBJ_NOTNULL(oc->stobj->stevedore->lru, LRU_MAGIC); return (oc->stobj->stevedore->lru); } struct lru * LRU_Alloc(void) { struct lru *lru; ALLOC_OBJ(lru, LRU_MAGIC); AN(lru); VTAILQ_INIT(&lru->lru_head); Lck_New(&lru->mtx, lck_lru); return (lru); } void LRU_Free(struct lru **pp) { struct lru *lru; TAKE_OBJ_NOTNULL(lru, pp, LRU_MAGIC); Lck_Lock(&lru->mtx); AN(VTAILQ_EMPTY(&lru->lru_head)); Lck_Unlock(&lru->mtx); Lck_Delete(&lru->mtx); FREE_OBJ(lru); } void LRU_Add(struct objcore *oc, vtim_real now) { struct lru *lru; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); if (oc->flags & OC_F_PRIVATE) return; AZ(oc->boc); AN(isnan(oc->last_lru)); AZ(isnan(now)); lru = lru_get(oc); CHECK_OBJ_NOTNULL(lru, LRU_MAGIC); Lck_Lock(&lru->mtx); VTAILQ_INSERT_TAIL(&lru->lru_head, oc, lru_list); oc->last_lru = now; AZ(isnan(oc->last_lru)); Lck_Unlock(&lru->mtx); } void LRU_Remove(struct objcore *oc) { struct lru *lru; CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); if (oc->flags & OC_F_PRIVATE) return; AZ(oc->boc); lru = lru_get(oc); CHECK_OBJ_NOTNULL(lru, LRU_MAGIC); Lck_Lock(&lru->mtx); AZ(isnan(oc->last_lru)); VTAILQ_REMOVE(&lru->lru_head, oc, lru_list); oc->last_lru = NAN; Lck_Unlock(&lru->mtx); } void v_matchproto_(objtouch_f) LRU_Touch(struct worker *wrk, struct objcore *oc, vtim_real now) { struct lru *lru; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); if (oc->flags & OC_F_PRIVATE || isnan(oc->last_lru)) return; /* * To avoid the exphdl->mtx becoming a hotspot, we only * attempt to move objects if they have not been moved * recently and if the lock is available. This optimization * obviously leaves the LRU list imperfectly sorted. 
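 *
 * Two cheap guards implement this: an object is left alone if it was
 * moved less than lru_interval seconds ago, and Lck_Trylock() is used
 * so that a contended LRU lock never blocks a worker.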
*/ if (now - oc->last_lru < cache_param->lru_interval) return; lru = lru_get(oc); CHECK_OBJ_NOTNULL(lru, LRU_MAGIC); if (Lck_Trylock(&lru->mtx)) return; if (!isnan(oc->last_lru)) { VTAILQ_REMOVE(&lru->lru_head, oc, lru_list); VTAILQ_INSERT_TAIL(&lru->lru_head, oc, lru_list); VSC_C_main->n_lru_moved++; oc->last_lru = now; } Lck_Unlock(&lru->mtx); } /*-------------------------------------------------------------------- * Attempt to make space by nuking the oldest object on the LRU list * which isn't in use. * Returns: 1: did, 0: didn't; */ int LRU_NukeOne(struct worker *wrk, struct lru *lru) { struct objcore *oc, *oc2; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(lru, LRU_MAGIC); if (wrk->strangelove-- <= 0) { VSLb(wrk->vsl, SLT_ExpKill, "LRU reached nuke_limit"); VSC_C_main->n_lru_limited++; return (0); } /* Find the first currently unused object on the LRU. */ Lck_Lock(&lru->mtx); VTAILQ_FOREACH_SAFE(oc, &lru->lru_head, lru_list, oc2) { CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AZ(isnan(oc->last_lru)); VSLb(wrk->vsl, SLT_ExpKill, "LRU_Cand p=%p f=0x%x r=%d", oc, oc->flags, oc->refcnt); if (HSH_Snipe(wrk, oc)) { VSC_C_main->n_lru_nuked++; // XXX per lru ? VTAILQ_REMOVE(&lru->lru_head, oc, lru_list); VTAILQ_INSERT_TAIL(&lru->lru_head, oc, lru_list); break; } } Lck_Unlock(&lru->mtx); if (oc == NULL) { VSLb(wrk->vsl, SLT_ExpKill, "LRU_Fail"); return (0); } /* XXX: We could grab and return one storage segment to our caller */ ObjSlim(wrk, oc); VSLb(wrk->vsl, SLT_ExpKill, "LRU xid=%ju", VXID(ObjGetXID(wrk, oc))); (void)HSH_DerefObjCore(wrk, &oc, 0); // Ref from HSH_Snipe return (1); } varnish-7.5.0/bin/varnishd/storage/storage_malloc.c000066400000000000000000000142571457605730600224140ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * Storage method based on malloc(3) */ #include "config.h" #include "cache/cache_varnishd.h" #include "common/heritage.h" #include #include #include "storage/storage.h" #include "storage/storage_simple.h" #include "vnum.h" #include "VSC_sma.h" struct sma_sc { unsigned magic; #define SMA_SC_MAGIC 0x1ac8a345 struct lock sma_mtx; VCL_BYTES sma_max; VCL_BYTES sma_alloc; struct VSC_sma *stats; }; struct sma { unsigned magic; #define SMA_MAGIC 0x69ae9bb9 struct storage s; size_t sz; struct sma_sc *sc; }; static struct VSC_lck *lck_sma; static struct storage * v_matchproto_(sml_alloc_f) sma_alloc(const struct stevedore *st, size_t size) { struct sma_sc *sma_sc; struct sma *sma = NULL; void *p; CAST_OBJ_NOTNULL(sma_sc, st->priv, SMA_SC_MAGIC); Lck_Lock(&sma_sc->sma_mtx); sma_sc->stats->c_req++; if (sma_sc->sma_alloc + (VCL_BYTES)size > sma_sc->sma_max) { sma_sc->stats->c_fail++; size = 0; } else { sma_sc->sma_alloc += size; sma_sc->stats->c_bytes += size; sma_sc->stats->g_alloc++; sma_sc->stats->g_bytes += size; if (sma_sc->sma_max != VRT_INTEGER_MAX) sma_sc->stats->g_space -= size; } Lck_Unlock(&sma_sc->sma_mtx); if (size == 0) return (NULL); /* * Do not collaps the sma allocation with sma->s.ptr: it is not * a good idea. Not only would it make ->trim impossible, * performance-wise it would be a catastropy with chunksized * allocations growing another full page, just to accommodate the sma. */ p = malloc(size); if (p != NULL) { ALLOC_OBJ(sma, SMA_MAGIC); if (sma != NULL) sma->s.ptr = p; else free(p); } if (sma == NULL) { Lck_Lock(&sma_sc->sma_mtx); /* * XXX: Not nice to have counters go backwards, but we do * XXX: Not want to pick up the lock twice just for stats. */ sma_sc->stats->c_fail++; sma_sc->sma_alloc -= size; sma_sc->stats->c_bytes -= size; sma_sc->stats->g_alloc--; sma_sc->stats->g_bytes -= size; if (sma_sc->sma_max != VRT_INTEGER_MAX) sma_sc->stats->g_space += size; Lck_Unlock(&sma_sc->sma_mtx); return (NULL); } sma->sc = sma_sc; sma->sz = size; sma->s.priv = sma; sma->s.len = 0; sma->s.space = size; sma->s.magic = STORAGE_MAGIC; return (&sma->s); } static void v_matchproto_(sml_free_f) sma_free(struct storage *s) { struct sma_sc *sma_sc; struct sma *sma; CHECK_OBJ_NOTNULL(s, STORAGE_MAGIC); CAST_OBJ_NOTNULL(sma, s->priv, SMA_MAGIC); sma_sc = sma->sc; assert(sma->sz == sma->s.space); Lck_Lock(&sma_sc->sma_mtx); sma_sc->sma_alloc -= sma->sz; sma_sc->stats->g_alloc--; sma_sc->stats->g_bytes -= sma->sz; sma_sc->stats->c_freed += sma->sz; if (sma_sc->sma_max != VRT_INTEGER_MAX) sma_sc->stats->g_space += sma->sz; Lck_Unlock(&sma_sc->sma_mtx); free(sma->s.ptr); FREE_OBJ(sma); } static VCL_BYTES v_matchproto_(stv_var_used_space) sma_used_space(const struct stevedore *st) { struct sma_sc *sma_sc; CAST_OBJ_NOTNULL(sma_sc, st->priv, SMA_SC_MAGIC); return (sma_sc->sma_alloc); } static VCL_BYTES v_matchproto_(stv_var_free_space) sma_free_space(const struct stevedore *st) { struct sma_sc *sma_sc; CAST_OBJ_NOTNULL(sma_sc, st->priv, SMA_SC_MAGIC); return (sma_sc->sma_max - sma_sc->sma_alloc); } static void v_matchproto_(storage_init_f) sma_init(struct stevedore *parent, int ac, char * const *av) { const char *e; uintmax_t u; struct sma_sc *sc; ALLOC_OBJ(sc, SMA_SC_MAGIC); AN(sc); sc->sma_max = VRT_INTEGER_MAX; assert(sc->sma_max == VRT_INTEGER_MAX); parent->priv = sc; AZ(av[ac]); if (ac > 1) ARGV_ERR("(-s%s) too many arguments\n", parent->name); if (ac == 0 || *av[0] == '\0') return; e = VNUM_2bytes(av[0], &u, 0); if (e != NULL) ARGV_ERR("(-s%s) size \"%s\": %s\n", parent->name, av[0], e); if ((u 
!= (uintmax_t)(size_t)u)) ARGV_ERR("(-s%s) size \"%s\": too big\n", parent->name, av[0]); if (u < 1024*1024) ARGV_ERR("(-s%s) size \"%s\": too small, " "did you forget to specify M or G?\n", parent->name, av[0]); sc->sma_max = u; } static void v_matchproto_(storage_open_f) sma_open(struct stevedore *st) { struct sma_sc *sma_sc; ASSERT_CLI(); st->lru = LRU_Alloc(); if (lck_sma == NULL) lck_sma = Lck_CreateClass(NULL, "sma"); CAST_OBJ_NOTNULL(sma_sc, st->priv, SMA_SC_MAGIC); Lck_New(&sma_sc->sma_mtx, lck_sma); sma_sc->stats = VSC_sma_New(NULL, NULL, st->ident); if (sma_sc->sma_max != VRT_INTEGER_MAX) sma_sc->stats->g_space = sma_sc->sma_max; } const struct stevedore sma_stevedore = { .magic = STEVEDORE_MAGIC, .name = "malloc", .init = sma_init, .open = sma_open, .sml_alloc = sma_alloc, .sml_free = sma_free, .allocobj = SML_allocobj, .panic = SML_panic, .methods = &SML_methods, .var_free_space = sma_free_space, .var_used_space = sma_used_space, .allocbuf = SML_AllocBuf, .freebuf = SML_FreeBuf, }; varnish-7.5.0/bin/varnishd/storage/storage_persistent.c000066400000000000000000000414551457605730600233450ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Persistent storage method * * XXX: Before we start the client or maybe after it stops, we should give the * XXX: stevedores a chance to examine their storage for consistency. * * XXX: Do we ever free the LRU-lists ? 
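 *
 * The stevedore is registered under the name "deprecated_persistent"
 * (see the struct stevedore at the bottom of this file), so a silo is
 * typically configured with something along the lines of
 * "-s deprecated_persistent,/path/to/silo,5g"; the actual argument
 * parsing is done by smp_mgt_init() in storage_persistent_mgt.c.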
*/ #include "config.h" #include "cache/cache_varnishd.h" #include #include #include #include "cache/cache_obj.h" #include "cache/cache_objhead.h" #include "storage/storage.h" #include "storage/storage_simple.h" #include "vcli_serve.h" #include "vsha256.h" #include "vtim.h" #include "storage/storage_persistent.h" static struct obj_methods smp_oc_realmethods; static struct VSC_lck *lck_smp; static void smp_init(void); /*--------------------------------------------------------------------*/ /* * silos is unlocked, it only changes during startup when we are * single-threaded */ static VTAILQ_HEAD(,smp_sc) silos = VTAILQ_HEAD_INITIALIZER(silos); /*-------------------------------------------------------------------- * Add bans to silos */ static int smp_appendban(const struct smp_sc *sc, struct smp_signspace *spc, uint32_t len, const uint8_t *ban) { (void)sc; if (SIGNSPACE_FREE(spc) < len) return (-1); memcpy(SIGNSPACE_FRONT(spc), ban, len); smp_append_signspace(spc, len); return (0); } /* Trust that cache_ban.c takes care of locking */ static int smp_baninfo(const struct stevedore *stv, enum baninfo event, const uint8_t *ban, unsigned len) { struct smp_sc *sc; int r = 0; CAST_OBJ_NOTNULL(sc, stv->priv, SMP_SC_MAGIC); switch (event) { case BI_NEW: r |= smp_appendban(sc, &sc->ban1, len, ban); r |= smp_appendban(sc, &sc->ban2, len, ban); break; default: /* Ignored */ break; } return (r); } static void smp_banexport_spc(struct smp_signspace *spc, const uint8_t *bans, unsigned len) { smp_reset_signspace(spc); assert(SIGNSPACE_FREE(spc) >= len); memcpy(SIGNSPACE_DATA(spc), bans, len); smp_append_signspace(spc, len); smp_sync_sign(&spc->ctx); } static void smp_banexport(const struct stevedore *stv, const uint8_t *bans, unsigned len) { struct smp_sc *sc; CAST_OBJ_NOTNULL(sc, stv->priv, SMP_SC_MAGIC); smp_banexport_spc(&sc->ban1, bans, len); smp_banexport_spc(&sc->ban2, bans, len); } /*-------------------------------------------------------------------- * Attempt to open and read in a ban list */ static int smp_open_bans(const struct smp_sc *sc, struct smp_signspace *spc) { uint8_t *ptr, *pe; int i; ASSERT_CLI(); (void)sc; i = smp_chk_signspace(spc); if (i) return (i); ptr = SIGNSPACE_DATA(spc); pe = SIGNSPACE_FRONT(spc); BAN_Reload(ptr, pe - ptr); return (0); } /*-------------------------------------------------------------------- * Attempt to open and read in a segment list */ static int smp_open_segs(struct smp_sc *sc, struct smp_signspace *spc) { uint64_t length, l; struct smp_segptr *ss, *se; struct smp_seg *sg, *sg1, *sg2; int i, n = 0; ASSERT_CLI(); i = smp_chk_signspace(spc); if (i) return (i); ss = SIGNSPACE_DATA(spc); length = SIGNSPACE_LEN(spc); if (length == 0) { /* No segments */ sc->free_offset = sc->ident->stuff[SMP_SPC_STUFF]; return (0); } se = ss + length / sizeof *ss; se--; assert(ss <= se); /* * Locate the free reserve, there are only two basic cases, * but once we start dropping segments, things gets more complicated. */ sc->free_offset = se->offset + se->length; l = sc->mediasize - sc->free_offset; if (se->offset > ss->offset && l >= sc->free_reserve) { /* * [__xxxxyyyyzzzz___] * Plenty of space at tail, do nothing. */ } else if (ss->offset > se->offset) { /* * [zzzz____xxxxyyyy_] * (make) space between ends * We might nuke the entire tail end without getting * enough space, in which case we fall through to the * last check. 
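 *
 * Segments dropped here are simply never loaded, i.e. their objects
 * are sacrificed at startup in order to guarantee the free_reserve.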
*/ while (ss < se && ss->offset > se->offset) { l = ss->offset - (se->offset + se->length); if (l > sc->free_reserve) break; ss++; n++; } } if (l < sc->free_reserve) { /* * [__xxxxyyyyzzzz___] * (make) space at front */ sc->free_offset = sc->ident->stuff[SMP_SPC_STUFF]; while (ss < se) { l = ss->offset - sc->free_offset; if (l > sc->free_reserve) break; ss++; n++; } } assert(l >= sc->free_reserve); sg1 = NULL; sg2 = NULL; for (; ss <= se; ss++) { ALLOC_OBJ(sg, SMP_SEG_MAGIC); AN(sg); VTAILQ_INIT(&sg->objcores); sg->p = *ss; sg->flags |= SMP_SEG_MUSTLOAD; /* * HACK: prevent save_segs from nuking segment until we have * HACK: loaded it. */ sg->nobj = 1; if (sg1 != NULL) { assert(sg1->p.offset != sg->p.offset); if (sg1->p.offset < sg->p.offset) assert(smp_segend(sg1) <= sg->p.offset); else assert(smp_segend(sg) <= sg1->p.offset); } if (sg2 != NULL) { assert(sg2->p.offset != sg->p.offset); if (sg2->p.offset < sg->p.offset) assert(smp_segend(sg2) <= sg->p.offset); else assert(smp_segend(sg) <= sg2->p.offset); } /* XXX: check that they are inside silo */ /* XXX: check that they don't overlap */ /* XXX: check that they are serial */ sg->sc = sc; VTAILQ_INSERT_TAIL(&sc->segments, sg, list); sg2 = sg; if (sg1 == NULL) sg1 = sg; } printf("Dropped %d segments to make free_reserve\n", n); return (0); } /*-------------------------------------------------------------------- * Silo worker thread */ static void * v_matchproto_(bgthread_t) smp_thread(struct worker *wrk, void *priv) { struct smp_sc *sc; struct smp_seg *sg; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CAST_OBJ_NOTNULL(sc, priv, SMP_SC_MAGIC); sc->thread = pthread_self(); /* First, load all the objects from all segments */ VTAILQ_FOREACH(sg, &sc->segments, list) if (sg->flags & SMP_SEG_MUSTLOAD) smp_load_seg(wrk, sc, sg); sc->flags |= SMP_SC_LOADED; BAN_Release(); printf("Silo completely loaded\n"); /* Housekeeping loop */ Lck_Lock(&sc->mtx); while (!(sc->flags & SMP_SC_STOP)) { sg = VTAILQ_FIRST(&sc->segments); if (sg != NULL && sg != sc->cur_seg && sg->nobj == 0) smp_save_segs(sc); Lck_Unlock(&sc->mtx); VTIM_sleep(3.14159265359 - 2); Lck_Lock(&sc->mtx); } smp_save_segs(sc); Lck_Unlock(&sc->mtx); pthread_exit(0); NEEDLESS(return (NULL)); } /*-------------------------------------------------------------------- * Open a silo in the worker process */ static void v_matchproto_(storage_open_f) smp_open(struct stevedore *st) { struct smp_sc *sc; ASSERT_CLI(); if (VTAILQ_EMPTY(&silos)) smp_init(); CAST_OBJ_NOTNULL(sc, st->priv, SMP_SC_MAGIC); Lck_New(&sc->mtx, lck_smp); Lck_Lock(&sc->mtx); sc->stevedore = st; /* We trust the parent to give us a valid silo, for good measure: */ AZ(smp_valid_silo(sc)); AZ(mprotect((void*)sc->base, 4096, PROT_READ)); sc->ident = SIGN_DATA(&sc->idn); /* Check ban lists */ if (smp_chk_signspace(&sc->ban1)) { /* Ban list 1 is broken, use ban2 */ AZ(smp_chk_signspace(&sc->ban2)); smp_copy_signspace(&sc->ban1, &sc->ban2); smp_sync_sign(&sc->ban1.ctx); } else { /* Ban1 is OK, copy to ban2 for consistency */ smp_copy_signspace(&sc->ban2, &sc->ban1); smp_sync_sign(&sc->ban2.ctx); } AZ(smp_open_bans(sc, &sc->ban1)); /* We attempt seg1 first, and if that fails, try seg2 */ if (smp_open_segs(sc, &sc->seg1)) AZ(smp_open_segs(sc, &sc->seg2)); /* * Grap a reference to the tail of the ban list, until the thread * has loaded all objects, so we can be sure that all of our * proto-bans survive until then. */ BAN_Hold(); /* XXX: save segments to ensure consistency between seg1 & seg2 ? 
*/ /* XXX: abandon early segments to make sure we have free space ? */ (void)ObjSubscribeEvents(smp_oc_event, st, OEV_BANCHG|OEV_TTLCHG|OEV_INSERT); /* Open a new segment, so we are ready to write */ smp_new_seg(sc); /* Start the worker silo worker thread, it will load the objects */ WRK_BgThread(&sc->bgthread, "persistence", smp_thread, sc); VTAILQ_INSERT_TAIL(&silos, sc, list); Lck_Unlock(&sc->mtx); } /*-------------------------------------------------------------------- * Close a silo */ static void v_matchproto_(storage_close_f) smp_close(const struct stevedore *st, int warn) { struct smp_sc *sc; void *status; ASSERT_CLI(); CAST_OBJ_NOTNULL(sc, st->priv, SMP_SC_MAGIC); if (warn) { Lck_Lock(&sc->mtx); if (sc->cur_seg != NULL) smp_close_seg(sc, sc->cur_seg); AZ(sc->cur_seg); sc->flags |= SMP_SC_STOP; Lck_Unlock(&sc->mtx); } else { PTOK(pthread_join(sc->bgthread, &status)); AZ(status); } } /*-------------------------------------------------------------------- * Allocate a bite. * * Allocate [min_size...max_size] space from the bottom of the segment, * as is convenient. * * If 'so' + 'idx' is given, also allocate a smp_object from the top * of the segment. * * Return the segment in 'ssg' if given. */ static struct storage * smp_allocx(const struct stevedore *st, size_t min_size, size_t max_size, struct smp_object **so, unsigned *idx, struct smp_seg **ssg) { struct smp_sc *sc; struct storage *ss; struct smp_seg *sg; uint64_t left, extra; CAST_OBJ_NOTNULL(sc, st->priv, SMP_SC_MAGIC); assert(min_size <= max_size); max_size = IRNUP(sc, max_size); min_size = IRNUP(sc, min_size); extra = IRNUP(sc, sizeof(*ss)); if (so != NULL) { extra += sizeof(**so); AN(idx); } Lck_Lock(&sc->mtx); sg = NULL; ss = NULL; left = 0; if (sc->cur_seg != NULL) left = smp_spaceleft(sc, sc->cur_seg); if (left < extra + min_size) { if (sc->cur_seg != NULL) smp_close_seg(sc, sc->cur_seg); smp_new_seg(sc); if (sc->cur_seg != NULL) left = smp_spaceleft(sc, sc->cur_seg); else left = 0; } if (left >= extra + min_size) { AN(sc->cur_seg); if (left < extra + max_size) max_size = IRNDN(sc, left - extra); sg = sc->cur_seg; ss = (void*)(sc->base + sc->next_bot); sc->next_bot += max_size + IRNUP(sc, sizeof(*ss)); sg->nalloc++; if (so != NULL) { sc->next_top -= sizeof(**so); *so = (void*)(sc->base + sc->next_top); /* Render this smp_object mostly harmless */ EXP_ZERO((*so)); (*so)->ban = 0.; (*so)->ptr = 0; sg->objs = *so; *idx = ++sg->p.lobjlist; } (void)smp_spaceleft(sc, sg); /* for the assert */ } Lck_Unlock(&sc->mtx); if (ss == NULL) return (ss); AN(sg); assert(max_size >= min_size); /* Fill the storage structure */ INIT_OBJ(ss, STORAGE_MAGIC); ss->ptr = PRNUP(sc, ss + 1); ss->space = max_size; ss->priv = sc->base; if (ssg != NULL) *ssg = sg; return (ss); } /*-------------------------------------------------------------------- * Allocate an object */ static int v_matchproto_(storage_allocobj_f) smp_allocobj(struct worker *wrk, const struct stevedore *stv, struct objcore *oc, unsigned wsl) { struct object *o; struct storage *st; struct smp_sc *sc; struct smp_seg *sg; struct smp_object *so; unsigned objidx; unsigned ltot; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CAST_OBJ_NOTNULL(sc, stv->priv, SMP_SC_MAGIC); /* Don't entertain already dead objects */ if (oc->flags & OC_F_DYING) return (0); if (oc->t_origin <= 0.) return (0); if (oc->ttl + oc->grace + oc->keep <= 0.) 
return (0); ltot = sizeof(struct object) + PRNDUP(wsl); ltot = IRNUP(sc, ltot); st = NULL; sg = NULL; so = NULL; objidx = 0; do { st = smp_allocx(stv, ltot, ltot, &so, &objidx, &sg); if (st != NULL && st->space < ltot) { stv->sml_free(st); // NOP st = NULL; } } while (st == NULL && LRU_NukeOne(wrk, stv->lru)); if (st == NULL) return (0); AN(st); AN(sg); AN(so); assert(st->space >= ltot); o = SML_MkObject(stv, oc, st->ptr); AN(oc->stobj->stevedore); assert(oc->stobj->stevedore == stv); CHECK_OBJ_NOTNULL(o, OBJECT_MAGIC); o->objstore = st; st->len = sizeof(*o); Lck_Lock(&sc->mtx); sg->nfixed++; sg->nobj++; /* We have to do this somewhere, might as well be here... */ assert(sizeof so->hash == DIGEST_LEN); memcpy(so->hash, oc->objhead->digest, DIGEST_LEN); EXP_COPY(so, oc); so->ptr = (uint8_t*)(o->objstore) - sc->base; so->ban = BAN_Time(oc->ban); smp_init_oc(oc, sg, objidx); VTAILQ_INSERT_TAIL(&sg->objcores, oc, lru_list); Lck_Unlock(&sc->mtx); return (1); } /*-------------------------------------------------------------------- * Allocate a bite */ static struct storage * v_matchproto_(sml_alloc_f) smp_alloc(const struct stevedore *st, size_t size) { return (smp_allocx(st, size > 4096 ? 4096 : size, size, NULL, NULL, NULL)); } /*--------------------------------------------------------------------*/ const struct stevedore smp_stevedore = { .magic = STEVEDORE_MAGIC, .name = "deprecated_persistent", .init = smp_mgt_init, .open = smp_open, .close = smp_close, .allocobj = smp_allocobj, .baninfo = smp_baninfo, .banexport = smp_banexport, .methods = &smp_oc_realmethods, .sml_alloc = smp_alloc, .sml_free = NULL, .sml_getobj = smp_sml_getobj, }; /*-------------------------------------------------------------------- * Persistence is a bear to test unadulterated, so we cheat by adding * a cli command we can use to make it do tricks for us. */ static void debug_report_silo(struct cli *cli, const struct smp_sc *sc) { struct smp_seg *sg; VCLI_Out(cli, "Silo: %s (%s)\n", sc->stevedore->ident, sc->filename); VTAILQ_FOREACH(sg, &sc->segments, list) { VCLI_Out(cli, " Seg: [0x%jx ... +0x%jx]\n", (uintmax_t)sg->p.offset, (uintmax_t)sg->p.length); if (sg == sc->cur_seg) VCLI_Out(cli, " Alloc: [0x%jx ... 
0x%jx] = 0x%jx free\n", (uintmax_t)(sc->next_bot), (uintmax_t)(sc->next_top), (uintmax_t)(sc->next_top - sc->next_bot)); VCLI_Out(cli, " %u nobj, %u alloc, %u lobjlist, %u fixed\n", sg->nobj, sg->nalloc, sg->p.lobjlist, sg->nfixed); } } static void v_matchproto_(cli_func_t) debug_persistent(struct cli *cli, const char * const * av, void *priv) { struct smp_sc *sc; (void)priv; if (av[2] == NULL) { VTAILQ_FOREACH(sc, &silos, list) debug_report_silo(cli, sc); return; } VTAILQ_FOREACH(sc, &silos, list) if (!strcmp(av[2], sc->stevedore->ident)) break; if (sc == NULL) { VCLI_Out(cli, "Silo <%s> not found\n", av[2]); VCLI_SetResult(cli, CLIS_PARAM); return; } if (av[3] == NULL) { debug_report_silo(cli, sc); return; } Lck_Lock(&sc->mtx); if (!strcmp(av[3], "sync")) { if (sc->cur_seg != NULL) smp_close_seg(sc, sc->cur_seg); smp_new_seg(sc); } else if (!strcmp(av[3], "dump")) { debug_report_silo(cli, sc); } else { VCLI_Out(cli, "Unknown operation\n"); VCLI_SetResult(cli, CLIS_PARAM); } Lck_Unlock(&sc->mtx); } static struct cli_proto debug_cmds[] = { { CLICMD_DEBUG_PERSISTENT, "d", debug_persistent }, { NULL } }; /*-------------------------------------------------------------------- */ static void smp_init(void) { lck_smp = Lck_CreateClass(NULL, "smp"); CLI_AddFuncs(debug_cmds); smp_oc_realmethods.objfree = SML_methods.objfree; smp_oc_realmethods.objiterator = SML_methods.objiterator; smp_oc_realmethods.objgetspace = SML_methods.objgetspace; smp_oc_realmethods.objextend = SML_methods.objextend; smp_oc_realmethods.objbocdone = SML_methods.objbocdone; smp_oc_realmethods.objgetattr = SML_methods.objgetattr; smp_oc_realmethods.objsetattr = SML_methods.objsetattr; smp_oc_realmethods.objtouch = LRU_Touch; smp_oc_realmethods.objfree = smp_oc_objfree; } /*-------------------------------------------------------------------- * Pause until all silos have loaded. */ void SMP_Ready(void) { struct smp_sc *sc; ASSERT_CLI(); do { VTAILQ_FOREACH(sc, &silos, list) if (!(sc->flags & SMP_SC_LOADED)) break; if (sc != NULL) (void)sleep(1); } while (sc != NULL); } varnish-7.5.0/bin/varnishd/storage/storage_persistent.h000066400000000000000000000227471457605730600233550ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Persistent storage method * * XXX: Before we start the client or maybe after it stops, we should give the * XXX: stevedores a chance to examine their storage for consistency. * * XXX: Do we ever free the LRU-lists ? */ /* * * Overall layout: * * struct smp_ident; Identification and geometry * sha256[...] checksum of same * * struct smp_sign; * banspace_1; First ban-space * sha256[...] checksum of same * * struct smp_sign; * banspace_2; Second ban-space * sha256[...] checksum of same * * struct smp_sign; * struct smp_segment_1[N]; First Segment table * sha256[...] checksum of same * * struct smp_sign; * struct smp_segment_2[N]; Second Segment table * sha256[...] checksum of same * * N segments { * struct smp_sign; * struct smp_object[M] Objects in segment * sha256[...] checksum of same * objspace * } * */ /* * The identblock is located in the first sector of the storage space. * This is written once and not subsequently modified in normal operation. * It is immediately followed by a SHA256sum of the structure, as stored. */ struct smp_ident { char ident[32]; /* Human readable ident * so people and programs * can tell what the file * or device contains. */ uint32_t byte_order; /* 0x12345678 */ uint32_t size; /* sizeof(struct smp_ident) */ uint32_t major_version; uint32_t unique; uint32_t align; /* alignment in silo */ uint32_t granularity; /* smallest ... in bytes */ uint64_t mediasize; /* ... in bytes */ uint64_t stuff[6]; /* pointers to stuff */ #define SMP_BAN1_STUFF 0 #define SMP_BAN2_STUFF 1 #define SMP_SEG1_STUFF 2 #define SMP_SEG2_STUFF 3 #define SMP_SPC_STUFF 4 #define SMP_END_STUFF 5 }; /* * The size of smp_ident should be fixed and constant across all platforms. * We enforce that with the following #define and an assert in smp_init() */ #define SMP_IDENT_SIZE 112 #define SMP_IDENT_STRING "Varnish Persistent Storage Silo" /* * This is used to sign various bits on the disk. */ struct smp_sign { char ident[8]; uint32_t unique; uint64_t mapped; /* The length field is the length of the signed data only * (does not include struct smp_sign) */ uint64_t length; /* NB: Must be last */ }; #define SMP_SIGN_SPACE (sizeof(struct smp_sign) + VSHA256_LEN) /* * A segment pointer. */ struct smp_segptr { uint64_t offset; /* rel to silo */ uint64_t length; /* rel to offset */ uint64_t objlist; /* rel to silo */ uint32_t lobjlist; /* len of objlist */ }; /* * An object descriptor * * A positive ttl is obj.ttl with obj.grace being NAN * A negative ttl is - (obj.ttl + obj.grace) */ struct smp_object { uint8_t hash[32]; /* really: DIGEST_LEN */ double t_origin; float ttl; float grace; float keep; uint32_t __filler__; /* -> align/8 on 32bit */ double ban; uint64_t ptr; /* rel to silo */ }; #define ASSERT_SILO_THREAD(sc) \ do {assert(pthread_equal(pthread_self(), (sc)->thread));} while (0) /* * Context for a signature. * * A signature is a sequence of bytes in the silo, signed by a SHA256 hash * which follows the bytes. 
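 *
 * On disk a signature therefore occupies:
 *
 *     struct smp_sign      (header; 'length' counts the payload only)
 *     payload[length]      (the signed bytes)
 *     SHA256 digest        (VSHA256_LEN bytes)
 *
 * which is the overhead accounted for by SMP_SIGN_SPACE.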
* * The context structure allows us to append to a signature without * recalculating the entire SHA256 hash. */ struct smp_signctx { struct smp_sign *ss; struct VSHA256Context ctx; uint32_t unique; const char *id; }; /* * A space wrapped by a signature * * A signspace is a chunk of the silo that is wrapped by a * signature. It has attributes for size, so range checking can be * performed. * */ struct smp_signspace { struct smp_signctx ctx; uint8_t *start; uint64_t size; }; struct smp_sc; /* XXX: name confusion with on-media version ? */ struct smp_seg { unsigned magic; #define SMP_SEG_MAGIC 0x45c61895 struct smp_sc *sc; VTAILQ_HEAD(,objcore) objcores; VTAILQ_ENTRY(smp_seg) list; /* on smp_sc.smp_segments */ struct smp_segptr p; unsigned flags; #define SMP_SEG_MUSTLOAD (1 << 0) #define SMP_SEG_LOADED (1 << 1) uint32_t nobj; /* Number of objects */ uint32_t nalloc; /* Allocations */ uint32_t nfixed; /* How many fixed objects */ /* Only for open segment */ struct smp_object *objs; /* objdesc array */ struct smp_signctx ctx[1]; }; VTAILQ_HEAD(smp_seghead, smp_seg); struct smp_sc { unsigned magic; #define SMP_SC_MAGIC 0x7b73af0a struct stevedore *parent; pthread_t bgthread; unsigned flags; #define SMP_SC_LOADED (1 << 0) #define SMP_SC_STOP (1 << 1) const struct stevedore *stevedore; int fd; const char *filename; uint64_t mediasize; uintptr_t align; uint32_t granularity; uint32_t unique; uint8_t *base; struct smp_ident *ident; struct smp_seghead segments; struct smp_seg *cur_seg; uint64_t next_bot; /* next alloc address bottom */ uint64_t next_top; /* next alloc address top */ uint64_t free_offset; pthread_t thread; VTAILQ_ENTRY(smp_sc) list; struct smp_signctx idn; struct smp_signspace ban1; struct smp_signspace ban2; struct smp_signspace seg1; struct smp_signspace seg2; struct lock mtx; /* Cleaner metrics */ unsigned min_nseg; unsigned aim_nseg; unsigned max_nseg; uint64_t min_segl; uint64_t aim_segl; uint64_t max_segl; uint64_t free_reserve; }; /*--------------------------------------------------------------------*/ /* Pointer round up/down & assert */ #define PRNUP(sc, x) ((void*)RUP2((uintptr_t)(x), sc->align)) /* Integer round up/down & assert */ #define IRNDN(sc, x) RDN2(x, sc->align) #define IRNUP(sc, x) RUP2(x, sc->align) #define IASSERTALIGN(sc, x) assert(IRNDN(sc, x) == (x)) /*--------------------------------------------------------------------*/ #define ASSERT_PTR_IN_SILO(sc, ptr) \ assert((const void*)(ptr) >= (const void*)((sc)->base) && \ (const void*)(ptr) < (const void *)((sc)->base + (sc)->mediasize)) /*--------------------------------------------------------------------*/ #define SIGN_DATA(ctx) ((void *)((ctx)->ss + 1)) #define SIGN_END(ctx) ((void *)((int8_t *)SIGN_DATA(ctx) + (ctx)->ss->length)) #define SIGNSPACE_DATA(spc) (SIGN_DATA(&(spc)->ctx)) #define SIGNSPACE_FRONT(spc) (SIGN_END(&(spc)->ctx)) #define SIGNSPACE_LEN(spc) ((spc)->ctx.ss->length) #define SIGNSPACE_FREE(spc) ((spc)->size - SIGNSPACE_LEN(spc)) /* storage_persistent_mgt.c */ void smp_mgt_init(struct stevedore *parent, int ac, char * const *av); /* storage_persistent_silo.c */ void smp_load_seg(struct worker *, const struct smp_sc *sc, struct smp_seg *sg); void smp_new_seg(struct smp_sc *sc); void smp_close_seg(struct smp_sc *sc, struct smp_seg *sg); void smp_init_oc(struct objcore *oc, struct smp_seg *sg, unsigned objidx); void smp_save_segs(struct smp_sc *sc); sml_getobj_f smp_sml_getobj; void smp_oc_objfree(struct worker *, struct objcore *); obj_event_f smp_oc_event; /* storage_persistent_subr.c */ void 
smp_def_sign(const struct smp_sc *sc, struct smp_signctx *ctx, uint64_t off, const char *id); int smp_chk_sign(struct smp_signctx *ctx); void smp_reset_sign(struct smp_signctx *ctx); void smp_sync_sign(const struct smp_signctx *ctx); int smp_chk_signspace(struct smp_signspace *spc); void smp_append_signspace(struct smp_signspace *spc, uint32_t len); void smp_reset_signspace(struct smp_signspace *spc); void smp_copy_signspace(struct smp_signspace *dst, const struct smp_signspace *src); void smp_newsilo(struct smp_sc *sc); int smp_valid_silo(struct smp_sc *sc); /*-------------------------------------------------------------------- * Caculate payload of some stuff */ static inline uint64_t smp_stuff_len(const struct smp_sc *sc, unsigned stuff) { uint64_t l; assert(stuff < SMP_END_STUFF); l = sc->ident->stuff[stuff + 1] - sc->ident->stuff[stuff]; l -= SMP_SIGN_SPACE; return (l); } static inline uint64_t smp_segend(const struct smp_seg *sg) { return (sg->p.offset + sg->p.length); } static inline uint64_t smp_spaceleft(const struct smp_sc *sc, const struct smp_seg *sg) { IASSERTALIGN(sc, sc->next_bot); assert(sc->next_bot <= sc->next_top - IRNUP(sc, SMP_SIGN_SPACE)); assert(sc->next_bot >= sg->p.offset); assert(sc->next_top < sg->p.offset + sg->p.length); return ((sc->next_top - sc->next_bot) - IRNUP(sc, SMP_SIGN_SPACE)); } varnish-7.5.0/bin/varnishd/storage/storage_persistent_silo.c000066400000000000000000000370741457605730600243750ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Persistent storage method * * XXX: Before we start the client or maybe after it stops, we should give the * XXX: stevedores a chance to examine their storage for consistency. 
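 *
 * This file holds the segment level logic: smp_load_seg() resurrects
 * the objects of one segment, smp_new_seg()/smp_close_seg() manage the
 * currently open segment, and smp_save_segs() writes the segment list
 * back to both copies in the silo.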
* */ #include "config.h" #include #include #include "cache/cache_varnishd.h" #include "vsha256.h" #include "vend.h" #include "vtim.h" #include "cache/cache_objhead.h" #include "storage/storage.h" #include "storage/storage_simple.h" #include "storage/storage_persistent.h" /* * We use the top bit to mark objects still needing fixup * In theory this may need to be platform dependent */ #define NEED_FIXUP (1U << 31) /*-------------------------------------------------------------------- * Write the segmentlist back to the silo. * * We write the first copy, sync it synchronously, then write the * second copy and sync it synchronously. * * Provided the kernel doesn't lie, that means we will always have * at least one valid copy on in the silo. */ static void smp_save_seg(const struct smp_sc *sc, struct smp_signspace *spc) { struct smp_segptr *ss; struct smp_seg *sg; uint64_t length; Lck_AssertHeld(&sc->mtx); smp_reset_signspace(spc); ss = SIGNSPACE_DATA(spc); length = 0; VTAILQ_FOREACH(sg, &sc->segments, list) { assert(sg->p.offset < sc->mediasize); assert(sg->p.offset + sg->p.length <= sc->mediasize); *ss = sg->p; ss++; length += sizeof *ss; } smp_append_signspace(spc, length); smp_sync_sign(&spc->ctx); } void smp_save_segs(struct smp_sc *sc) { struct smp_seg *sg, *sg2; CHECK_OBJ_NOTNULL(sc, SMP_SC_MAGIC); Lck_AssertHeld(&sc->mtx); /* * Remove empty segments from the front of the list * before we write the segments to disk. */ VTAILQ_FOREACH_SAFE(sg, &sc->segments, list, sg2) { CHECK_OBJ_NOTNULL(sg, SMP_SEG_MAGIC); if (sg->nobj > 0) break; if (sg == sc->cur_seg) continue; VTAILQ_REMOVE(&sc->segments, sg, list); AN(VTAILQ_EMPTY(&sg->objcores)); FREE_OBJ(sg); } smp_save_seg(sc, &sc->seg1); smp_save_seg(sc, &sc->seg2); } /*-------------------------------------------------------------------- * Load segments * * The overall objective is to register the existence of an object, based * only on the minimally sized struct smp_object, without causing the * main object to be faulted in. * * XXX: We can test this by mprotecting the main body of the segment * XXX: until the first fixup happens, or even just over this loop, * XXX: However: the requires that the smp_objects starter further * XXX: into the segment than a page so that they do not get hit * XXX: by the protection. */ void smp_load_seg(struct worker *wrk, const struct smp_sc *sc, struct smp_seg *sg) { struct smp_object *so; struct objcore *oc; struct ban *ban; uint32_t no; double t_now = VTIM_real(); struct smp_signctx ctx[1]; ASSERT_SILO_THREAD(sc); CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(sg, SMP_SEG_MAGIC); assert(sg->flags & SMP_SEG_MUSTLOAD); sg->flags &= ~SMP_SEG_MUSTLOAD; AN(sg->p.offset); if (sg->p.objlist == 0) return; smp_def_sign(sc, ctx, sg->p.offset, "SEGHEAD"); if (smp_chk_sign(ctx)) return; /* test SEGTAIL */ /* test OBJIDX */ so = (void*)(sc->base + sg->p.objlist); sg->objs = so; no = sg->p.lobjlist; /* Clear the bogus "hold" count */ sg->nobj = 0; for (;no > 0; so++,no--) { if (EXP_WHEN(so) < t_now) continue; ban = BAN_FindBan(so->ban); AN(ban); oc = ObjNew(wrk); oc->stobj->stevedore = sc->parent; smp_init_oc(oc, sg, no); VTAILQ_INSERT_TAIL(&sg->objcores, oc, lru_list); oc->stobj->priv2 |= NEED_FIXUP; EXP_COPY(oc, so); sg->nobj++; oc->refcnt++; HSH_Insert(wrk, so->hash, oc, ban); AN(oc->ban); HSH_DerefBoc(wrk, oc); // XXX Keep it an stream resurrection? 
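		/*
		 * At this point the objcore is a "vampire": it is hashed and
		 * can be looked up, but the object body has not been touched
		 * yet and will only be fixed up (and counted as a real
		 * object) the first time smp_sml_getobj() dereferences it.
		 */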
(void)HSH_DerefObjCore(wrk, &oc, HSH_RUSH_POLICY); wrk->stats->n_vampireobject++; } Pool_Sumstat(wrk); sg->flags |= SMP_SEG_LOADED; } /*-------------------------------------------------------------------- * Create a new segment */ void smp_new_seg(struct smp_sc *sc) { struct smp_seg tmpsg; struct smp_seg *sg; AZ(sc->cur_seg); Lck_AssertHeld(&sc->mtx); /* XXX: find where it goes in silo */ INIT_OBJ(&tmpsg, SMP_SEG_MAGIC); tmpsg.sc = sc; tmpsg.p.offset = sc->free_offset; /* XXX: align */ assert(tmpsg.p.offset >= sc->ident->stuff[SMP_SPC_STUFF]); assert(tmpsg.p.offset < sc->mediasize); tmpsg.p.length = sc->aim_segl; tmpsg.p.length = RDN2(tmpsg.p.length, 8); if (smp_segend(&tmpsg) > sc->mediasize) /* XXX: Consider truncation in this case */ tmpsg.p.offset = sc->ident->stuff[SMP_SPC_STUFF]; assert(smp_segend(&tmpsg) <= sc->mediasize); sg = VTAILQ_FIRST(&sc->segments); if (sg != NULL && tmpsg.p.offset <= sg->p.offset) { if (smp_segend(&tmpsg) > sg->p.offset) /* No more space, return (cur_seg will be NULL) */ /* XXX: Consider truncation instead of failing */ return; assert(smp_segend(&tmpsg) <= sg->p.offset); } if (tmpsg.p.offset == sc->ident->stuff[SMP_SPC_STUFF]) printf("Wrapped silo\n"); ALLOC_OBJ(sg, SMP_SEG_MAGIC); if (sg == NULL) return; *sg = tmpsg; VTAILQ_INIT(&sg->objcores); sg->p.offset = IRNUP(sc, sg->p.offset); sg->p.length -= sg->p.offset - tmpsg.p.offset; sg->p.length = IRNDN(sc, sg->p.length); assert(sg->p.offset + sg->p.length <= tmpsg.p.offset + tmpsg.p.length); sc->free_offset = sg->p.offset + sg->p.length; VTAILQ_INSERT_TAIL(&sc->segments, sg, list); /* Neuter the new segment in case there is an old one there */ AN(sg->p.offset); smp_def_sign(sc, sg->ctx, sg->p.offset, "SEGHEAD"); smp_reset_sign(sg->ctx); smp_sync_sign(sg->ctx); /* Set up our allocation points */ sc->cur_seg = sg; sc->next_bot = sg->p.offset + IRNUP(sc, SMP_SIGN_SPACE); sc->next_top = smp_segend(sg); sc->next_top -= IRNUP(sc, SMP_SIGN_SPACE); IASSERTALIGN(sc, sc->next_bot); IASSERTALIGN(sc, sc->next_top); sg->objs = (void*)(sc->base + sc->next_top); } /*-------------------------------------------------------------------- * Close a segment */ void smp_close_seg(struct smp_sc *sc, struct smp_seg *sg) { uint64_t left, dst, len; void *dp; CHECK_OBJ_NOTNULL(sc, SMP_SC_MAGIC); Lck_AssertHeld(&sc->mtx); CHECK_OBJ_NOTNULL(sg, SMP_SEG_MAGIC); assert(sg == sc->cur_seg); AN(sg->p.offset); sc->cur_seg = NULL; if (sg->nalloc == 0) { /* If segment is empty, delete instead */ VTAILQ_REMOVE(&sc->segments, sg, list); assert(sg->p.offset >= sc->ident->stuff[SMP_SPC_STUFF]); assert(sg->p.offset < sc->mediasize); sc->free_offset = sg->p.offset; AN(VTAILQ_EMPTY(&sg->objcores)); FREE_OBJ(sg); return; } /* * If there is enough space left, that we can move the smp_objects * down without overwriting the present copy, we will do so to * compact the segment. 
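 *
 * The object index normally sits at the top of the segment; copying it
 * down to just above the last allocation lets the unused middle of the
 * segment be handed back as free space for the next segment.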
*/ left = smp_spaceleft(sc, sg); len = sizeof(struct smp_object) * sg->p.lobjlist; if (len < left) { dst = sc->next_bot + IRNUP(sc, SMP_SIGN_SPACE); dp = sc->base + dst; assert((uintptr_t)dp + len < (uintptr_t)sg->objs); memcpy(dp, sg->objs, len); sc->next_top = dst; sg->objs = dp; sg->p.length = (sc->next_top - sg->p.offset) + len + IRNUP(sc, SMP_SIGN_SPACE); (void)smp_spaceleft(sc, sg); /* for the asserts */ } /* Update the segment header */ sg->p.objlist = sc->next_top; /* Write the (empty) OBJIDX signature */ sc->next_top -= IRNUP(sc, SMP_SIGN_SPACE); assert(sc->next_top >= sc->next_bot); smp_def_sign(sc, sg->ctx, sc->next_top, "OBJIDX"); smp_reset_sign(sg->ctx); smp_sync_sign(sg->ctx); /* Write the (empty) SEGTAIL signature */ smp_def_sign(sc, sg->ctx, sg->p.offset + sg->p.length - IRNUP(sc, SMP_SIGN_SPACE), "SEGTAIL"); smp_reset_sign(sg->ctx); smp_sync_sign(sg->ctx); /* Save segment list */ smp_save_segs(sc); sc->free_offset = smp_segend(sg); } /*--------------------------------------------------------------------- */ static struct smp_object * smp_find_so(const struct smp_seg *sg, unsigned priv2) { struct smp_object *so; priv2 &= ~NEED_FIXUP; assert(priv2 > 0); assert(priv2 <= sg->p.lobjlist); so = &sg->objs[sg->p.lobjlist - priv2]; return (so); } /*--------------------------------------------------------------------- * Check if a given storage structure is valid to use */ static int smp_loaded_st(const struct smp_sc *sc, const struct smp_seg *sg, const struct storage *st) { struct smp_seg *sg2; const uint8_t *pst; uint64_t o; (void)sg; /* XXX: faster: Start search from here */ pst = (const void *)st; if (pst < (sc->base + sc->ident->stuff[SMP_SPC_STUFF])) return (0x01); /* Before silo payload start */ if (pst > (sc->base + sc->ident->stuff[SMP_END_STUFF])) return (0x02); /* After silo end */ o = pst - sc->base; /* Find which segment contains the storage structure */ VTAILQ_FOREACH(sg2, &sc->segments, list) if (o > sg2->p.offset && (o + sizeof(*st)) < sg2->p.objlist) break; if (sg2 == NULL) return (0x04); /* No claiming segment */ if (!(sg2->flags & SMP_SEG_LOADED)) return (0x08); /* Claiming segment not loaded */ /* It is now safe to access the storage structure */ if (st->magic != STORAGE_MAGIC) return (0x10); /* Not enough magic */ if (o + st->space >= sg2->p.objlist) return (0x20); /* Allocation not inside segment */ if (st->len > st->space) return (0x40); /* Plain bad... */ /* * XXX: We could patch up st->stevedore and st->priv here * XXX: but if things go right, we will never need them. 
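 *
 * The non-zero return values above are distinct bits only to make a
 * failing check easy to identify when debugging; any non-zero result
 * makes the caller discard the object by zeroing its expiry.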
*/ return (0); } /*--------------------------------------------------------------------- * objcore methods for persistent objects */ static void fix_ptr(const struct smp_seg *sg, const struct storage *st, void **ptr) { // See comment where used below uintptr_t u; u = (uintptr_t)(*ptr); if (u != 0) { u -= (uintptr_t)st->priv; u += (uintptr_t)sg->sc->base; } *ptr = (void *)u; } struct object * v_matchproto_(sml_getobj_f) smp_sml_getobj(struct worker *wrk, struct objcore *oc) { struct object *o; struct smp_seg *sg; struct smp_object *so; struct storage *st, *st2; uint64_t l; int bad; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AN(oc->stobj->stevedore); CAST_OBJ_NOTNULL(sg, oc->stobj->priv, SMP_SEG_MAGIC); so = smp_find_so(sg, oc->stobj->priv2); /************************************************************** * The silo may have been remapped at a different address, * because the people who came up with ASLR were unable * imagine that there might be beneficial use-cases for * always mapping a file at the same specific address. * * We store the silos base address in struct storage->priv * and manually fix all the pointers in struct object and * the list of struct storage objects which hold the body. * When done, we update the storage->priv, so we can do the * same trick next time. * * This is a prohibitively expensive workaround, but we can * live with it, because the role of this stevedore is only * to keep the internal stevedore API honest. */ st = (void*)(sg->sc->base + so->ptr); fix_ptr(sg, st, (void**)&st->ptr); o = (void*)st->ptr; fix_ptr(sg, st, (void**)&o->objstore); fix_ptr(sg, st, (void**)&o->va_vary); fix_ptr(sg, st, (void**)&o->va_headers); fix_ptr(sg, st, (void**)&o->list.vtqh_first); fix_ptr(sg, st, (void**)&o->list.vtqh_last); st->priv = (void*)(sg->sc->base); st2 = o->list.vtqh_first; while (st2 != NULL) { fix_ptr(sg, st2, (void**)&st2->list.vtqe_next); fix_ptr(sg, st2, (void**)&st2->list.vtqe_prev); fix_ptr(sg, st2, (void**)&st2->ptr); st2->priv = (void*)(sg->sc->base); st2 = st2->list.vtqe_next; } /* * The object may not be in this segment since we allocate it * In a separate operation than the smp_object. We could check * that it is in a later segment, but that would be complicated. * XXX: For now, be happy if it is inside the silo */ ASSERT_PTR_IN_SILO(sg->sc, o); CHECK_OBJ_NOTNULL(o, OBJECT_MAGIC); /* * If this flag is not set, it will not be, and the lock is not * needed to test it. */ if (!(oc->stobj->priv2 & NEED_FIXUP)) return (o); Lck_Lock(&sg->sc->mtx); /* Check again, we might have raced. 
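 * (Double-checked locking: the NEED_FIXUP flag was tested without the
 * lock above; now that sg->sc->mtx is held it is tested again before
 * the fixup and the statistics update are applied.)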
*/ if (oc->stobj->priv2 & NEED_FIXUP) { /* We trust caller to have a refcnt for us */ bad = 0; l = 0; VTAILQ_FOREACH(st, &o->list, list) { bad |= smp_loaded_st(sg->sc, sg, st); if (bad) break; l += st->len; } if (l != vbe64dec(o->fa_len)) bad |= 0x100; if (bad) { EXP_ZERO(oc); EXP_ZERO(so); } sg->nfixed++; wrk->stats->n_object++; wrk->stats->n_vampireobject--; oc->stobj->priv2 &= ~NEED_FIXUP; } Lck_Unlock(&sg->sc->mtx); return (o); } void v_matchproto_(objfree_f) smp_oc_objfree(struct worker *wrk, struct objcore *oc) { struct smp_seg *sg; struct smp_object *so; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CAST_OBJ_NOTNULL(sg, oc->stobj->priv, SMP_SEG_MAGIC); so = smp_find_so(sg, oc->stobj->priv2); Lck_Lock(&sg->sc->mtx); EXP_ZERO(so); so->ptr = 0; assert(sg->nobj > 0); sg->nobj--; if (oc->stobj->priv2 & NEED_FIXUP) { wrk->stats->n_vampireobject--; } else { assert(sg->nfixed > 0); sg->nfixed--; wrk->stats->n_object--; } VTAILQ_REMOVE(&sg->objcores, oc, lru_list); Lck_Unlock(&sg->sc->mtx); memset(oc->stobj, 0, sizeof oc->stobj); } /*--------------------------------------------------------------------*/ void smp_init_oc(struct objcore *oc, struct smp_seg *sg, unsigned objidx) { AZ(objidx & NEED_FIXUP); oc->stobj->priv = sg; oc->stobj->priv2 = objidx; } /*--------------------------------------------------------------------*/ void v_matchproto_(obj_event_f) smp_oc_event(struct worker *wrk, void *priv, struct objcore *oc, unsigned ev) { struct stevedore *st; struct smp_seg *sg; struct smp_object *so; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CAST_OBJ_NOTNULL(st, priv, STEVEDORE_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); if (oc->stobj->stevedore != st) return; CAST_OBJ_NOTNULL(sg, oc->stobj->priv, SMP_SEG_MAGIC); CHECK_OBJ_NOTNULL(sg->sc, SMP_SC_MAGIC); so = smp_find_so(sg, oc->stobj->priv2); if (sg == sg->sc->cur_seg) { /* Lock necessary, we might race close_seg */ Lck_Lock(&sg->sc->mtx); if (ev & (OEV_BANCHG|OEV_INSERT)) so->ban = BAN_Time(oc->ban); if (ev & (OEV_TTLCHG|OEV_INSERT)) EXP_COPY(so, oc); Lck_Unlock(&sg->sc->mtx); } else { if (ev & (OEV_BANCHG|OEV_INSERT)) so->ban = BAN_Time(oc->ban); if (ev & (OEV_TTLCHG|OEV_INSERT)) EXP_COPY(so, oc); } } varnish-7.5.0/bin/varnishd/storage/storage_persistent_subr.c000066400000000000000000000265241457605730600244000ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Persistent storage method * * XXX: Before we start the client or maybe after it stops, we should give the * XXX: stevedores a chance to examine their storage for consistency. * * XXX: Do we ever free the LRU-lists ? */ #include "config.h" #include "cache/cache_varnishd.h" #include "common/heritage.h" #include #include #include #include "storage/storage.h" #include "vrnd.h" #include "vsha256.h" #include "storage/storage_persistent.h" static void smp_msync(const void *addr, size_t length); /*-------------------------------------------------------------------- * SIGNATURE functions * The signature is SHA256 over: * 1. The smp_sign struct up to but not including the length field. * 2. smp_sign->length bytes, starting after the smp_sign structure * 3. The smp-sign->length field. * The signature is stored after the byte-range from step 2. */ /*-------------------------------------------------------------------- * Define a signature by location and identifier. */ void smp_def_sign(const struct smp_sc *sc, struct smp_signctx *ctx, uint64_t off, const char *id) { AZ(off & 7); /* Alignment */ assert(strlen(id) < sizeof ctx->ss->ident); memset(ctx, 0, sizeof *ctx); ctx->ss = (void*)(sc->base + off); ctx->unique = sc->unique; ctx->id = id; } /*-------------------------------------------------------------------- * Check that a signature is good, leave state ready for append */ int smp_chk_sign(struct smp_signctx *ctx) { struct VSHA256Context cx; unsigned char sign[VSHA256_LEN]; int r = 0; if (strncmp(ctx->id, ctx->ss->ident, sizeof ctx->ss->ident)) r = 1; else if (ctx->unique != ctx->ss->unique) r = 2; else if (!ctx->ss->mapped) r = 3; else { VSHA256_Init(&ctx->ctx); VSHA256_Update(&ctx->ctx, ctx->ss, offsetof(struct smp_sign, length)); VSHA256_Update(&ctx->ctx, SIGN_DATA(ctx), ctx->ss->length); cx = ctx->ctx; VSHA256_Update(&cx, &ctx->ss->length, sizeof(ctx->ss->length)); VSHA256_Final(sign, &cx); if (memcmp(sign, SIGN_END(ctx), sizeof sign)) r = 4; } if (r) { fprintf(stderr, "CHK(%p %s %p %s) = %d\n", ctx, ctx->id, ctx->ss, r > 1 ? ctx->ss->ident : "", r); } return (r); } /*-------------------------------------------------------------------- * Append data to a signature */ static void smp_append_sign(struct smp_signctx *ctx, const void *ptr, uint32_t len) { struct VSHA256Context cx; unsigned char sign[VSHA256_LEN]; if (len != 0) { VSHA256_Update(&ctx->ctx, ptr, len); ctx->ss->length += len; } cx = ctx->ctx; VSHA256_Update(&cx, &ctx->ss->length, sizeof(ctx->ss->length)); VSHA256_Final(sign, &cx); memcpy(SIGN_END(ctx), sign, sizeof sign); } /*-------------------------------------------------------------------- * Reset a signature to empty, prepare for appending. 
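 * Resetting re-seeds the running SHA256 context with the header fields
 * and immediately appends zero bytes, leaving an empty but valid
 * signature in place.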
*/ void smp_reset_sign(struct smp_signctx *ctx) { memset(ctx->ss, 0, sizeof *ctx->ss); assert(strlen(ctx->id) < sizeof *ctx->ss); strcpy(ctx->ss->ident, ctx->id); ctx->ss->unique = ctx->unique; ctx->ss->mapped = (uintptr_t)ctx->ss; VSHA256_Init(&ctx->ctx); VSHA256_Update(&ctx->ctx, ctx->ss, offsetof(struct smp_sign, length)); smp_append_sign(ctx, NULL, 0); } /*-------------------------------------------------------------------- * Force a write of a signature block to the backing store. */ void smp_sync_sign(const struct smp_signctx *ctx) { smp_msync(ctx->ss, SMP_SIGN_SPACE + ctx->ss->length); } /*-------------------------------------------------------------------- * Create and force a new signature to backing store */ static void smp_new_sign(const struct smp_sc *sc, struct smp_signctx *ctx, uint64_t off, const char *id) { smp_def_sign(sc, ctx, off, id); smp_reset_sign(ctx); smp_sync_sign(ctx); } /*-------------------------------------------------------------------- * Define a signature space by location, size and identifier */ static void smp_def_signspace(const struct smp_sc *sc, struct smp_signspace *spc, uint64_t off, uint64_t size, const char *id) { smp_def_sign(sc, &spc->ctx, off, id); spc->start = SIGN_DATA(&spc->ctx); spc->size = size - SMP_SIGN_SPACE; } /*-------------------------------------------------------------------- * Check that a signspace's signature space is good, leave state ready * for append */ int smp_chk_signspace(struct smp_signspace *spc) { return (smp_chk_sign(&spc->ctx)); } /*-------------------------------------------------------------------- * Append data to a signature space */ void smp_append_signspace(struct smp_signspace *spc, uint32_t len) { assert(len <= SIGNSPACE_FREE(spc)); smp_append_sign(&spc->ctx, SIGNSPACE_FRONT(spc), len); } /*-------------------------------------------------------------------- * Reset a signature space to empty, prepare for appending. */ void smp_reset_signspace(struct smp_signspace *spc) { smp_reset_sign(&spc->ctx); } /*-------------------------------------------------------------------- * Copy the contents of one signspace to another. Prepare for * appending. */ void smp_copy_signspace(struct smp_signspace *dst, const struct smp_signspace *src) { assert(SIGNSPACE_LEN(src) <= dst->size); smp_reset_signspace(dst); memcpy(SIGNSPACE_DATA(dst), SIGNSPACE_DATA(src), SIGNSPACE_LEN(src)); smp_append_signspace(dst, SIGNSPACE_LEN(src)); assert(SIGNSPACE_LEN(src) == SIGNSPACE_LEN(dst)); } /*-------------------------------------------------------------------- * Create a new signature space and force the signature to backing store. */ static void smp_new_signspace(const struct smp_sc *sc, struct smp_signspace *spc, uint64_t off, uint64_t size, const char *id) { smp_new_sign(sc, &spc->ctx, off, id); spc->start = SIGN_DATA(&spc->ctx); spc->size = size - SMP_SIGN_SPACE; } /*-------------------------------------------------------------------- * Force a write of a memory block (rounded to nearest pages) to * the backing store. */ static void smp_msync(const void *addr, size_t length) { uintptr_t start, end, pagesize; pagesize = getpagesize(); assert(pagesize > 0 && PWR2(pagesize)); start = RDN2((uintptr_t)addr, pagesize); end = RUP2((uintptr_t)addr + length, pagesize); assert(start < end); AZ(msync((void *)start, end - start, MS_SYNC)); } /*-------------------------------------------------------------------- * Initialize a Silo with a valid but empty structure. * * XXX: more intelligent sizing of things. 
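 * The fixed layout written below is: the identity block and its
 * signature at the start of the silo, then, from offset `granularity',
 * four 1MB regions (BAN 1, BAN 2, SEG 1, SEG 2), and finally the
 * object space running from SMP_SPC_STUFF to the end of the media.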
*/ void smp_newsilo(struct smp_sc *sc) { struct smp_ident *si; /* Choose a new random number */ AZ(VRND_RandomCrypto(&sc->unique, sizeof sc->unique)); smp_reset_sign(&sc->idn); si = sc->ident; memset(si, 0, sizeof *si); bstrcpy(si->ident, SMP_IDENT_STRING); si->byte_order = 0x12345678; si->size = sizeof *si; si->major_version = 2; si->unique = sc->unique; si->mediasize = sc->mediasize; si->granularity = sc->granularity; /* * Aim for cache-line-width */ si->align = sizeof(void*) * 2; sc->align = si->align; si->stuff[SMP_BAN1_STUFF] = sc->granularity; si->stuff[SMP_BAN2_STUFF] = si->stuff[SMP_BAN1_STUFF] + 1024*1024; si->stuff[SMP_SEG1_STUFF] = si->stuff[SMP_BAN2_STUFF] + 1024*1024; si->stuff[SMP_SEG2_STUFF] = si->stuff[SMP_SEG1_STUFF] + 1024*1024; si->stuff[SMP_SPC_STUFF] = si->stuff[SMP_SEG2_STUFF] + 1024*1024; si->stuff[SMP_END_STUFF] = si->mediasize; assert(si->stuff[SMP_SPC_STUFF] < si->stuff[SMP_END_STUFF]); smp_new_signspace(sc, &sc->ban1, si->stuff[SMP_BAN1_STUFF], smp_stuff_len(sc, SMP_BAN1_STUFF), "BAN 1"); smp_new_signspace(sc, &sc->ban2, si->stuff[SMP_BAN2_STUFF], smp_stuff_len(sc, SMP_BAN2_STUFF), "BAN 2"); smp_new_signspace(sc, &sc->seg1, si->stuff[SMP_SEG1_STUFF], smp_stuff_len(sc, SMP_SEG1_STUFF), "SEG 1"); smp_new_signspace(sc, &sc->seg2, si->stuff[SMP_SEG2_STUFF], smp_stuff_len(sc, SMP_SEG2_STUFF), "SEG 2"); smp_append_sign(&sc->idn, si, sizeof *si); smp_sync_sign(&sc->idn); } /*-------------------------------------------------------------------- * Check if a silo is valid. */ int smp_valid_silo(struct smp_sc *sc) { struct smp_ident *si; int i, j; assert(strlen(SMP_IDENT_STRING) < sizeof si->ident); i = smp_chk_sign(&sc->idn); if (i) return (i); si = sc->ident; if (strcmp(si->ident, SMP_IDENT_STRING)) return (12); if (si->byte_order != 0x12345678) return (13); if (si->size != sizeof *si) return (14); if (si->major_version != 2) return (15); if (si->mediasize != sc->mediasize) return (17); if (si->granularity != sc->granularity) return (18); if (si->align < sizeof(void*)) return (19); if (!PWR2(si->align)) return (20); sc->align = si->align; sc->unique = si->unique; /* XXX: Sanity check stuff[6] */ assert(si->stuff[SMP_BAN1_STUFF] > sizeof *si + VSHA256_LEN); assert(si->stuff[SMP_BAN2_STUFF] > si->stuff[SMP_BAN1_STUFF]); assert(si->stuff[SMP_SEG1_STUFF] > si->stuff[SMP_BAN2_STUFF]); assert(si->stuff[SMP_SEG2_STUFF] > si->stuff[SMP_SEG1_STUFF]); assert(si->stuff[SMP_SPC_STUFF] > si->stuff[SMP_SEG2_STUFF]); assert(si->stuff[SMP_END_STUFF] == sc->mediasize); assert(smp_stuff_len(sc, SMP_SEG1_STUFF) > 65536); assert(smp_stuff_len(sc, SMP_SEG1_STUFF) == smp_stuff_len(sc, SMP_SEG2_STUFF)); assert(smp_stuff_len(sc, SMP_BAN1_STUFF) > 65536); assert(smp_stuff_len(sc, SMP_BAN1_STUFF) == smp_stuff_len(sc, SMP_BAN2_STUFF)); smp_def_signspace(sc, &sc->ban1, si->stuff[SMP_BAN1_STUFF], smp_stuff_len(sc, SMP_BAN1_STUFF), "BAN 1"); smp_def_signspace(sc, &sc->ban2, si->stuff[SMP_BAN2_STUFF], smp_stuff_len(sc, SMP_BAN2_STUFF), "BAN 2"); smp_def_signspace(sc, &sc->seg1, si->stuff[SMP_SEG1_STUFF], smp_stuff_len(sc, SMP_SEG1_STUFF), "SEG 1"); smp_def_signspace(sc, &sc->seg2, si->stuff[SMP_SEG2_STUFF], smp_stuff_len(sc, SMP_SEG2_STUFF), "SEG 2"); /* We must have one valid BAN table */ i = smp_chk_signspace(&sc->ban1); j = smp_chk_signspace(&sc->ban2); if (i && j) return (100 + i * 10 + j); /* We must have one valid SEG table */ i = smp_chk_signspace(&sc->seg1); j = smp_chk_signspace(&sc->seg2); if (i && j) return (200 + i * 10 + j); return (0); } 
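/*
 * Editor's illustration -- not part of the original file.  A minimal
 * sketch of the signature scheme documented at the top of this file:
 * SHA256 over (1) the smp_sign header up to, but not including, the
 * length field, (2) the following `length' payload bytes, and (3) the
 * length field itself.  `sig_digest' is a hypothetical helper name;
 * the authoritative implementation is smp_chk_sign()/smp_append_sign()
 * above.  Kept under #if 0 so it cannot affect the build.
 */
#if 0
static void
sig_digest(const struct smp_sign *ss, unsigned char sign[VSHA256_LEN])
{
	struct VSHA256Context cx;
	const uint8_t *payload = (const uint8_t *)(ss + 1);

	VSHA256_Init(&cx);
	VSHA256_Update(&cx, ss, offsetof(struct smp_sign, length));
	VSHA256_Update(&cx, payload, ss->length);
	VSHA256_Update(&cx, &ss->length, sizeof ss->length);
	VSHA256_Final(sign, &cx);
	/* The digest itself is stored right after the payload bytes. */
}
#endif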
varnish-7.5.0/bin/varnishd/storage/storage_simple.c000066400000000000000000000441161457605730600224330ustar00rootroot00000000000000/*- * Copyright (c) 2007-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include "cache/cache_varnishd.h" #include "cache/cache_obj.h" #include "cache/cache_objhead.h" #include "storage/storage.h" #include "storage/storage_simple.h" #include "vtim.h" /* Flags for allocating memory in sml_stv_alloc */ #define LESS_MEM_ALLOCED_IS_OK 1 // marker pointer for sml_trimstore static void *trim_once = &trim_once; /*-------------------------------------------------------------------*/ static struct storage * objallocwithnuke(struct worker *, const struct stevedore *, ssize_t size, int flags); static struct storage * sml_stv_alloc(const struct stevedore *stv, ssize_t size, int flags) { struct storage *st; CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); AN(stv->sml_alloc); if (!(flags & LESS_MEM_ALLOCED_IS_OK)) { if (size > cache_param->fetch_maxchunksize) return (NULL); else return (stv->sml_alloc(stv, size)); } if (size > cache_param->fetch_maxchunksize) size = cache_param->fetch_maxchunksize; assert(size <= UINT_MAX); /* field limit in struct storage */ for (;;) { /* try to allocate from it */ assert(size > 0); st = stv->sml_alloc(stv, size); if (st != NULL) break; if (size <= cache_param->fetch_chunksize) break; size /= 2; } CHECK_OBJ_ORNULL(st, STORAGE_MAGIC); return (st); } static void sml_stv_free(const struct stevedore *stv, struct storage *st) { CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); CHECK_OBJ_NOTNULL(st, STORAGE_MAGIC); if (stv->sml_free != NULL) stv->sml_free(st); } /*-------------------------------------------------------------------- * This function is called by stevedores ->allocobj() method, which * very often will be SML_allocobj() below, to convert a slab * of storage into object which the stevedore can then register in its * internal state, before returning it to STV_NewObject(). * As you probably guessed: All this for persistence. 
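 * (SML_MkObject() itself allocates nothing: it lays a struct object
 * out at the caller-supplied address and wires it into oc->stobj.)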
*/ struct object * SML_MkObject(const struct stevedore *stv, struct objcore *oc, void *ptr) { struct object *o; CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); AN(stv->methods); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); assert(PAOK(ptr)); o = ptr; INIT_OBJ(o, OBJECT_MAGIC); VTAILQ_INIT(&o->list); oc->stobj->stevedore = stv; oc->stobj->priv = o; oc->stobj->priv2 = 0; return (o); } /*-------------------------------------------------------------------- * This is the default ->allocobj() which all stevedores who do not * implement persistent storage can rely on. */ int v_matchproto_(storage_allocobj_f) SML_allocobj(struct worker *wrk, const struct stevedore *stv, struct objcore *oc, unsigned wsl) { struct object *o; struct storage *st = NULL; unsigned ltot; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); AN(stv->sml_alloc); ltot = sizeof(*o) + PRNDUP(wsl); do { st = stv->sml_alloc(stv, ltot); if (st != NULL && st->space < ltot) { stv->sml_free(st); st = NULL; } } while (st == NULL && LRU_NukeOne(wrk, stv->lru)); if (st == NULL) return (0); CHECK_OBJ_NOTNULL(st, STORAGE_MAGIC); o = SML_MkObject(stv, oc, st->ptr); CHECK_OBJ_NOTNULL(o, OBJECT_MAGIC); st->len = sizeof(*o); o->objstore = st; return (1); } void * v_matchproto_(storage_allocbuf_t) SML_AllocBuf(struct worker *wrk, const struct stevedore *stv, size_t size, uintptr_t *ppriv) { struct storage *st; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); AN(ppriv); if (size > UINT_MAX) return (NULL); st = objallocwithnuke(wrk, stv, size, 0); if (st == NULL) return (NULL); assert(st->space >= size); st->len = size; *ppriv = (uintptr_t)st; return (st->ptr); } void v_matchproto_(storage_freebuf_t) SML_FreeBuf(struct worker *wrk, const struct stevedore *stv, uintptr_t priv) { struct storage *st; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); CAST_OBJ_NOTNULL(st, (void *)priv, STORAGE_MAGIC); sml_stv_free(stv, st); } /*--------------------------------------------------------------------- */ static struct object * sml_getobj(struct worker *wrk, struct objcore *oc) { const struct stevedore *stv; struct object *o; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); stv = oc->stobj->stevedore; CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); if (stv->sml_getobj != NULL) return (stv->sml_getobj(wrk, oc)); if (oc->stobj->priv == NULL) return (NULL); CAST_OBJ_NOTNULL(o, oc->stobj->priv, OBJECT_MAGIC); return (o); } static void v_matchproto_(objslim_f) sml_slim(struct worker *wrk, struct objcore *oc) { const struct stevedore *stv; struct object *o; struct storage *st, *stn; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); stv = oc->stobj->stevedore; CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); o = sml_getobj(wrk, oc); CHECK_OBJ_NOTNULL(o, OBJECT_MAGIC); #define OBJ_AUXATTR(U, l) \ do { \ if (o->aa_##l != NULL) { \ sml_stv_free(stv, o->aa_##l); \ o->aa_##l = NULL; \ } \ } while (0); #include "tbl/obj_attr.h" VTAILQ_FOREACH_SAFE(st, &o->list, list, stn) { CHECK_OBJ_NOTNULL(st, STORAGE_MAGIC); VTAILQ_REMOVE(&o->list, st, list); sml_stv_free(stv, st); } } static void sml_bocfini(const struct stevedore *stv, struct boc *boc) { struct storage *st; CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); CHECK_OBJ_NOTNULL(boc, BOC_MAGIC); if (boc->stevedore_priv == NULL || boc->stevedore_priv == trim_once) return; /* Free any leftovers from Trim */ TAKE_OBJ_NOTNULL(st, &boc->stevedore_priv, STORAGE_MAGIC); sml_stv_free(stv, st); } /* * called in two cases: * - 
oc->boc == NULL: cache object on LRU freed * - oc->boc != NULL: cache object replaced for backend error */ static void v_matchproto_(objfree_f) sml_objfree(struct worker *wrk, struct objcore *oc) { const struct stevedore *stv; struct storage *st; struct object *o; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); stv = oc->stobj->stevedore; CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); CAST_OBJ_NOTNULL(o, oc->stobj->priv, OBJECT_MAGIC); sml_slim(wrk, oc); st = o->objstore; CHECK_OBJ_NOTNULL(st, STORAGE_MAGIC); FINI_OBJ(o); if (oc->boc != NULL) sml_bocfini(stv, oc->boc); else if (stv->lru != NULL) LRU_Remove(oc); sml_stv_free(stv, st); memset(oc->stobj, 0, sizeof oc->stobj); wrk->stats->n_object--; } static int v_matchproto_(objiterator_f) sml_iterator(struct worker *wrk, struct objcore *oc, void *priv, objiterate_f *func, int final) { struct boc *boc; enum boc_state_e state; struct object *obj; struct storage *st; struct storage *checkpoint = NULL; const struct stevedore *stv; ssize_t checkpoint_len = 0; ssize_t len = 0; int ret = 0, ret2; ssize_t ol; ssize_t nl; ssize_t sl; void *p; ssize_t l; unsigned u; obj = sml_getobj(wrk, oc); CHECK_OBJ_NOTNULL(obj, OBJECT_MAGIC); stv = oc->stobj->stevedore; CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); boc = HSH_RefBoc(oc); if (boc == NULL) { VTAILQ_FOREACH_REVERSE_SAFE( st, &obj->list, storagehead, list, checkpoint) { u = 0; if (VTAILQ_PREV(st, storagehead, list) == NULL) u |= OBJ_ITER_END; if (final) u |= OBJ_ITER_FLUSH; if (ret == 0 && st->len > 0) ret = func(priv, u, st->ptr, st->len); if (final) { VTAILQ_REMOVE(&obj->list, st, list); sml_stv_free(stv, st); } else if (ret) break; } return (ret); } p = NULL; l = 0; u = 0; if (boc->fetched_so_far == 0) { ret = func(priv, OBJ_ITER_FLUSH, NULL, 0); if (ret) return (ret); } while (1) { ol = len; nl = ObjWaitExtend(wrk, oc, ol); if (boc->state == BOS_FAILED) { ret = -1; break; } if (nl == ol) { if (boc->state == BOS_FINISHED) break; continue; } Lck_Lock(&boc->mtx); AZ(VTAILQ_EMPTY(&obj->list)); if (checkpoint == NULL) { st = VTAILQ_LAST(&obj->list, storagehead); sl = 0; } else { st = checkpoint; sl = checkpoint_len; ol -= checkpoint_len; } while (st != NULL) { if (st->len > ol) { p = st->ptr + ol; l = st->len - ol; len += l; break; } ol -= st->len; assert(ol >= 0); nl -= st->len; assert(nl > 0); sl += st->len; st = VTAILQ_PREV(st, storagehead, list); if (VTAILQ_PREV(st, storagehead, list) != NULL) { if (final && checkpoint != NULL) { VTAILQ_REMOVE(&obj->list, checkpoint, list); sml_stv_free(stv, checkpoint); } checkpoint = st; checkpoint_len = sl; } } CHECK_OBJ_NOTNULL(obj, OBJECT_MAGIC); CHECK_OBJ_NOTNULL(st, STORAGE_MAGIC); st = VTAILQ_PREV(st, storagehead, list); if (st != NULL && st->len == 0) st = NULL; state = boc->state; Lck_Unlock(&boc->mtx); assert(l > 0 || state == BOS_FINISHED); u = 0; if (st == NULL || final) u |= OBJ_ITER_FLUSH; if (st == NULL && state == BOS_FINISHED) u |= OBJ_ITER_END; ret = func(priv, u, p, l); if (ret) break; } HSH_DerefBoc(wrk, oc); if ((u & OBJ_ITER_END) == 0) { ret2 = func(priv, OBJ_ITER_END, NULL, 0); if (ret == 0) ret = ret2; } return (ret); } /*-------------------------------------------------------------------- */ static struct storage * objallocwithnuke(struct worker *wrk, const struct stevedore *stv, ssize_t size, int flags) { struct storage *st = NULL; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); if (size > cache_param->fetch_maxchunksize) { if (!(flags & LESS_MEM_ALLOCED_IS_OK)) return (NULL); size = 
cache_param->fetch_maxchunksize; } assert(size <= UINT_MAX); /* field limit in struct storage */ do { /* try to allocate from it */ st = sml_stv_alloc(stv, size, flags); if (st != NULL) break; /* no luck; try to free some space and keep trying */ if (stv->lru == NULL) break; } while (LRU_NukeOne(wrk, stv->lru)); CHECK_OBJ_ORNULL(st, STORAGE_MAGIC); return (st); } static int v_matchproto_(objgetspace_f) sml_getspace(struct worker *wrk, struct objcore *oc, ssize_t *sz, uint8_t **ptr) { struct object *o; struct storage *st; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); AN(sz); AN(ptr); if (*sz == 0) *sz = cache_param->fetch_chunksize; assert(*sz > 0); o = sml_getobj(wrk, oc); CHECK_OBJ_NOTNULL(o, OBJECT_MAGIC); CHECK_OBJ_NOTNULL(oc->boc, BOC_MAGIC); st = VTAILQ_FIRST(&o->list); if (st != NULL && st->len < st->space) { *sz = st->space - st->len; *ptr = st->ptr + st->len; assert (*sz > 0); return (1); } st = objallocwithnuke(wrk, oc->stobj->stevedore, *sz, LESS_MEM_ALLOCED_IS_OK); if (st == NULL) return (0); CHECK_OBJ_NOTNULL(oc->boc, BOC_MAGIC); Lck_Lock(&oc->boc->mtx); VTAILQ_INSERT_HEAD(&o->list, st, list); Lck_Unlock(&oc->boc->mtx); *sz = st->space - st->len; assert (*sz > 0); *ptr = st->ptr + st->len; return (1); } static void v_matchproto_(objextend_f) sml_extend(struct worker *wrk, struct objcore *oc, ssize_t l) { struct object *o; struct storage *st; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); assert(l > 0); o = sml_getobj(wrk, oc); CHECK_OBJ_NOTNULL(o, OBJECT_MAGIC); st = VTAILQ_FIRST(&o->list); CHECK_OBJ_NOTNULL(st, STORAGE_MAGIC); assert(st->len + l <= st->space); st->len += l; } static void v_matchproto_(objtrimstore_f) sml_trimstore(struct worker *wrk, struct objcore *oc) { const struct stevedore *stv; struct storage *st, *st1; struct object *o; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(oc->boc, BOC_MAGIC); stv = oc->stobj->stevedore; CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); if (oc->boc->stevedore_priv != NULL) WRONG("sml_trimstore already called"); oc->boc->stevedore_priv = trim_once; if (stv->sml_free == NULL) return; o = sml_getobj(wrk, oc); CHECK_OBJ_NOTNULL(o, OBJECT_MAGIC); st = VTAILQ_FIRST(&o->list); if (st == NULL) return; if (st->len == 0) { Lck_Lock(&oc->boc->mtx); VTAILQ_REMOVE(&o->list, st, list); Lck_Unlock(&oc->boc->mtx); sml_stv_free(stv, st); return; } if (st->space - st->len < 512) return; st1 = sml_stv_alloc(stv, st->len, 0); if (st1 == NULL) return; assert(st1->space >= st->len); memcpy(st1->ptr, st->ptr, st->len); st1->len = st->len; Lck_Lock(&oc->boc->mtx); VTAILQ_REMOVE(&o->list, st, list); VTAILQ_INSERT_HEAD(&o->list, st1, list); Lck_Unlock(&oc->boc->mtx); /* sml_bocdone frees this */ oc->boc->stevedore_priv = st; } static void v_matchproto_(objbocdone_f) sml_bocdone(struct worker *wrk, struct objcore *oc, struct boc *boc) { const struct stevedore *stv; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); CHECK_OBJ_NOTNULL(oc, OBJCORE_MAGIC); CHECK_OBJ_NOTNULL(boc, BOC_MAGIC); stv = oc->stobj->stevedore; CHECK_OBJ_NOTNULL(stv, STEVEDORE_MAGIC); sml_bocfini(stv, boc); if (stv->lru != NULL) { if (isnan(wrk->lastused)) wrk->lastused = VTIM_real(); LRU_Add(oc, wrk->lastused); // approx timestamp is OK } } static const void * v_matchproto_(objgetattr_f) sml_getattr(struct worker *wrk, struct objcore *oc, enum obj_attr attr, ssize_t *len) { struct object *o; ssize_t dummy; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); if (len == NULL) len = &dummy; o = sml_getobj(wrk, oc); CHECK_OBJ_NOTNULL(o, OBJECT_MAGIC); switch (attr) { /* Fixed size attributes 
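 * (the case labels below are generated by including tbl/obj_attr.h
 * with the OBJ_FIXATTR/OBJ_VARATTR/OBJ_AUXATTR macros defined in turn)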
*/ #define OBJ_FIXATTR(U, l, s) \ case OA_##U: \ *len = sizeof o->fa_##l; \ return (o->fa_##l); #include "tbl/obj_attr.h" /* Variable size attributes */ #define OBJ_VARATTR(U, l) \ case OA_##U: \ if (o->va_##l == NULL) \ return (NULL); \ *len = o->va_##l##_len; \ return (o->va_##l); #include "tbl/obj_attr.h" /* Auxiliary attributes */ #define OBJ_AUXATTR(U, l) \ case OA_##U: \ if (o->aa_##l == NULL) \ return (NULL); \ CHECK_OBJ_NOTNULL(o->aa_##l, STORAGE_MAGIC); \ *len = o->aa_##l->len; \ return (o->aa_##l->ptr); #include "tbl/obj_attr.h" default: break; } WRONG("Unsupported OBJ_ATTR"); } static void * v_matchproto_(objsetattr_f) sml_setattr(struct worker *wrk, struct objcore *oc, enum obj_attr attr, ssize_t len, const void *ptr) { struct object *o; void *retval = NULL; struct storage *st; CHECK_OBJ_NOTNULL(wrk, WORKER_MAGIC); o = sml_getobj(wrk, oc); CHECK_OBJ_NOTNULL(o, OBJECT_MAGIC); st = o->objstore; switch (attr) { /* Fixed size attributes */ #define OBJ_FIXATTR(U, l, s) \ case OA_##U: \ assert(len == sizeof o->fa_##l); \ retval = o->fa_##l; \ break; #include "tbl/obj_attr.h" /* Variable size attributes */ #define OBJ_VARATTR(U, l) \ case OA_##U: \ if (o->va_##l##_len > 0) { \ AN(o->va_##l); \ assert(len == o->va_##l##_len); \ retval = o->va_##l; \ } else if (len > 0) { \ assert(len <= UINT_MAX); \ assert(st->len + len <= st->space); \ o->va_##l = st->ptr + st->len; \ st->len += len; \ o->va_##l##_len = len; \ retval = o->va_##l; \ } \ break; #include "tbl/obj_attr.h" /* Auxiliary attributes */ #define OBJ_AUXATTR(U, l) \ case OA_##U: \ if (o->aa_##l != NULL) { \ CHECK_OBJ_NOTNULL(o->aa_##l, STORAGE_MAGIC); \ assert(len == o->aa_##l->len); \ retval = o->aa_##l->ptr; \ break; \ } \ if (len == 0) \ break; \ o->aa_##l = objallocwithnuke(wrk, oc->stobj->stevedore, \ len, 0); \ if (o->aa_##l == NULL) \ break; \ CHECK_OBJ_NOTNULL(o->aa_##l, STORAGE_MAGIC); \ assert(len <= o->aa_##l->space); \ o->aa_##l->len = len; \ retval = o->aa_##l->ptr; \ break; #include "tbl/obj_attr.h" default: WRONG("Unsupported OBJ_ATTR"); break; } if (retval != NULL && ptr != NULL) memcpy(retval, ptr, len); return (retval); } const struct obj_methods SML_methods = { .objfree = sml_objfree, .objiterator = sml_iterator, .objgetspace = sml_getspace, .objextend = sml_extend, .objtrimstore = sml_trimstore, .objbocdone = sml_bocdone, .objslim = sml_slim, .objgetattr = sml_getattr, .objsetattr = sml_setattr, .objtouch = LRU_Touch, }; static void sml_panic_st(struct vsb *vsb, const char *hd, const struct storage *st) { VSB_printf(vsb, "%s = %p {priv=%p, ptr=%p, len=%u, space=%u},\n", hd, st, st->priv, st->ptr, st->len, st->space); } void SML_panic(struct vsb *vsb, const struct objcore *oc) { struct object *o; struct storage *st; VSB_printf(vsb, "Simple = %p,\n", oc->stobj->priv); if (oc->stobj->priv == NULL) return; CAST_OBJ_NOTNULL(o, oc->stobj->priv, OBJECT_MAGIC); sml_panic_st(vsb, "Obj", o->objstore); #define OBJ_FIXATTR(U, l, sz) \ VSB_printf(vsb, "%s = ", #U); \ VSB_quote(vsb, (const void*)o->fa_##l, sz, VSB_QUOTE_HEX); \ VSB_printf(vsb, ",\n"); #define OBJ_VARATTR(U, l) \ VSB_printf(vsb, "%s = {len=%u, ptr=%p},\n", \ #U, o->va_##l##_len, o->va_##l); #define OBJ_AUXATTR(U, l) \ do { \ if (o->aa_##l != NULL) sml_panic_st(vsb, #U, o->aa_##l);\ } while(0); #include "tbl/obj_attr.h" VTAILQ_FOREACH(st, &o->list, list) { sml_panic_st(vsb, "Body", st); } } varnish-7.5.0/bin/varnishd/storage/storage_simple.h000066400000000000000000000055141457605730600224370ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang 
AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * SML is a set of methods for simple stevedores which just do simple * memory allocation and leave all the high-level stuff to SML. * */ /* Storage -----------------------------------------------------------*/ struct storage { unsigned magic; #define STORAGE_MAGIC 0x1a4e51c0 VTAILQ_ENTRY(storage) list; void *priv; unsigned char *ptr; unsigned len; unsigned space; }; /* Object ------------------------------------------------------------*/ VTAILQ_HEAD(storagehead, storage); struct object { unsigned magic; #define OBJECT_MAGIC 0x32851d42 struct storage *objstore; /* Fixed size attributes */ #define OBJ_FIXATTR(U, l, s) \ uint8_t fa_##l[s]; #include "tbl/obj_attr.h" /* Variable size attributes */ #define OBJ_VARATTR(U, l) \ uint8_t *va_##l; #include "tbl/obj_attr.h" #define OBJ_VARATTR(U, l) \ unsigned va_##l##_len; #include "tbl/obj_attr.h" /* Auxiliary attributes */ #define OBJ_AUXATTR(U, l) \ struct storage *aa_##l; #include "tbl/obj_attr.h" struct storagehead list; }; extern const struct obj_methods SML_methods; struct object *SML_MkObject(const struct stevedore *, struct objcore *, void *ptr); void *SML_AllocBuf(struct worker *, const struct stevedore *, size_t, uintptr_t *); void SML_FreeBuf(struct worker *, const struct stevedore *, uintptr_t); storage_allocobj_f SML_allocobj; storage_panic_f SML_panic; varnish-7.5.0/bin/varnishd/storage/storage_umem.c000066400000000000000000000273651457605730600221140ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * Copyright 2017 UPLEX - Nils Goroll Systemoptimierung * All rights reserved. * * Authors: Poul-Henning Kamp * Nils Goroll * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Storage method based on libumem */ #include "config.h" #if defined(HAVE_UMEM_H) #include "cache/cache_varnishd.h" #include #include #include #include #include #include #include "storage/storage.h" #include "storage/storage_simple.h" #include "vnum.h" #include "common/heritage.h" #include "VSC_smu.h" struct smu_sc { unsigned magic; #define SMU_SC_MAGIC 0x7695f68e struct lock smu_mtx; VCL_BYTES smu_max; VCL_BYTES smu_alloc; struct VSC_smu *stats; umem_cache_t *smu_cache; }; struct smu { unsigned magic; #define SMU_MAGIC 0x3773300c struct storage s; size_t sz; struct smu_sc *sc; }; /* * We only want the umem slab allocator for cache storage, not also as a * substitute for malloc and friends. So we don't link with libumem, but * use dlopen/dlsym to get the slab allocator interface into function * pointers. 
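 * The typedefs below mirror the libumem entry points we need; their
 * availability is verified in smu_init() in the management process and
 * the pointers are resolved for real in smu_open_init() in the child.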
*/ typedef void * (*umem_alloc_f)(size_t size, int flags); typedef void (*umem_free_f)(void *buf, size_t size); typedef umem_cache_t * (*umem_cache_create_f)(char *debug_name, size_t bufsize, size_t align, umem_constructor_t *constructor, umem_destructor_t *destructor, umem_reclaim_t *reclaim, void *callback_data, vmem_t *source, int cflags); typedef void (*umem_cache_destroy_f)(umem_cache_t *cache); typedef void * (*umem_cache_alloc_f)(umem_cache_t *cache, int flags); typedef void (*umem_cache_free_f)(umem_cache_t *cache, void *buffer); static void *libumem_hndl = NULL; static umem_alloc_f umem_allocf = NULL; static umem_free_f umem_freef = NULL; static umem_cache_create_f umem_cache_createf = NULL; static umem_cache_destroy_f umem_cache_destroyf = NULL; static umem_cache_alloc_f umem_cache_allocf = NULL; static umem_cache_free_f umem_cache_freef = NULL; static const char * const def_umem_options = "perthread_cache=0,backend=mmap"; static const char * const env_umem_options = "UMEM_OPTIONS"; /* init required per cache get: smu->sz = size smu->s.ptr; smu->s.space = size */ static inline void smu_smu_init(struct smu *smu, struct smu_sc *sc) { INIT_OBJ(smu, SMU_MAGIC); smu->s.magic = STORAGE_MAGIC; smu->s.priv = smu; smu->sc = sc; } static int v_matchproto_(umem_constructor_t) smu_smu_constructor(void *buffer, void *callback_data, int flags) { struct smu *smu = buffer; struct smu_sc *sc; (void) flags; CAST_OBJ_NOTNULL(sc, callback_data, SMU_SC_MAGIC); smu_smu_init(smu, sc); return (0); } static void v_matchproto_(umem_destructor_t) smu_smu_destructor(void *buffer, void *callback_data) { struct smu *smu; struct smu_sc *sc; CAST_OBJ_NOTNULL(smu, buffer, SMU_MAGIC); CAST_OBJ_NOTNULL(sc, callback_data, SMU_SC_MAGIC); CHECK_OBJ_NOTNULL(&(smu->s), STORAGE_MAGIC); assert(smu->s.priv == smu); assert(smu->sc == sc); } static struct VSC_lck *lck_smu; static struct storage * v_matchproto_(sml_alloc_f) smu_alloc(const struct stevedore *st, size_t size) { struct smu_sc *smu_sc; struct smu *smu = NULL; void *p; CAST_OBJ_NOTNULL(smu_sc, st->priv, SMU_SC_MAGIC); Lck_Lock(&smu_sc->smu_mtx); smu_sc->stats->c_req++; if (smu_sc->smu_alloc + (int64_t)size > smu_sc->smu_max) { smu_sc->stats->c_fail++; size = 0; } else { smu_sc->smu_alloc += size; smu_sc->stats->c_bytes += size; smu_sc->stats->g_alloc++; smu_sc->stats->g_bytes += size; if (smu_sc->smu_max != VRT_INTEGER_MAX) smu_sc->stats->g_space -= size; } Lck_Unlock(&smu_sc->smu_mtx); if (size == 0) return (NULL); /* * Do not collaps the smu allocation with smu->s.ptr: it is not * a good idea. Not only would it make ->trim impossible, * performance-wise it would be a catastropy with chunksized * allocations growing another full page, just to accommodate the smu. */ p = umem_allocf(size, UMEM_DEFAULT); if (p != NULL) { AN(smu_sc->smu_cache); smu = umem_cache_allocf(smu_sc->smu_cache, UMEM_DEFAULT); if (smu != NULL) smu->s.ptr = p; else umem_freef(p, size); } if (smu == NULL) { Lck_Lock(&smu_sc->smu_mtx); /* * XXX: Not nice to have counters go backwards, but we do * XXX: Not want to pick up the lock twice just for stats. 
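 * (on allocation failure, the accounting that was done optimistically
 * above is rolled back here in a single locked section)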
*/ smu_sc->stats->c_fail++; smu_sc->smu_alloc -= size; smu_sc->stats->c_bytes -= size; smu_sc->stats->g_alloc--; smu_sc->stats->g_bytes -= size; if (smu_sc->smu_max != VRT_INTEGER_MAX) smu_sc->stats->g_space += size; Lck_Unlock(&smu_sc->smu_mtx); return (NULL); } smu->sz = size; smu->s.space = size; #ifndef BUG3210 assert(smu->sc == smu_sc); assert(smu->s.priv == smu); AZ(smu->s.len); #endif return (&smu->s); } static void v_matchproto_(sml_free_f) smu_free(struct storage *s) { struct smu *smu; struct smu_sc *sc; CHECK_OBJ_NOTNULL(s, STORAGE_MAGIC); CAST_OBJ_NOTNULL(smu, s->priv, SMU_MAGIC); CAST_OBJ_NOTNULL(sc, smu->sc, SMU_SC_MAGIC); Lck_Lock(&sc->smu_mtx); sc->smu_alloc -= smu->sz; sc->stats->g_alloc--; sc->stats->g_bytes -= smu->sz; sc->stats->c_freed += smu->sz; if (sc->smu_max != VRT_INTEGER_MAX) sc->stats->g_space += smu->sz; Lck_Unlock(&sc->smu_mtx); umem_freef(smu->s.ptr, smu->sz); smu_smu_init(smu, sc); AN(sc->smu_cache); umem_cache_freef(sc->smu_cache, smu); } static VCL_BYTES v_matchproto_(stv_var_used_space) smu_used_space(const struct stevedore *st) { struct smu_sc *smu_sc; CAST_OBJ_NOTNULL(smu_sc, st->priv, SMU_SC_MAGIC); return (smu_sc->smu_alloc); } static VCL_BYTES v_matchproto_(stv_var_free_space) smu_free_space(const struct stevedore *st) { struct smu_sc *smu_sc; CAST_OBJ_NOTNULL(smu_sc, st->priv, SMU_SC_MAGIC); return (smu_sc->smu_max - smu_sc->smu_alloc); } static void smu_umem_loaded_warn(void) { const char *e; static int warned = 0; if (warned++) return; fprintf(stderr, "notice:\tlibumem was already found to be loaded\n" "\tand will likely be used for all allocations\n"); e = getenv(env_umem_options); if (e == NULL || ! strstr(e, def_umem_options)) fprintf(stderr, "\tit is recommended to set %s=%s " "before starting varnish\n", env_umem_options, def_umem_options); } static int smu_umem_loaded(void) { void *h = NULL; h = dlopen("libumem.so", RTLD_NOLOAD); if (h) { AZ(dlclose(h)); return (1); } h = dlsym(RTLD_DEFAULT, "umem_alloc"); if (h) return (1); return (0); } static void v_matchproto_(storage_init_f) smu_init(struct stevedore *parent, int ac, char * const *av) { static int inited = 0; const char *e; uintmax_t u; struct smu_sc *sc; ALLOC_OBJ(sc, SMU_SC_MAGIC); AN(sc); sc->smu_max = VRT_INTEGER_MAX; assert(sc->smu_max == VRT_INTEGER_MAX); parent->priv = sc; AZ(av[ac]); if (ac > 1) ARGV_ERR("(-sumem) too many arguments\n"); if (ac == 1 && *av[0] != '\0') { e = VNUM_2bytes(av[0], &u, 0); if (e != NULL) ARGV_ERR("(-sumem) size \"%s\": %s\n", av[0], e); if ((u != (uintmax_t)(size_t)u)) ARGV_ERR("(-sumem) size \"%s\": too big\n", av[0]); if (u < 1024*1024) ARGV_ERR("(-sumem) size \"%s\": too small, " "did you forget to specify M or G?\n", av[0]); sc->smu_max = u; } if (inited++) return; if (smu_umem_loaded()) smu_umem_loaded_warn(); else AZ(setenv(env_umem_options, def_umem_options, 0)); /* Check if these load in the management process. 
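 * (dlmopen() into a fresh link-map namespace is used here only to
 * verify that the symbols exist; the handle is closed again right
 * after, and the child process resolves the real function pointers
 * in smu_open_init().)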
*/ (void) dlerror(); libumem_hndl = dlmopen(LM_ID_NEWLM, "libumem.so", RTLD_LAZY); if (libumem_hndl == NULL) ARGV_ERR("(-sumem) cannot open libumem.so: %s", dlerror()); #define DLSYM_UMEM(fptr,sym) \ do { \ (void) dlerror(); \ if (dlsym(libumem_hndl, #sym) == NULL) \ ARGV_ERR("(-sumem) cannot find symbol " \ #sym ": %s", \ dlerror()); \ fptr = NULL; \ } while(0) DLSYM_UMEM(umem_allocf, umem_alloc); DLSYM_UMEM(umem_freef, umem_free); DLSYM_UMEM(umem_cache_createf, umem_cache_create); DLSYM_UMEM(umem_cache_destroyf, umem_cache_destroy); DLSYM_UMEM(umem_cache_allocf, umem_cache_alloc); DLSYM_UMEM(umem_cache_freef, umem_cache_free); #undef DLSYM_UMEM AZ(dlclose(libumem_hndl)); libumem_hndl = NULL; } /* * Load the symbols for use in the child process, assert if they fail to load. */ static void smu_open_init(void) { static int inited = 0; if (inited++) { AN(libumem_hndl); AN(umem_allocf); return; } if (smu_umem_loaded()) smu_umem_loaded_warn(); else AN(getenv(env_umem_options)); AZ(libumem_hndl); libumem_hndl = dlopen("libumem.so", RTLD_LAZY); AN(libumem_hndl); #define DLSYM_UMEM(fptr,sym) \ do { \ fptr = (sym ## _f) dlsym(libumem_hndl, #sym); \ AN(fptr); \ } while(0) DLSYM_UMEM(umem_allocf, umem_alloc); DLSYM_UMEM(umem_freef, umem_free); DLSYM_UMEM(umem_cache_createf, umem_cache_create); DLSYM_UMEM(umem_cache_destroyf, umem_cache_destroy); DLSYM_UMEM(umem_cache_allocf, umem_cache_alloc); DLSYM_UMEM(umem_cache_freef, umem_cache_free); #undef DLSYM_UMEM } static void v_matchproto_(storage_open_f) smu_open(struct stevedore *st) { struct smu_sc *smu_sc; char ident[strlen(st->ident) + 1]; ASSERT_CLI(); st->lru = LRU_Alloc(); if (lck_smu == NULL) lck_smu = Lck_CreateClass(NULL, "smu"); CAST_OBJ_NOTNULL(smu_sc, st->priv, SMU_SC_MAGIC); Lck_New(&smu_sc->smu_mtx, lck_smu); smu_sc->stats = VSC_smu_New(NULL, NULL, st->ident); if (smu_sc->smu_max != VRT_INTEGER_MAX) smu_sc->stats->g_space = smu_sc->smu_max; smu_open_init(); bstrcpy(ident, st->ident); smu_sc->smu_cache = umem_cache_createf(ident, sizeof(struct smu), 0, // align smu_smu_constructor, smu_smu_destructor, NULL, // reclaim smu_sc, // callback_data NULL, // source 0 // cflags ); AN(smu_sc->smu_cache); } static void v_matchproto_(storage_close_f) smu_close(const struct stevedore *st, int warn) { struct smu_sc *smu_sc; ASSERT_CLI(); CAST_OBJ_NOTNULL(smu_sc, st->priv, SMU_SC_MAGIC); if (warn) return; #ifdef WORKAROUND_3190 /* see ticket 3190 for explanation */ umem_cache_destroyf(smu_sc->smu_cache); smu_sc->smu_cache = NULL; #endif /* XXX TODO? 
- LRU_Free - Lck Destroy */ } const struct stevedore smu_stevedore = { .magic = STEVEDORE_MAGIC, .name = "umem", .init = smu_init, .open = smu_open, .close = smu_close, .sml_alloc = smu_alloc, .sml_free = smu_free, .allocobj = SML_allocobj, .panic = SML_panic, .methods = &SML_methods, .var_free_space = smu_free_space, .var_used_space = smu_used_space, .allocbuf = SML_AllocBuf, .freebuf = SML_FreeBuf, }; #endif /* HAVE_UMEM_H */ varnish-7.5.0/bin/varnishd/vclflint.lnt000066400000000000000000000012161457605730600201400ustar00rootroot00000000000000// Copyright (c) 2007-2017 Varnish Software AS // SPDX-License-Identifier: BSD-2-Clause // See LICENSE file for full text of license // Flexelint configuration file for VCL compiler output // -esym(763, sess) // Redundant declaration for symbol 'sess' -esym(763, cli) // Redundant declaration for symbol 'cli' -esym(750, VCL_RET_*) // not ref -esym(750, VCL_MET_*) // not ref -esym(753, vsb) // not ref -esym(750, vrt_*) // not ref -esym(753, vrt_*) // not ref -esym(750, VRT_*) // not ref // Harmless -e752 // local declarator [...] not referenced -e754 // local structure member [...] not referenced -e526 // Symbol [...] not defined varnish-7.5.0/bin/varnishd/vclflint.sh000077500000000000000000000004641457605730600177640ustar00rootroot00000000000000#!/bin/sh # # Run flexelint on the VCL output LIBS="-p vmod_path=/home/phk/Varnish/trunk/varnish-cache/vmod/.libs" if [ "x$1" = "x" ] ; then ./varnishd $LIBS -C -b localhost > /tmp/_.c elif [ -f $1 ] ; then ./varnishd $LIBS -C -f $1 > /tmp/_.c else echo "usage!" 1>&2 fi flexelint vclflint.lnt /tmp/_.c varnish-7.5.0/bin/varnishd/waiter/000077500000000000000000000000001457605730600170735ustar00rootroot00000000000000varnish-7.5.0/bin/varnishd/waiter/cache_waiter.c000066400000000000000000000112101457605730600216500ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* */ #include "config.h" #include "cache/cache_varnishd.h" #include #include "vbh.h" #include "waiter/waiter.h" #include "waiter/waiter_priv.h" #include "waiter/mgt_waiter.h" static int v_matchproto_(vbh_cmp_t) waited_cmp(void *priv, const void *a, const void *b) { const struct waiter *ww; const struct waited *aa, *bb; CAST_OBJ_NOTNULL(ww, priv, WAITER_MAGIC); CAST_OBJ_NOTNULL(aa, a, WAITED_MAGIC); CAST_OBJ_NOTNULL(bb, b, WAITED_MAGIC); return (Wait_When(aa) < Wait_When(bb)); } static void v_matchproto_(vbh_update_t) waited_update(void *priv, void *p, unsigned u) { struct waited *pp; (void)priv; CAST_OBJ_NOTNULL(pp, p, WAITED_MAGIC); pp->idx = u; } /**********************************************************************/ void Wait_Call(const struct waiter *w, struct waited *wp, enum wait_event ev, double now) { CHECK_OBJ_NOTNULL(w, WAITER_MAGIC); CHECK_OBJ_NOTNULL(wp, WAITED_MAGIC); AN(wp->func); assert(wp->idx == VBH_NOIDX); wp->func(wp, ev, now); } /**********************************************************************/ void Wait_HeapInsert(const struct waiter *w, struct waited *wp) { CHECK_OBJ_NOTNULL(w, WAITER_MAGIC); CHECK_OBJ_NOTNULL(wp, WAITED_MAGIC); assert(wp->idx == VBH_NOIDX); VBH_insert(w->heap, wp); } /* * XXX: wp is const because otherwise FlexeLint complains. However, *wp * XXX: will actually change as a result of calling this function, via * XXX: the pointer stored in the bin-heap. I can see how this const * XXX: could maybe confuse a compilers optimizer, but I do not expect * XXX: any harm to come from it. Caveat Emptor. */ int Wait_HeapDelete(const struct waiter *w, const struct waited *wp) { CHECK_OBJ_NOTNULL(w, WAITER_MAGIC); CHECK_OBJ_NOTNULL(wp, WAITED_MAGIC); if (wp->idx == VBH_NOIDX) return (0); VBH_delete(w->heap, wp->idx); return (1); } double Wait_HeapDue(const struct waiter *w, struct waited **wpp) { struct waited *wp; wp = VBH_root(w->heap); CHECK_OBJ_ORNULL(wp, WAITED_MAGIC); if (wp == NULL) { if (wpp != NULL) *wpp = NULL; return (0); } if (wpp != NULL) *wpp = wp; return (Wait_When(wp)); } /**********************************************************************/ int Wait_Enter(const struct waiter *w, struct waited *wp) { CHECK_OBJ_NOTNULL(w, WAITER_MAGIC); CHECK_OBJ_NOTNULL(wp, WAITED_MAGIC); assert(wp->fd > 0); // stdin never comes here AN(wp->func); wp->idx = VBH_NOIDX; return (w->impl->enter(w->priv, wp)); } /**********************************************************************/ const char * Waiter_GetName(void) { if (waiter != NULL) return (waiter->name); else return ("(No Waiter?)"); } struct waiter * Waiter_New(void) { struct waiter *w; AN(waiter); AN(waiter->name); AN(waiter->init); AN(waiter->enter); AN(waiter->fini); w = calloc(1, sizeof (struct waiter) + waiter->size); AN(w); INIT_OBJ(w, WAITER_MAGIC); w->priv = (void*)(w + 1); w->impl = waiter; VTAILQ_INIT(&w->waithead); w->heap = VBH_new(w, waited_cmp, waited_update); waiter->init(w); return (w); } void Waiter_Destroy(struct waiter **wp) { struct waiter *w; TAKE_OBJ_NOTNULL(w, wp, WAITER_MAGIC); AZ(VBH_root(w->heap)); AN(w->impl->fini); w->impl->fini(w); FREE_OBJ(w); } varnish-7.5.0/bin/varnishd/waiter/cache_waiter_epoll.c000066400000000000000000000143271457605730600230570ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. 
* * Author: Rogerio Carvalho Schneider * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Recommended reading: libev(3) "EVBACKEND_EPOLL" section * - thank you, Marc Alexander Lehmann */ #include "config.h" #if defined(HAVE_EPOLL_CTL) #include #include #include "cache/cache_varnishd.h" #include "waiter/waiter.h" #include "waiter/waiter_priv.h" #include "vtim.h" #ifndef EPOLLRDHUP # define EPOLLRDHUP 0 #endif #define NEEV 8192 struct vwe { unsigned magic; #define VWE_MAGIC 0x6bd73424 int epfd; struct waiter *waiter; pthread_t thread; double next; int pipe[2]; unsigned nwaited; int die; struct lock mtx; }; /*--------------------------------------------------------------------*/ static void * vwe_thread(void *priv) { struct epoll_event *ev, *ep; struct waited *wp; struct waiter *w; double now, then; int i, n, active; struct vwe *vwe; char c; CAST_OBJ_NOTNULL(vwe, priv, VWE_MAGIC); w = vwe->waiter; CHECK_OBJ_NOTNULL(w, WAITER_MAGIC); THR_SetName("cache-epoll"); THR_Init(); ev = malloc(sizeof(struct epoll_event) * NEEV); AN(ev); now = VTIM_real(); while (1) { while (1) { Lck_Lock(&vwe->mtx); /* * XXX: We could avoid many syscalls here if we were * XXX: allowed to just close the fd's on timeout. */ then = Wait_HeapDue(w, &wp); if (wp == NULL) { vwe->next = now + 100; break; } else if (then > now) { vwe->next = then; break; } CHECK_OBJ_NOTNULL(wp, WAITED_MAGIC); AZ(epoll_ctl(vwe->epfd, EPOLL_CTL_DEL, wp->fd, NULL)); vwe->nwaited--; AN(Wait_HeapDelete(w, wp)); Lck_Unlock(&vwe->mtx); Wait_Call(w, wp, WAITER_TIMEOUT, now); } then = vwe->next - now; i = (int)ceil(1e3 * then); assert(i > 0); Lck_Unlock(&vwe->mtx); do { /* Due to a linux kernel bug, epoll_wait can return EINTR when the process is subjected to ptrace or waking from OS suspend. 
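 * (hence the retry loop: epoll_wait() is simply re-issued for as long
 * as it fails with EINTR)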
*/ n = epoll_wait(vwe->epfd, ev, NEEV, i); } while (n < 0 && errno == EINTR); assert(n >= 0); assert(n <= NEEV); now = VTIM_real(); for (ep = ev, i = 0; i < n; i++, ep++) { if (ep->data.ptr == vwe) { assert(read(vwe->pipe[0], &c, 1) == 1); continue; } CAST_OBJ_NOTNULL(wp, ep->data.ptr, WAITED_MAGIC); Lck_Lock(&vwe->mtx); active = Wait_HeapDelete(w, wp); Lck_Unlock(&vwe->mtx); if (!active) { VSL(SLT_Debug, NO_VXID, "epoll: spurious event (%d)", wp->fd); continue; } AZ(epoll_ctl(vwe->epfd, EPOLL_CTL_DEL, wp->fd, NULL)); vwe->nwaited--; if (ep->events & EPOLLIN) Wait_Call(w, wp, WAITER_ACTION, now); else if (ep->events & EPOLLERR) Wait_Call(w, wp, WAITER_REMCLOSE, now); else if (ep->events & EPOLLHUP) Wait_Call(w, wp, WAITER_REMCLOSE, now); else Wait_Call(w, wp, WAITER_REMCLOSE, now); } if (vwe->nwaited == 0 && vwe->die) break; } free(ev); closefd(&vwe->pipe[0]); closefd(&vwe->pipe[1]); closefd(&vwe->epfd); return (NULL); } /*--------------------------------------------------------------------*/ static int v_matchproto_(waiter_enter_f) vwe_enter(void *priv, struct waited *wp) { struct vwe *vwe; struct epoll_event ee; CAST_OBJ_NOTNULL(vwe, priv, VWE_MAGIC); ee.events = EPOLLIN | EPOLLRDHUP; ee.data.ptr = wp; Lck_Lock(&vwe->mtx); vwe->nwaited++; Wait_HeapInsert(vwe->waiter, wp); AZ(epoll_ctl(vwe->epfd, EPOLL_CTL_ADD, wp->fd, &ee)); /* If the epoll isn't due before our timeout, poke it via the pipe */ if (Wait_When(wp) < vwe->next) assert(write(vwe->pipe[1], "X", 1) == 1); Lck_Unlock(&vwe->mtx); return (0); } /*--------------------------------------------------------------------*/ static void v_matchproto_(waiter_init_f) vwe_init(struct waiter *w) { struct vwe *vwe; struct epoll_event ee; CHECK_OBJ_NOTNULL(w, WAITER_MAGIC); vwe = w->priv; INIT_OBJ(vwe, VWE_MAGIC); vwe->waiter = w; vwe->epfd = epoll_create(1); assert(vwe->epfd >= 0); Lck_New(&vwe->mtx, lck_waiter); AZ(pipe(vwe->pipe)); ee.events = EPOLLIN | EPOLLRDHUP; ee.data.ptr = vwe; AZ(epoll_ctl(vwe->epfd, EPOLL_CTL_ADD, vwe->pipe[0], &ee)); PTOK(pthread_create(&vwe->thread, NULL, vwe_thread, vwe)); } /*-------------------------------------------------------------------- * It is the callers responsibility to trigger all fd's waited on to * fail somehow. */ static void v_matchproto_(waiter_fini_f) vwe_fini(struct waiter *w) { struct vwe *vwe; void *vp; CAST_OBJ_NOTNULL(vwe, w->priv, VWE_MAGIC); Lck_Lock(&vwe->mtx); vwe->die = 1; assert(write(vwe->pipe[1], "Y", 1) == 1); Lck_Unlock(&vwe->mtx); PTOK(pthread_join(vwe->thread, &vp)); Lck_Delete(&vwe->mtx); } /*--------------------------------------------------------------------*/ #include "waiter/mgt_waiter.h" const struct waiter_impl waiter_epoll = { .name = "epoll", .init = vwe_init, .fini = vwe_fini, .enter = vwe_enter, .size = sizeof(struct vwe), }; #endif /* defined(HAVE_EPOLL_CTL) */ varnish-7.5.0/bin/varnishd/waiter/cache_waiter_kqueue.c000066400000000000000000000130411457605730600232330ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #if defined(HAVE_KQUEUE) #include "cache/cache_varnishd.h" #include #include #include "waiter/waiter.h" #include "waiter/waiter_priv.h" #include "vtim.h" #define NKEV 256 struct vwk { unsigned magic; #define VWK_MAGIC 0x1cc2acc2 int kq; struct waiter *waiter; pthread_t thread; double next; int pipe[2]; unsigned nwaited; int die; struct lock mtx; }; /*--------------------------------------------------------------------*/ static void * vwk_thread(void *priv) { struct vwk *vwk; struct kevent ke[NKEV], *kp; int j, n; double now, then; struct timespec ts; struct waited *wp; struct waiter *w; char c; CAST_OBJ_NOTNULL(vwk, priv, VWK_MAGIC); w = vwk->waiter; CHECK_OBJ_NOTNULL(w, WAITER_MAGIC); THR_SetName("cache-kqueue"); THR_Init(); now = VTIM_real(); while (1) { while (1) { Lck_Lock(&vwk->mtx); /* * XXX: We could avoid many syscalls here if we were * XXX: allowed to just close the fd's on timeout. 
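 *
 * (Each expired entry handled below costs a dedicated kevent() call with
 * EV_DELETE before the WAITER_TIMEOUT callback is made, because the fd
 * belongs to the callback's side and the waiter must not close it itself;
 * closing the fd would have removed it from the kqueue for free.)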
*/ then = Wait_HeapDue(w, &wp); if (wp == NULL) { vwk->next = now + 100; break; } else if (then > now) { vwk->next = then; break; } CHECK_OBJ_NOTNULL(wp, WAITED_MAGIC); EV_SET(ke, wp->fd, EVFILT_READ, EV_DELETE, 0, 0, NULL); AZ(kevent(vwk->kq, ke, 1, NULL, 0, NULL)); AN(Wait_HeapDelete(w, wp)); Lck_Unlock(&vwk->mtx); Wait_Call(w, wp, WAITER_TIMEOUT, now); } then = vwk->next - now; ts = VTIM_timespec(then); Lck_Unlock(&vwk->mtx); n = kevent(vwk->kq, NULL, 0, ke, NKEV, &ts); assert(n >= 0); assert(n <= NKEV); now = VTIM_real(); for (kp = ke, j = 0; j < n; j++, kp++) { assert(kp->filter == EVFILT_READ); if ((uintptr_t)ke[j].udata == (uintptr_t)vwk) { assert(read(vwk->pipe[0], &c, 1) == 1); continue; } CAST_OBJ_NOTNULL(wp, (void*)ke[j].udata, WAITED_MAGIC); Lck_Lock(&vwk->mtx); AN(Wait_HeapDelete(w, wp)); Lck_Unlock(&vwk->mtx); vwk->nwaited--; if (kp->flags & EV_EOF) Wait_Call(w, wp, WAITER_REMCLOSE, now); else Wait_Call(w, wp, WAITER_ACTION, now); } if (vwk->nwaited == 0 && vwk->die) break; } closefd(&vwk->pipe[0]); closefd(&vwk->pipe[1]); closefd(&vwk->kq); return (NULL); } /*--------------------------------------------------------------------*/ static int v_matchproto_(waiter_enter_f) vwk_enter(void *priv, struct waited *wp) { struct vwk *vwk; struct kevent ke; CAST_OBJ_NOTNULL(vwk, priv, VWK_MAGIC); EV_SET(&ke, wp->fd, EVFILT_READ, EV_ADD|EV_ONESHOT, 0, 0, wp); Lck_Lock(&vwk->mtx); vwk->nwaited++; Wait_HeapInsert(vwk->waiter, wp); AZ(kevent(vwk->kq, &ke, 1, NULL, 0, NULL)); /* If the kqueue isn't due before our timeout, poke it via the pipe */ if (Wait_When(wp) < vwk->next) assert(write(vwk->pipe[1], "X", 1) == 1); Lck_Unlock(&vwk->mtx); return (0); } /*--------------------------------------------------------------------*/ static void v_matchproto_(waiter_init_f) vwk_init(struct waiter *w) { struct vwk *vwk; struct kevent ke; CHECK_OBJ_NOTNULL(w, WAITER_MAGIC); vwk = w->priv; INIT_OBJ(vwk, VWK_MAGIC); vwk->waiter = w; vwk->kq = kqueue(); assert(vwk->kq >= 0); Lck_New(&vwk->mtx, lck_waiter); AZ(pipe(vwk->pipe)); EV_SET(&ke, vwk->pipe[0], EVFILT_READ, EV_ADD, 0, 0, vwk); AZ(kevent(vwk->kq, &ke, 1, NULL, 0, NULL)); PTOK(pthread_create(&vwk->thread, NULL, vwk_thread, vwk)); } /*-------------------------------------------------------------------- * It is the callers responsibility to trigger all fd's waited on to * fail somehow. */ static void v_matchproto_(waiter_fini_f) vwk_fini(struct waiter *w) { struct vwk *vwk; void *vp; CAST_OBJ_NOTNULL(vwk, w->priv, VWK_MAGIC); Lck_Lock(&vwk->mtx); vwk->die = 1; assert(write(vwk->pipe[1], "Y", 1) == 1); Lck_Unlock(&vwk->mtx); PTOK(pthread_join(vwk->thread, &vp)); Lck_Delete(&vwk->mtx); } /*--------------------------------------------------------------------*/ #include "waiter/mgt_waiter.h" const struct waiter_impl waiter_kqueue = { .name = "kqueue", .init = vwk_init, .fini = vwk_fini, .enter = vwk_enter, .size = sizeof(struct vwk), }; #endif /* defined(HAVE_KQUEUE) */ varnish-7.5.0/bin/varnishd/waiter/cache_waiter_poll.c000066400000000000000000000157401457605730600227120ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include #include #include #include "cache/cache_varnishd.h" #include "waiter/waiter.h" #include "waiter/waiter_priv.h" #include "vtim.h" struct vwp { unsigned magic; #define VWP_MAGIC 0x4b2cc735 struct waiter *waiter; int pipes[2]; pthread_t thread; struct pollfd *pollfd; struct waited **idx; size_t npoll; size_t hpoll; }; /*-------------------------------------------------------------------- * It would make much more sense to not use two large vectors, but * the poll(2) API forces us to use at least one, so ... KISS. */ static void vwp_extend_pollspace(struct vwp *vwp) { size_t inc; if (vwp->npoll < (1<<12)) inc = (1<<10); else if (vwp->npoll < (1<<14)) inc = (1<<12); else if (vwp->npoll < (1<<16)) inc = (1<<14); else inc = (1<<16); VSL(SLT_Debug, NO_VXID, "Acceptor poll space increased by %zu to %zu", inc, vwp->npoll + inc); vwp->pollfd = realloc(vwp->pollfd, (vwp->npoll + inc) * sizeof(*vwp->pollfd)); AN(vwp->pollfd); memset(vwp->pollfd + vwp->npoll, 0, inc * sizeof(*vwp->pollfd)); vwp->idx = realloc(vwp->idx, (vwp->npoll + inc) * sizeof(*vwp->idx)); AN(vwp->idx); memset(vwp->idx + vwp->npoll, 0, inc * sizeof(*vwp->idx)); for (; inc > 0; inc--) vwp->pollfd[vwp->npoll++].fd = -1; } /*--------------------------------------------------------------------*/ static void vwp_add(struct vwp *vwp, struct waited *wp) { CHECK_OBJ_NOTNULL(wp, WAITED_MAGIC); VSL(SLT_Debug, NO_VXID, "vwp: ADD %d", wp->fd); CHECK_OBJ_NOTNULL(vwp, VWP_MAGIC); if (vwp->hpoll == vwp->npoll) vwp_extend_pollspace(vwp); assert(vwp->hpoll < vwp->npoll); assert(vwp->pollfd[vwp->hpoll].fd == -1); AZ(vwp->idx[vwp->hpoll]); vwp->pollfd[vwp->hpoll].fd = wp->fd; vwp->pollfd[vwp->hpoll].events = POLLIN; vwp->idx[vwp->hpoll] = wp; vwp->hpoll++; Wait_HeapInsert(vwp->waiter, wp); } static void vwp_del(struct vwp *vwp, int n) { vwp->hpoll--; if (n != vwp->hpoll) { vwp->pollfd[n] = vwp->pollfd[vwp->hpoll]; vwp->idx[n] = vwp->idx[vwp->hpoll]; } memset(&vwp->pollfd[vwp->hpoll], 0, sizeof(*vwp->pollfd)); vwp->pollfd[vwp->hpoll].fd = -1; vwp->idx[vwp->hpoll] = NULL; } /*--------------------------------------------------------------------*/ static void vwp_dopipe(struct vwp *vwp) { struct waited *w[128]; ssize_t ss; int i; ss = read(vwp->pipes[0], w, sizeof w); assert(ss > 0); i = 0; while (ss) { if (w[i] == NULL) { assert(ss == sizeof w[0]); assert(vwp->hpoll == 1); pthread_exit(NULL); } CHECK_OBJ_NOTNULL(w[i], WAITED_MAGIC); 
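		/*
		 * A valid pointer read from the pipe is a new entry handed
		 * over by vwp_enter(); it gets added to the poll set below.
		 * (A NULL pointer, handled above, is the shutdown sentinel
		 * written by vwp_fini().)
		 */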
assert(w[i]->fd > 0); // no stdin vwp_add(vwp, w[i++]); ss -= sizeof w[0]; } } /*--------------------------------------------------------------------*/ static void * vwp_main(void *priv) { int t, v; struct vwp *vwp; struct waiter *w; struct waited *wp; double now, then; size_t z; THR_SetName("cache-poll"); THR_Init(); CAST_OBJ_NOTNULL(vwp, priv, VWP_MAGIC); w = vwp->waiter; while (1) { then = Wait_HeapDue(w, &wp); if (wp == NULL) t = -1; else t = (int)floor(1e3 * (then - VTIM_real())); assert(vwp->hpoll > 0); AN(vwp->pollfd); v = poll(vwp->pollfd, vwp->hpoll, t); assert(v >= 0); now = VTIM_real(); if (vwp->pollfd[0].revents) v--; for (z = 1; z < vwp->hpoll;) { assert(vwp->pollfd[z].fd != vwp->pipes[0]); wp = vwp->idx[z]; CHECK_OBJ_NOTNULL(wp, WAITED_MAGIC); if (v == 0 && Wait_HeapDue(w, NULL) > now) break; if (vwp->pollfd[z].revents) v--; then = Wait_When(wp); if (then <= now) { AN(Wait_HeapDelete(w, wp)); Wait_Call(w, wp, WAITER_TIMEOUT, now); vwp_del(vwp, z); } else if (vwp->pollfd[z].revents & POLLIN) { assert(wp->fd > 0); assert(wp->fd == vwp->pollfd[z].fd); AN(Wait_HeapDelete(w, wp)); Wait_Call(w, wp, WAITER_ACTION, now); vwp_del(vwp, z); } else { z++; } } if (vwp->pollfd[0].revents) vwp_dopipe(vwp); } NEEDLESS(return (NULL)); } /*--------------------------------------------------------------------*/ static int v_matchproto_(waiter_enter_f) vwp_enter(void *priv, struct waited *wp) { struct vwp *vwp; CAST_OBJ_NOTNULL(vwp, priv, VWP_MAGIC); if (write(vwp->pipes[1], &wp, sizeof wp) != sizeof wp) return (-1); return (0); } /*--------------------------------------------------------------------*/ static void v_matchproto_(waiter_init_f) vwp_init(struct waiter *w) { struct vwp *vwp; CHECK_OBJ_NOTNULL(w, WAITER_MAGIC); vwp = w->priv; INIT_OBJ(vwp, VWP_MAGIC); vwp->waiter = w; AZ(pipe(vwp->pipes)); // XXX: set write pipe non-blocking vwp->hpoll = 1; vwp_extend_pollspace(vwp); vwp->pollfd[0].fd = vwp->pipes[0]; vwp->pollfd[0].events = POLLIN; PTOK(pthread_create(&vwp->thread, NULL, vwp_main, vwp)); } /*-------------------------------------------------------------------- * It is the callers responsibility to trigger all fd's waited on to * fail somehow. */ static void v_matchproto_(waiter_fini_f) vwp_fini(struct waiter *w) { struct vwp *vwp; void *vp; CAST_OBJ_NOTNULL(vwp, w->priv, VWP_MAGIC); vp = NULL; while (vwp->hpoll > 1) (void)usleep(100000); // XXX: set write pipe blocking assert(write(vwp->pipes[1], &vp, sizeof vp) == sizeof vp); PTOK(pthread_join(vwp->thread, &vp)); closefd(&vwp->pipes[0]); closefd(&vwp->pipes[1]); free(vwp->pollfd); free(vwp->idx); } /*--------------------------------------------------------------------*/ #include "waiter/mgt_waiter.h" const struct waiter_impl waiter_poll = { .name = "poll", .init = vwp_init, .fini = vwp_fini, .enter = vwp_enter, .size = sizeof(struct vwp), }; varnish-7.5.0/bin/varnishd/waiter/cache_waiter_ports.c000066400000000000000000000165671457605730600231230ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006 Varnish Software AS * Copyright (c) 2007 OmniTI Computer Consulting, Inc. * Copyright (c) 2007 Theo Schlossnagle * Copyright 2010-2016 UPLEX - Nils Goroll Systemoptimierung * All rights reserved. * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * On concurrency: * * There are several options for the enter method to add an fd for the waiter * thread to look after: * * - share the binheap (requiring a mutex) - implemented for epoll and kqueues * * - send events to be entered through the events interface and keep the binheap * private to the waiter thread - implemented here. * * - some other message passing / mailbox * * It has not yet been determined which option is best. In the best case, by * sharing the binheap, we can save two port syscalls - but not always: * * - if the waited event has a timeout earlier than the first element on the * binheap, we need to kick the waiter thread anyway * * - if the waiter thread is busy, it will get the passed waited event together * with other events * * on the other end we need to sync on the mtx to protect the binheap. Solaris * uses userland adaptive mutexes: if the thread holding the lock is running, * spinlock, otherwise syscall. * * and the critical section for the mtx is basically "whenever not blocking in * port_getn", which does not sound too good with respect to scalability. * * At any rate, we could save even more syscalls by increasing nevents * (port_getn returns when nevents exist or the timeout is reached). This would * increase our latency reacting on POLLIN events. 
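 *
 * For comparison, a condensed sketch of the two enter() designs as
 * implemented in this directory (details and error handling elided, not
 * drop-in code):
 *
 *	// shared binheap: cache_waiter_epoll.c / cache_waiter_kqueue.c
 *	enter(priv, wp) {
 *		lock(&mtx);
 *		nwaited++;
 *		Wait_HeapInsert(waiter, wp);
 *		register_fd_with_kernel(wp->fd, wp);	// epoll_ctl()/kevent()
 *		if (Wait_When(wp) < next)		// earlier deadline?
 *			write(pipe[1], "X", 1);		// poke the waiter thread
 *		unlock(&mtx);
 *	}
 *
 *	// message passing: this file
 *	enter(priv, wp) {
 *		return (port_send(dport, 0, wp));	// waiter thread does the rest
 *	}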
* */ #include "config.h" #if defined(HAVE_PORT_CREATE) #include #include #include #include #include "cache/cache_varnishd.h" #include "waiter/waiter.h" #include "waiter/waiter_priv.h" #include "waiter/mgt_waiter.h" #include "vtim.h" // XXX replace with process.max-port-events bound to a sensible maximum #define MAX_EVENTS 256 struct vws { unsigned magic; #define VWS_MAGIC 0x0b771473 struct waiter *waiter; pthread_t thread; double next; int dport; unsigned nwaited; int die; }; static inline void vws_add(struct vws *vws, int fd, void *data) { // POLLIN should be all we need here AZ(port_associate(vws->dport, PORT_SOURCE_FD, fd, POLLIN, data)); } static inline void vws_del(struct vws *vws, int fd) { port_dissociate(vws->dport, PORT_SOURCE_FD, fd); } static inline void vws_port_ev(struct vws *vws, struct waiter *w, port_event_t *ev, double now) { struct waited *wp; if (ev->portev_source == PORT_SOURCE_USER) { CAST_OBJ_NOTNULL(wp, ev->portev_user, WAITED_MAGIC); assert(wp->fd >= 0); vws->nwaited++; Wait_HeapInsert(vws->waiter, wp); vws_add(vws, wp->fd, wp); } else { assert(ev->portev_source == PORT_SOURCE_FD); CAST_OBJ_NOTNULL(wp, ev->portev_user, WAITED_MAGIC); assert(wp->fd >= 0); vws->nwaited--; /* * port_getn does not implicitly disassociate * * Ref: http://opensolaris.org/jive/thread.jspa?\ * threadID=129476&tstart=0 */ vws_del(vws, wp->fd); AN(Wait_HeapDelete(w, wp)); Wait_Call(w, wp, ev->portev_events & POLLERR ? WAITER_REMCLOSE : WAITER_ACTION, now); } } static void * vws_thread(void *priv) { struct waited *wp; struct waiter *w; struct vws *vws; double now, then; struct timespec ts; const double max_t = 100.0; port_event_t ev[MAX_EVENTS]; u_int nevents; int ei, ret; CAST_OBJ_NOTNULL(vws, priv, VWS_MAGIC); w = vws->waiter; CHECK_OBJ_NOTNULL(w, WAITER_MAGIC); THR_SetName("cache-ports"); THR_Init(); now = VTIM_real(); while (!vws->die) { while (1) { then = Wait_HeapDue(w, &wp); if (wp == NULL) { vws->next = now + max_t; break; } else if (then > now) { vws->next = then; break; } CHECK_OBJ_NOTNULL(wp, WAITED_MAGIC); vws_del(vws, wp->fd); AN(Wait_HeapDelete(w, wp)); Wait_Call(w, wp, WAITER_TIMEOUT, now); } then = vws->next - now; ts = VTIM_timespec(then); /* * min number of events we accept. could consider to scale up * for efficiency, but as we always get all waiting events up to * the maximum, we'd only optimize the idle case sacrificing * some latency */ nevents = 1; /* * see discussion in * - https://issues.apache.org/bugzilla/show_bug.cgi?id=47645 * - http://mail.opensolaris.org/pipermail/\ * networking-discuss/2009-August/011979.html * * comment from apr/poll/unix/port.c : * * This confusing API can return an event at the same time * that it reports EINTR or ETIME. 
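 *
 * The handling below therefore treats a negative return as fatal only for
 * EBADF (closing the port in vws_fini() is the shutdown signal), accepts
 * EINTR and ETIME as expected, and always walks the returned ev[] array,
 * since nevents may have been updated even when port_getn() reports an
 * error.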
* */ ret = port_getn(vws->dport, ev, MAX_EVENTS, &nevents, &ts); now = VTIM_real(); if (ret < 0 && errno == EBADF) { /* close on dport is our stop signal */ AN(vws->die); break; } if (ret < 0) assert((errno == EINTR) || (errno == ETIME)); for (ei = 0; ei < nevents; ei++) vws_port_ev(vws, w, &ev[ei], now); } return (NULL); } /*--------------------------------------------------------------------*/ static int vws_enter(void *priv, struct waited *wp) { int r; struct vws *vws; CAST_OBJ_NOTNULL(vws, priv, VWS_MAGIC); r = port_send(vws->dport, 0, TRUST_ME(wp)); if (r == -1 && errno == EAGAIN) return (-1); AZ(r); return (0); } /*--------------------------------------------------------------------*/ static void v_matchproto_(waiter_init_f) vws_init(struct waiter *w) { struct vws *vws; CHECK_OBJ_NOTNULL(w, WAITER_MAGIC); vws = w->priv; INIT_OBJ(vws, VWS_MAGIC); vws->waiter = w; vws->dport = port_create(); assert(vws->dport >= 0); PTOK(pthread_create(&vws->thread, NULL, vws_thread, vws)); } /*--------------------------------------------------------------------*/ static void v_matchproto_(waiter_fini_f) vws_fini(struct waiter *w) { struct vws *vws; void *vp; CAST_OBJ_NOTNULL(vws, w->priv, VWS_MAGIC); vws->die = 1; closefd(&vws->dport); PTOK(pthread_join(vws->thread, &vp)); } /*--------------------------------------------------------------------*/ const struct waiter_impl waiter_ports = { .name = "ports", .init = vws_init, .fini = vws_fini, .enter = vws_enter, .size = sizeof(struct vws), }; #endif /* defined(HAVE_PORT_CREATE) */ varnish-7.5.0/bin/varnishd/waiter/mgt_waiter.c000066400000000000000000000036441457605730600214100ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
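 *
 * A note on the construction below: mgt_waiter.h declares
 * "extern const struct waiter_impl waiter_<nm>" for every WAITER(nm) entry
 * in tbl/waiters.h, and this file re-includes the same list to build the
 * name-to-implementation table consulted by Wait_config().  Assuming the
 * table lists, for instance, kqueue, epoll, ports and poll, the expansion
 * is roughly:
 *
 *	static const struct choice waiter_choice[] = {
 *		{ "kqueue", &waiter_kqueue },
 *		{ "epoll",  &waiter_epoll },
 *		{ "ports",  &waiter_ports },
 *		{ "poll",   &waiter_poll },
 *		{ NULL, NULL }
 *	};
 *
 * Wait_config(NULL) then selects the first entry as the default, while
 * Wait_config("poll") asks MGT_Pick() for the named implementation.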
* */ #include "config.h" #include #include "mgt/mgt.h" #include "waiter/mgt_waiter.h" #include "common/heritage.h" static const struct choice waiter_choice[] = { #define WAITER(nm) { #nm, &waiter_##nm }, #include "tbl/waiters.h" { NULL, NULL} }; struct waiter_impl const *waiter; void Wait_config(const char *arg) { ASSERT_MGT(); if (arg != NULL) waiter = MGT_Pick(waiter_choice, arg, "waiter"); else waiter = waiter_choice[0].ptr; } varnish-7.5.0/bin/varnishd/waiter/mgt_waiter.h000066400000000000000000000033031457605730600214050ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Private interfaces */ struct waiter_impl; /* mgt_waiter.c */ extern struct waiter_impl const * waiter; #define WAITER(nm) extern const struct waiter_impl waiter_##nm; #include "tbl/waiters.h" void Wait_config(const char *arg); varnish-7.5.0/bin/varnishd/waiter/waiter.h000066400000000000000000000051231457605730600205400ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Waiters are herders of connections: They monitor a large number of * connections and react if data arrives, the connection is closed or * if nothing happens for a specified timeout period. * * The "poll" waiter should be portable to just about anything, but it * is not very efficient because it has to setup state on each call to * poll(2). Almost all kernels have made better facilities for that * reason, needless to say, each with its own NIH-controlled API: * * - kqueue on FreeBSD * - epoll on Linux * - ports on Solaris * * Public interfaces */ struct waited; struct waiter; enum wait_event { WAITER_REMCLOSE, WAITER_TIMEOUT, WAITER_ACTION, WAITER_CLOSE }; typedef void waiter_handle_f(struct waited *, enum wait_event, vtim_real now); struct waited { unsigned magic; #define WAITED_MAGIC 0x1743992d int fd; unsigned idx; void *priv1; const void *priv2; waiter_handle_f *func; vtim_dur tmo; vtim_real idle; }; /* cache_waiter.c */ int Wait_Enter(const struct waiter *, struct waited *); struct waiter *Waiter_New(void); void Waiter_Destroy(struct waiter **); const char *Waiter_GetName(void); varnish-7.5.0/bin/varnishd/waiter/waiter_priv.h000066400000000000000000000050751457605730600216060ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
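 *
 * A minimal sketch of how the public interface from waiter.h is meant to
 * be used (hypothetical callback and fd, error handling elided):
 *
 *	static void v_matchproto_(waiter_handle_f)
 *	my_cb(struct waited *wp, enum wait_event ev, vtim_real now)
 *	{
 *		(void)now;
 *		if (ev == WAITER_ACTION)
 *			;			// data ready: hand wp->fd to a worker
 *		else
 *			closefd(&wp->fd);	// timeout, close or remote close
 *	}
 *
 *	struct waiter *w = Waiter_New();
 *	struct waited wtd;
 *	INIT_OBJ(&wtd, WAITED_MAGIC);
 *	wtd.fd = some_fd;			// hypothetical connection fd
 *	wtd.func = my_cb;
 *	wtd.idle = VTIM_real();
 *	wtd.tmo = 30.0;				// seconds until WAITER_TIMEOUT
 *	if (Wait_Enter(w, &wtd)) {
 *		// registration failed, caller keeps ownership of the fd
 *	}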
* * Private interfaces */ struct waited; struct vbh; struct waiter { unsigned magic; #define WAITER_MAGIC 0x17c399db const struct waiter_impl *impl; VTAILQ_ENTRY(waiter) list; VTAILQ_HEAD(,waited) waithead; void *priv; struct vbh *heap; }; typedef void waiter_init_f(struct waiter *); typedef void waiter_fini_f(struct waiter *); typedef int waiter_enter_f(void *priv, struct waited *); typedef void waiter_inject_f(const struct waiter *, struct waited *); typedef void waiter_evict_f(const struct waiter *, struct waited *); struct waiter_impl { const char *name; waiter_init_f *init; waiter_fini_f *fini; waiter_enter_f *enter; waiter_inject_f *inject; size_t size; }; static inline double Wait_When(const struct waited *wp) { CHECK_OBJ_NOTNULL(wp, WAITED_MAGIC); return (wp->idle + wp->tmo); } void Wait_Call(const struct waiter *, struct waited *, enum wait_event ev, double now); void Wait_HeapInsert(const struct waiter *, struct waited *); int Wait_HeapDelete(const struct waiter *, const struct waited *); double Wait_HeapDue(const struct waiter *, struct waited **); varnish-7.5.0/bin/varnishhist/000077500000000000000000000000001457605730600163245ustar00rootroot00000000000000varnish-7.5.0/bin/varnishhist/Makefile.am000066400000000000000000000005161457605730600203620ustar00rootroot00000000000000# AM_CPPFLAGS = \ -I$(top_srcdir)/include \ -I$(top_builddir)/include \ @CURSES_CFLAGS@ bin_PROGRAMS = varnishhist varnishhist_SOURCES = \ varnishhist.c \ varnishhist_options.h \ varnishhist_profiles.h varnishhist_LDADD = \ $(top_builddir)/lib/libvarnishapi/libvarnishapi.la \ -lm @CURSES_LIBS@ ${RT_LIBS} ${PTHREAD_LIBS} varnish-7.5.0/bin/varnishhist/flint.lnt000066400000000000000000000003531457605730600201600ustar00rootroot00000000000000// Copyright (c) 2017-2018 Varnish Software AS // SPDX-License-Identifier: BSD-2-Clause // See LICENSE file for full text of license -efile(451, "varnishhist_profiles.h") -efile(451, "varnishhist_options.h") -sem(profile_error, r_no) varnish-7.5.0/bin/varnishhist/flint.sh000066400000000000000000000003701457605730600177740ustar00rootroot00000000000000#!/bin/sh # # Copyright (c) 2017-2021 Varnish Software AS # SPDX-License-Identifier: BSD-2-Clause # See LICENSE file for full text of license FLOPS=' *.c ../../lib/libvarnishapi/flint.lnt ../../lib/libvarnishapi/*.c ' ../../tools/flint_skel.sh varnish-7.5.0/bin/varnishhist/varnishhist.c000066400000000000000000000334101457605730600210330ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * Author: Dag-Erling Smørgrav * Author: Guillaume Quintard * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Log tailer for Varnish */ #include "config.h" #include #include #include #include #include #include #include #define VOPT_DEFINITION #define VOPT_INC "varnishhist_options.h" #include "vdef.h" #include "vcurses.h" #include "vapi/vsl.h" #include "vapi/vsm.h" #include "vapi/voptget.h" #include "vapi/vsig.h" #include "vas.h" #include "vut.h" #include "vtim.h" #if 1 #define AC(x) assert((x) != ERR) #else #define AC(x) x #endif #define HIST_N 2000 /* how far back we remember */ #define HIST_RES 100 /* bucket resolution */ static struct VUT *vut; static int hist_low; static int hist_high; static int hist_range; static unsigned hist_buckets; static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER; static int end_of_file = 0; static unsigned ms_delay = 1000; static unsigned rr_hist[HIST_N]; static unsigned nhist; static unsigned next_hist; static unsigned *bucket_miss; static unsigned *bucket_hit; static char *format; static int match_tag; static double timebend = 0, t0; static double vsl_t0 = 0, vsl_to, vsl_ts = 0; static pthread_cond_t timebend_cv; static double log_ten; static char *ident; static const unsigned scales[] = { 1, 2, 3, 4, 5, 10, 15, 20, 25, 50, 100, 250, 500, 1000, 2500, 5000, 10000, 25000, 50000, 100000, UINT_MAX }; struct profile { const char *name; char VSL_arg; enum VSL_tag_e tag; const char *prefix; int field; int hist_low; int hist_high; }; #define HIS_PROF(name,vsl_arg,tag,prefix,field,hist_low,high_high,doc) \ {name,vsl_arg,tag,prefix,field,hist_low,high_high}, #define HIS_NO_PREFIX NULL #define HIS_CLIENT 'c' #define HIS_BACKEND 'b' static const struct profile profiles[] = { #include "varnishhist_profiles.h" { NULL } }; #undef HIS_NO_PREFIX #undef HIS_BACKEND #undef HIS_CLIENT #undef HIS_PROF static const struct profile *active_profile; static void update(void) { char t[VTIM_FORMAT_SIZE]; const unsigned w = COLS / hist_range; const unsigned n = w * hist_range; unsigned bm[n], bh[n]; unsigned max; unsigned scale; int i, j; unsigned k, l; /* Draw horizontal axis */ for (k = 0; k < n; ++k) (void)mvaddch(LINES - 2, k, '-'); for (i = 0, j = hist_low; i < hist_range; ++i, ++j) { (void)mvaddch(LINES - 2, w * i, '+'); mvprintw(LINES - 1, w * i, "|1e%d", j); } if (end_of_file) mvprintw(0, 0, "%*s", COLS - 1, "EOF"); else mvprintw(0, 0, "%*s", COLS - 1, ident); /* count our flock */ memset(bm, 0, sizeof bm); memset(bh, 0, sizeof bh); for (k = 0, max = 1; k < hist_buckets; ++k) { l = k * n / hist_buckets; assert(l < n); bm[l] += bucket_miss[k]; bh[l] += bucket_hit[k]; max = vmax(max, bm[l] + bh[l]); } /* scale,time */ assert(LINES - 3 >= 0); for (i = 0; max / scales[i] > (unsigned)(LINES - 3); ++i) /* nothing */ ; scale = scales[i]; if (vsl_t0 > 0) { VTIM_format(vsl_ts, t); mvprintw(0, 0, "1:%u, n = %u, d = %g @ %s x %g", scale, nhist, 1e-3 * ms_delay, t, timebend); } else mvprintw(0, 0, "1:%u, n = %u, d = %g", scale, nhist, 1e-3 * ms_delay); for (j = 5; j < LINES - 2; j += 5) mvprintw((LINES - 2) - j, 0, "%u_", j * scale); /* show them */ for (k = 0; k < n; ++k) { for (l = 
0; l < bm[k] / scale; ++l) (void)mvaddch((LINES - 3) - l, k, '#'); for (; l < (bm[k] + bh[k]) / scale; ++l) (void)mvaddch((LINES - 3) - l, k, '|'); } } inline static void upd_vsl_ts(const char *p) { if (timebend == 0) return; p = strchr(p, ' '); if (p == NULL) return; vsl_ts = vmax_t(double, vsl_ts, strtod(p + 1, NULL)); } static void delorean(void) { int i; double t = VTIM_mono(); if (vsl_t0 == 0) vsl_to = vsl_t0 = vsl_ts; assert(t > t0); vsl_to = vsl_t0 + (t - t0) * timebend; if (vsl_ts > vsl_to) { double when = VTIM_real() + vsl_ts - vsl_to; struct timespec ts = VTIM_timespec(when); i = pthread_cond_timedwait(&timebend_cv, &mtx, &ts); assert(i == 0 || i == ETIMEDOUT); } } static int v_matchproto_ (VSLQ_dispatch_f) accumulate(struct VSL_data *vsl, struct VSL_transaction * const pt[], void *priv) { int i, tag, skip, match, hit; unsigned u; double value = 0; struct VSL_transaction *tr; const char *tsp; enum vsl_status stat; (void)vsl; (void)priv; for (tr = pt[0]; tr != NULL; tr = *++pt) { if (VSIG_int || VSIG_term || VSIG_hup) return (-1); if (tr->reason == VSL_r_esi) { /* Skip ESI requests */ continue; } hit = 0; skip = 0; match = 0; tsp = NULL; while (skip == 0) { stat = VSL_Next(tr->c); if (stat == vsl_e_overrun) { /* need to skip forward */ PTOK(pthread_mutex_lock(&mtx)); vsl_to = vsl_t0 = vsl_ts = 0; t0 = VTIM_mono(); PTOK(pthread_mutex_unlock(&mtx)); break; } if (stat != vsl_more) break; /* get the value we want and register if it's a hit */ tag = VSL_TAG(tr->c->rec.ptr); if (VSL_tagflags[tag]) continue; switch (tag) { case SLT_Hit: hit = 1; break; case SLT_VCL_return: if (!strcasecmp(VSL_CDATA(tr->c->rec.ptr), "restart") || !strcasecmp(VSL_CDATA(tr->c->rec.ptr), "retry")) skip = 1; break; case SLT_Timestamp: tsp = VSL_CDATA(tr->c->rec.ptr); /* FALLTHROUGH */ default: if (tag != match_tag) break; if (active_profile->prefix && strncmp(VSL_CDATA(tr->c->rec.ptr), active_profile->prefix, strlen(active_profile->prefix)) != 0) break; i = sscanf(VSL_CDATA(tr->c->rec.ptr), format, &value); if (i != 1) break; match = 1; break; } } if (skip || !match || value <= 0) continue; /* select bucket */ i = vlimit_t(int, lround(HIST_RES * log(value) / log_ten), hist_low * HIST_RES, hist_high * HIST_RES - 1) - hist_low * HIST_RES; assert(i >= 0); assert((unsigned)i < hist_buckets); PTOK(pthread_mutex_lock(&mtx)); /* * only parse the last tsp seen in this transaction - * it should be the latest. 
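 *
 * The bookkeeping below: rr_hist[] is a ring buffer of the last HIST_N
 * observations, used to age old data out of the buckets again.  Each slot
 * stores the bucket index, offset by hist_buckets when the observation was
 * a cache hit, so a single unsigned encodes both the bucket and the
 * hit/miss flag.  The bucket itself comes from the log10 scaling above,
 * e.g. with the default responsetime profile (hist_low = -6,
 * hist_high = 3, HIST_RES = 100) a 0.25 s response gives
 * lround(100 * log10(0.25)) = -60, clamped to [-600, 299] and shifted by
 * +600, i.e. bucket 540 of the 900 buckets.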
*/ if (tsp) upd_vsl_ts(tsp); /* phase out old data */ if (nhist == HIST_N) { u = rr_hist[next_hist]; if (u >= hist_buckets) { u -= hist_buckets; assert(u < hist_buckets); assert(bucket_hit[u] > 0); bucket_hit[u]--; } else { assert(bucket_miss[u] > 0); bucket_miss[u]--; } } else { ++nhist; } /* phase in new data */ if (hit) { bucket_hit[i]++; rr_hist[next_hist] = i + hist_buckets; } else { bucket_miss[i]++; rr_hist[next_hist] = i; } if (++next_hist == HIST_N) { next_hist = 0; } if (vsl_ts >= vsl_to) delorean(); PTOK(pthread_mutex_unlock(&mtx)); } return (0); } static void * v_matchproto_(pthread_t) do_curses(void *arg) { int ch; (void)arg; initscr(); AC(raw()); AC(noecho()); AC(nonl()); AC(intrflush(stdscr, FALSE)); AC(curs_set(0)); AC(erase()); while (!VSIG_int && !VSIG_term && !VSIG_hup) { AC(erase()); PTOK(pthread_mutex_lock(&mtx)); update(); PTOK(pthread_mutex_unlock(&mtx)); AC(refresh()); assert(ms_delay > 0); timeout(ms_delay); switch ((ch = getch())) { case ERR: break; #ifdef KEY_RESIZE case KEY_RESIZE: AC(erase()); break; #endif case '\014': /* Ctrl-L */ case '\024': /* Ctrl-T */ redrawwin(stdscr); AC(refresh()); break; case '\032': /* Ctrl-Z */ AC(endwin()); AZ(raise(SIGTSTP)); break; case '\003': /* Ctrl-C */ case '\021': /* Ctrl-Q */ case 'Q': case 'q': AZ(raise(SIGINT)); break; case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': ms_delay = 1000U << (ch - '0'); break; case '+': ms_delay = vmax(ms_delay >> 1, 1U); break; case '-': ms_delay *= 2; break; case '>': case '<': /* see below */ break; default: AC(beep()); break; } if (ch == '<' || ch == '>') { PTOK(pthread_mutex_lock(&mtx)); vsl_to = vsl_t0 = vsl_ts; t0 = VTIM_mono(); if (timebend == 0) timebend = 1; else if (ch == '<') timebend /= 2; else timebend *= 2; PTOK(pthread_cond_broadcast(&timebend_cv)); PTOK(pthread_mutex_unlock(&mtx)); } } AC(endwin()); return (NULL); } /*--------------------------------------------------------------------*/ static void v_noreturn_ profile_error(const char *s) { fprintf(stderr, "-P: '%s' is not a valid" " profile name or definition\n", s); exit(1); } int main(int argc, char **argv) { int i; char *colon; const char *ptag, *profile = "responsetime"; pthread_t thr; int fnum; struct profile cli_p = {0}; vut = VUT_InitProg(argc, argv, &vopt_spec); AN(vut); PTOK(pthread_cond_init(&timebend_cv, NULL)); while ((i = getopt(argc, argv, vopt_spec.vopt_optstring)) != -1) { switch (i) { case 'h': /* Usage help */ VUT_Usage(vut, &vopt_spec, 0); case 'p': ms_delay = lround(1e3 * strtod(optarg, NULL)); if (ms_delay == 0) VUT_Error(vut, 1, "-p: invalid '%s'", optarg); break; case 'P': colon = strchr(optarg, ':'); /* no colon, take the profile as a name */ if (colon == NULL) { profile = optarg; break; } /* else check if valid definition */ if (colon == optarg + 1 && (*optarg == 'b' || *optarg == 'c' || *optarg == 'E')) { cli_p.VSL_arg = *optarg; ptag = colon + 1; colon = strchr(colon + 1, ':'); if (colon == NULL) profile_error(optarg); } else { ptag = optarg; cli_p.VSL_arg = 'c'; } assert(colon); match_tag = VSL_Name2Tag(ptag, colon - ptag); if (match_tag < 0) VUT_Error(vut, 1, "-P: '%s' is not a valid tag name", optarg); if (VSL_tagflags[match_tag]) VUT_Error(vut, 1, "-P: '%s' is an unsafe or binary record", optarg); cli_p.prefix = colon + 1; colon = strchr(colon + 1, ':'); if (colon == NULL) profile_error(optarg); *colon = '\0'; if (*cli_p.prefix == '\0') cli_p.prefix = NULL; if (sscanf(colon + 1, "%d", &cli_p.field) != 1) profile_error(optarg); cli_p.name = 
"custom"; cli_p.tag = (enum VSL_tag_e)match_tag; cli_p.hist_low = -6; cli_p.hist_high = 3; profile = NULL; active_profile = &cli_p; colon = strchr(colon + 1, ':'); if (colon == NULL) break; if (sscanf(colon + 1, "%d:%d", &cli_p.hist_low, &cli_p.hist_high) != 2) profile_error(optarg); break; case 'B': timebend = strtod(optarg, NULL); if (timebend == 0) VUT_Error(vut, 1, "-B: being able to bend time does not" " mean we can stop it" " (invalid factor '%s')", optarg); if (timebend < 0) VUT_Error(vut, 1, "-B: being able to bend time does not" " mean we can make it go backwards" " (invalid factor '%s')", optarg); break; default: if (!VUT_Arg(vut, i, optarg)) VUT_Usage(vut, &vopt_spec, 1); } } if (optind != argc) VUT_Usage(vut, &vopt_spec, 1); /* Check for valid grouping mode */ assert(vut->g_arg < VSL_g__MAX); if (vut->g_arg != VSL_g_vxid && vut->g_arg != VSL_g_request) VUT_Error(vut, 1, "Invalid grouping mode: %s" " (only vxid and request are supported)", VSLQ_grouping[vut->g_arg]); if (profile) { for (active_profile = profiles; active_profile->name; active_profile++) { if (strcmp(active_profile->name, profile) == 0) break; } } AN(active_profile); if (!active_profile->name) VUT_Error(vut, 1, "-P: No such profile '%s'", profile); assert(active_profile->VSL_arg == 'b' || active_profile->VSL_arg == 'c' || active_profile->VSL_arg == 'E'); assert(VUT_Arg(vut, active_profile->VSL_arg, NULL)); match_tag = active_profile->tag; fnum = active_profile->field; hist_low = active_profile->hist_low; hist_high = active_profile->hist_high; hist_range = hist_high - hist_low; hist_buckets = hist_range * HIST_RES; bucket_hit = calloc(hist_buckets, sizeof *bucket_hit); bucket_miss = calloc(hist_buckets, sizeof *bucket_miss); if (timebend > 0) t0 = VTIM_mono(); format = malloc(4L * fnum); AN(format); for (i = 0; i < fnum - 1; i++) strcpy(format + 4 * i, "%*s "); strcpy(format + 4 * (fnum - 1), "%lf"); log_ten = log(10.0); VUT_Setup(vut); if (vut->vsm) ident = VSM_Dup(vut->vsm, "Arg", "-i"); else ident = strdup(""); PTOK(pthread_create(&thr, NULL, do_curses, NULL)); vut->dispatch_f = accumulate; vut->dispatch_priv = NULL; (void)VUT_Main(vut); end_of_file = 1; PTOK(pthread_join(thr, NULL)); VUT_Fini(&vut); exit(0); } varnish-7.5.0/bin/varnishhist/varnishhist_options.h000066400000000000000000000070401457605730600226130ustar00rootroot00000000000000/*- * Copyright (c) 2014-2015 Varnish Software AS * All rights reserved. * * Author: Martin Blix Grydeland * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Option definitions for varnishhist */ #include "vapi/vapi_options.h" #include "vut_options.h" #define HIS_OPT_g \ VOPT("g:", "[-g ]", \ "Grouping mode (default: vxid)", \ "The grouping of the log records. The default is to group" \ " by vxid." \ ) #define HIS_OPT_p \ VOPT("p:", "[-p ]", "Refresh period", \ "Specified the number of seconds between screen refreshes." \ " Default is 1 second, and can be changed at runtime by" \ " pressing the [0-9] keys (powers of 2 in seconds" \ " or + and - (double/halve the speed)." \ ) #define HIS_OPT_P \ VOPT("P:", "[-P <[cb:]tag:[prefix]:field_num[:min:max]>]", \ "Custom profile definition", \ "Graph the given custom definition defined as: an optional" \ " (c)lient, (b)ackend or (E)SI filter (defaults to client),"\ " the tag we'll look for, a prefix to look for (can be" \ " empty, but must be terminated by a colon) and the field" \ " number of the value we are interested in. min and max are"\ " the boundaries of the graph in powers of ten and default" \ " to -6 and 3." \ ) #define HIS_OPT_B \ VOPT("B:", "[-B ]", \ "Time bending", \ "Factor to bend time by. Particularly useful when" \ " [-r]eading from a vsl file. =1 process in near real" \ " time, <1 slow-motion, >1 time-lapse (useless unless" \ " reading from a file). At runtime, < halves and" \ " > doubles." \ ) HIS_OPT_B VSL_OPT_C VUT_OPT_d HIS_OPT_g VUT_OPT_h VSL_OPT_L VUT_OPT_n HIS_OPT_p #define HIS_CLIENT "client" #define HIS_BACKEND "backend" #define HIS_NO_PREFIX "" #define HIS_PROF(name,cb,tg,prefix,fld,hist_low,high_high,doc) \ VOPT("P:", "[-P " name "]", \ "Predefined " cb " profile", \ "Predefined " cb " profile: " doc \ " (field " #fld " of " #tg " " prefix " VSL tag)." \ ) #include "varnishhist_profiles.h" #undef HIS_NO_PREFIX #undef HIS_BACKEND #undef HIS_CLIENT #undef HIS_PROF HIS_OPT_P VUT_OPT_Q VUT_OPT_q VUT_OPT_r VUT_OPT_t VSL_OPT_T VUT_GLOBAL_OPT_V varnish-7.5.0/bin/varnishhist/varnishhist_profiles.h000066400000000000000000000067261457605730600227550ustar00rootroot00000000000000/*- * Copyright 2016 UPLEX - Nils Goroll Systemoptimierung * All rights reserved. * * Author: Nils Goroll * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Profile definitions for varnishhist */ // client HIS_PROF( "responsetime", // name HIS_CLIENT, // HIS_CLIENT | HIS_BACKEND SLT_Timestamp, // tag "Process:", // prefix 3, // field -6, // hist_low 3, // hist_high "graph the total time from start of request processing" " (first byte received) until ready to deliver the" " client response" ) HIS_PROF( "reqbodytime", // name HIS_CLIENT, // HIS_CLIENT | HIS_BACKEND SLT_Timestamp, // tag "ReqBody:", // prefix 3, // field -6, // hist_low 3, // hist_high "graph the time for reading the request body" ) HIS_PROF( "size", // name HIS_CLIENT, // HIS_CLIENT | HIS_BACKEND SLT_ReqAcct, // tag HIS_NO_PREFIX, // prefix 5, // field 1, // hist_low 8, // hist_high "graph the size of responses" ) // backend HIS_PROF( "Bereqtime", // name HIS_BACKEND, // HIS_CLIENT | HIS_BACKEND SLT_Timestamp, // tag "Bereq:", // prefix 3, // field -6, // hist_low 3, // hist_high "graph the time from beginning of backend processing" " until a backend request is sent completely" ) HIS_PROF( "Beresptime", // name HIS_BACKEND, // HIS_CLIENT | HIS_BACKEND SLT_Timestamp, // tag "Beresp:", // prefix 3, // field -6, // hist_low 3, // hist_high "graph the time from beginning of backend processing" " until the response headers are being received completely" ) HIS_PROF( "BerespBodytime", // name HIS_BACKEND, // HIS_CLIENT | HIS_BACKEND SLT_Timestamp, // tag "BerespBody:", // prefix 3, // field -6, // hist_low 3, // hist_high "graph the time from beginning of backend processing" " until the response body has been received" ) HIS_PROF( "Besize", // name HIS_BACKEND, // HIS_CLIENT | HIS_BACKEND SLT_BereqAcct, // tag HIS_NO_PREFIX, // prefix 5, // field 1, // hist_low 8, // hist_high "graph the backend response body size" ) varnish-7.5.0/bin/varnishlog/000077500000000000000000000000001457605730600161365ustar00rootroot00000000000000varnish-7.5.0/bin/varnishlog/Makefile.am000066400000000000000000000004221457605730600201700ustar00rootroot00000000000000# AM_CPPFLAGS = \ -I$(top_srcdir)/include \ -I$(top_builddir)/include bin_PROGRAMS = varnishlog varnishlog_SOURCES = \ varnishlog.c \ varnishlog_options.h varnishlog_LDADD = \ $(top_builddir)/lib/libvarnishapi/libvarnishapi.la \ ${RT_LIBS} ${LIBM} ${PTHREAD_LIBS} varnish-7.5.0/bin/varnishlog/flint.lnt000066400000000000000000000011571457605730600177750ustar00rootroot00000000000000// Copyright (c) 2010-2018 Varnish Software AS // SPDX-License-Identifier: BSD-2-Clause // See LICENSE file for full text of license -e712 // 14 Info 712 Loss of precision (___) (___ to ___) -e747 // 16 Info 747 Significant prototype coercion (___) ___ to ___ -e763 // Redundant declaration for symbol '...' previously declared -efile(451, "varnishlog_options.h") -e732 // Loss of sign (arg. no. 
2) (int to unsigned -e713 // Loss of precision (assignment) (unsigned long long to long long) -e788 // enum constant '___' not used within defaulted switch -e641 // Converting enum '___' to '___' varnish-7.5.0/bin/varnishlog/flint.sh000066400000000000000000000004251457605730600176070ustar00rootroot00000000000000#!/bin/sh # # Copyright (c) 2010-2021 Varnish Software AS # SPDX-License-Identifier: BSD-2-Clause # See LICENSE file for full text of license FLOPS=' -DVARNISH_STATE_DIR=\"foo\" *.c ../../lib/libvarnishapi/flint.lnt ../../lib/libvarnishapi/*.c ' ../../tools/flint_skel.sh varnish-7.5.0/bin/varnishlog/varnishlog.c000066400000000000000000000104221457605730600204550ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * Author: Martin Blix Grydeland * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Log tailer for Varnish */ #include "config.h" #include #include #include #include #include #include #include #define VOPT_DEFINITION #define VOPT_INC "varnishlog_options.h" #include "vdef.h" #include "vapi/vsm.h" #include "vapi/vsl.h" #include "vapi/voptget.h" #include "vas.h" #include "vut.h" #include "miniobj.h" static struct VUT *vut; static struct log { /* Options */ int a_opt; int A_opt; int u_opt; char *w_arg; /* State */ FILE *fo; } LOG; static void openout(int append) { AN(LOG.w_arg); if (LOG.A_opt) { if (!strcmp(LOG.w_arg, "-")) LOG.fo = stdout; else LOG.fo = fopen(LOG.w_arg, append ? "a" : "w"); } else LOG.fo = VSL_WriteOpen(vut->vsl, LOG.w_arg, append, LOG.u_opt); if (LOG.fo == NULL) VUT_Error(vut, 2, "Cannot open output file (%s)", LOG.A_opt ? 
strerror(errno) : VSL_Error(vut->vsl)); vut->dispatch_priv = LOG.fo; } static int v_matchproto_(VUT_cb_f) rotateout(struct VUT *v) { assert(v == vut); AN(LOG.w_arg); AN(LOG.fo); (void)fclose(LOG.fo); openout(1); AN(LOG.fo); return (0); } static int v_matchproto_(VUT_cb_f) flushout(struct VUT *v) { assert(v == vut); AN(LOG.fo); if (fflush(LOG.fo)) return (-5); return (0); } int main(int argc, char * const *argv) { int opt; vut = VUT_InitProg(argc, argv, &vopt_spec); AN(vut); memset(&LOG, 0, sizeof LOG); while ((opt = getopt(argc, argv, vopt_spec.vopt_optstring)) != -1) { switch (opt) { case 'a': /* Append to file */ LOG.a_opt = 1; break; case 'A': /* Text output */ LOG.A_opt = 1; break; case 'h': /* Usage help */ VUT_Usage(vut, &vopt_spec, 0); case 'u': /* Unbuffered output */ LOG.u_opt = 1; break; case 'w': /* Write to file */ REPLACE(LOG.w_arg, optarg); break; default: if (!VUT_Arg(vut, opt, optarg)) VUT_Usage(vut, &vopt_spec, 1); } } if (optind != argc) VUT_Usage(vut, &vopt_spec, 1); if (vut->D_opt && !LOG.w_arg) VUT_Error(vut, 1, "Missing -w option"); if (vut->D_opt && !strcmp(LOG.w_arg, "-")) VUT_Error(vut, 1, "Daemon cannot write to stdout"); /* Setup output */ if (LOG.A_opt || !LOG.w_arg) { vut->dispatch_f = VSL_PrintTransactions; } else { vut->dispatch_f = VSL_WriteTransactions; /* * inefficient but not crossing API layers * first x argument avoids initial suppression of all tags */ AN(VUT_Arg(vut, 'x', "Link")); AN(VUT_Arg(vut, 'i', "Link")); AN(VUT_Arg(vut, 'i', "Begin")); AN(VUT_Arg(vut, 'i', "End")); } if (LOG.w_arg) { openout(LOG.a_opt); AN(LOG.fo); if (vut->D_opt) vut->sighup_f = rotateout; } else LOG.fo = stdout; vut->idle_f = flushout; VUT_Setup(vut); (void)VUT_Main(vut); VUT_Fini(&vut); (void)flushout(NULL); exit(0); } varnish-7.5.0/bin/varnishlog/varnishlog_options.h000066400000000000000000000061151457605730600222410ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Martin Blix Grydeland * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "vapi/vapi_options.h" #include "vut_options.h" #define LOG_NOTICE_w " This option has no effect without the -w option." 
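/*
 * Illustrative use of the options defined below (paths are examples only):
 *
 *	varnishlog -D -w /var/log/varnish/varnish.bin
 *
 * runs as a daemon writing VSL binary records to the file and, per the -w
 * documentation, reopens it on SIGHUP so the old file can be rotated away;
 * the result can later be replayed with
 *
 *	varnishlog -r /var/log/varnish/varnish.bin
 *
 * Adding -A switches the output to text, which -r cannot read back.
 */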
#define LOG_OPT_a \ VOPT("a", "[-a]", "Append to file", \ "When writing output to a file with the -w option, append" \ " to it rather than overwrite it." LOG_NOTICE_w \ ) #define LOG_OPT_A \ VOPT("A", "[-A]", "Text output", \ "When writing output to a file with the -w option, output" \ " data in ascii format." LOG_NOTICE_w \ ) #define LOG_OPT_u \ VOPT("u", "[-u]", "Unbuffered output", \ "When writing output to a file with the -w option, output" \ " data is not buffered." LOG_NOTICE_w \ ) #define LOG_OPT_w \ VOPT("w:", "[-w ]", "Output filename", \ "Redirect output to file. The file will be overwritten" \ " unless the -a option was specified. If the application" \ " receives a SIGHUP in daemon mode the file will be " \ " reopened allowing the old one to be rotated away. The" \ " file can then be read by varnishlog and other tools with" \ " the -r option, unless the -A option was specified. This" \ " option is required when running in daemon mode. If the" \ " filename is -, varnishlog writes to the standard output" \ " and cannot work as a daemon." \ ) LOG_OPT_a LOG_OPT_A VSL_OPT_b VSL_OPT_c VSL_OPT_C VUT_OPT_d VUT_GLOBAL_OPT_D VSL_OPT_E VUT_OPT_g VUT_OPT_h VSL_OPT_i VSL_OPT_I VUT_OPT_k VSL_OPT_L VUT_OPT_n VUT_GLOBAL_OPT_P VUT_OPT_Q VUT_OPT_q VUT_OPT_r VSL_OPT_R VUT_OPT_t VSL_OPT_T LOG_OPT_u VSL_OPT_v VUT_GLOBAL_OPT_V LOG_OPT_w VSL_OPT_x VSL_OPT_X varnish-7.5.0/bin/varnishncsa/000077500000000000000000000000001457605730600163015ustar00rootroot00000000000000varnish-7.5.0/bin/varnishncsa/Makefile.am000066400000000000000000000004071457605730600203360ustar00rootroot00000000000000# AM_CPPFLAGS = \ -I$(top_srcdir)/include \ -I$(top_builddir)/include bin_PROGRAMS = varnishncsa varnishncsa_SOURCES = \ varnishncsa.c \ varnishncsa_options.h varnishncsa_LDADD = \ $(top_builddir)/lib/libvarnishapi/libvarnishapi.la \ ${RT_LIBS} ${LIBM} varnish-7.5.0/bin/varnishncsa/flint.lnt000066400000000000000000000002541457605730600201350ustar00rootroot00000000000000// Copyright (c) 2006-2018 Varnish Software AS // SPDX-License-Identifier: BSD-2-Clause // See LICENSE file for full text of license -efile(451, "varnishncsa_options.h") varnish-7.5.0/bin/varnishncsa/flint.sh000066400000000000000000000003701457605730600177510ustar00rootroot00000000000000#!/bin/sh # # Copyright (c) 2006-2021 Varnish Software AS # SPDX-License-Identifier: BSD-2-Clause # See LICENSE file for full text of license FLOPS=' *.c ../../lib/libvarnishapi/flint.lnt ../../lib/libvarnishapi/*.c ' ../../tools/flint_skel.sh varnish-7.5.0/bin/varnishncsa/varnishncsa.c000066400000000000000000000647571457605730600210070ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2016 Varnish Software AS * All rights reserved. * * Author: Anders Berg * Author: Poul-Henning Kamp * Author: Tollef Fog Heen * Author: Martin Blix Grydeland * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Obtain log data from the shared memory log, order it by session ID, and * display it in Apache / NCSA combined log format. * * See doc/sphinx/reference/varnishncsa.rst for the supported format * specifiers. * */ #include "config.h" #include #include #include #include #include #include #include #include #include #include #include #define VOPT_DEFINITION #define VOPT_INC "varnishncsa_options.h" #include "vdef.h" #include "vapi/vsl.h" #include "vapi/voptget.h" #include "vas.h" #include "venc.h" #include "vsb.h" #include "vut.h" #include "vqueue.h" #include "miniobj.h" #define TIME_FMT "[%d/%b/%Y:%T %z]" #define FORMAT "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-agent}i\"" static struct VUT *vut; struct format; enum e_frag { F_H, /* %H Proto */ F_U, /* %U URL path */ F_q, /* %q Query string */ F_b, /* %b Body bytes sent */ F_h, /* %h Host name / IP Address */ F_m, /* %m Method */ F_s, /* %s Status */ F_I, /* %I Bytes received */ F_O, /* %O Bytes sent */ F_tstart, /* Time start */ F_tend, /* Time end */ F_ttfb, /* %{Varnish:time_firstbyte}x */ F_host, /* Host header */ F_auth, /* Authorization header */ F__MAX, }; struct fragment { unsigned gen; const char *b, *e; }; typedef int format_f(const struct format *format); struct format { unsigned magic; #define FORMAT_MAGIC 0xC3119CDA char time_type; VTAILQ_ENTRY(format) list; format_f *func; struct fragment *frag; char *string; const char *const *strptr; char *time_fmt; int64_t *int64; }; struct watch { unsigned magic; #define WATCH_MAGIC 0xA7D4005C VTAILQ_ENTRY(watch) list; char *key; int keylen; struct fragment frag; }; VTAILQ_HEAD(watch_head, watch); struct vsl_watch { unsigned magic; #define VSL_WATCH_MAGIC 0xE3E27D23 VTAILQ_ENTRY(vsl_watch) list; enum VSL_tag_e tag; int idx; char *prefix; int prefixlen; struct fragment frag; }; VTAILQ_HEAD(vsl_watch_head, vsl_watch); static struct ctx { /* Options */ int a_opt; char *w_arg; FILE *fo; struct vsb *vsb; unsigned gen; VTAILQ_HEAD(,format) format; int quote_how; char *missing_string; char *missing_int; /* State */ struct watch_head watch_vcl_log; struct watch_head watch_reqhdr; /* also bereqhdr */ struct watch_head watch_resphdr; /* also beresphdr */ struct vsl_watch_head watch_vsl; struct fragment frag[F__MAX]; const char *hitmiss; const char *handling; const char *side; int64_t vxid; } CTX; static void parse_format(const char *format); static void openout(int append) { AN(CTX.w_arg); if (!strcmp(CTX.w_arg, "-")) CTX.fo = stdout; else CTX.fo = fopen(CTX.w_arg, append ? 
"a" : "w"); if (CTX.fo == NULL) VUT_Error(vut, 1, "Can't open output file (%s)", strerror(errno)); } static int v_matchproto_(VUT_cb_f) rotateout(struct VUT *v) { assert(v == vut); AN(CTX.w_arg); AN(CTX.fo); (void)fclose(CTX.fo); openout(1); AN(CTX.fo); return (0); } static int v_matchproto_(VUT_cb_f) flushout(struct VUT *v) { assert(v == vut); AN(CTX.fo); if (fflush(CTX.fo)) return (-5); return (0); } static inline int vsb_fcat(struct vsb *vsb, const struct fragment *f, const char *dflt) { if (f->gen == CTX.gen) { assert(f->b <= f->e); VSB_quote(vsb, f->b, f->e - f->b, CTX.quote_how); } else if (dflt) VSB_quote(vsb, dflt, -1, CTX.quote_how); else return (-1); return (0); } static int v_matchproto_(format_f) format_string(const struct format *format) { CHECK_OBJ_NOTNULL(format, FORMAT_MAGIC); AN(format->string); AZ(VSB_cat(CTX.vsb, format->string)); return (1); } static int v_matchproto_(format_f) format_strptr(const struct format *format) { CHECK_OBJ_NOTNULL(format, FORMAT_MAGIC); AN(format->strptr); AN(*format->strptr); AZ(VSB_cat(CTX.vsb, *format->strptr)); return (1); } static int v_matchproto_(format_f) format_int64(const struct format *format) { CHECK_OBJ_NOTNULL(format, FORMAT_MAGIC); VSB_printf(CTX.vsb, "%jd", (intmax_t)*format->int64); return (1); } static int v_matchproto_(format_f) format_fragment(const struct format *format) { CHECK_OBJ_NOTNULL(format, FORMAT_MAGIC); AN(format->frag); if (format->frag->gen != CTX.gen) { if (format->string == NULL) return (-1); VSB_quote(CTX.vsb, format->string, -1, CTX.quote_how); return (0); } AZ(vsb_fcat(CTX.vsb, format->frag, NULL)); return (1); } static int v_matchproto_(format_f) format_time(const struct format *format) { double t_start, t_end, d; char *p; char buf[64]; time_t t; intmax_t l; struct tm tm; CHECK_OBJ_NOTNULL(format, FORMAT_MAGIC); if (CTX.frag[F_tstart].gen == CTX.gen) { t_start = strtod(CTX.frag[F_tstart].b, &p); if (p != CTX.frag[F_tstart].e) t_start = NAN; } else t_start = NAN; if (isnan(t_start)) { /* Missing t_start is a no go */ if (format->string == NULL) return (-1); AZ(VSB_cat(CTX.vsb, format->string)); return (0); } /* Missing t_end defaults to t_start */ if (CTX.frag[F_tend].gen == CTX.gen) { t_end = strtod(CTX.frag[F_tend].b, &p); if (p != CTX.frag[F_tend].e) t_end = t_start; } else t_end = t_start; AN(format->time_fmt); switch (format->time_type) { case 't': t = (intmax_t)floor(t_start); (void)localtime_r(&t, &tm); AN(strftime(buf, sizeof buf, format->time_fmt, &tm)); AZ(VSB_cat(CTX.vsb, buf)); return (1); case '3': l = (intmax_t)(modf(t_start, &d) * 1e3); break; case '6': l = (intmax_t)(modf(t_start, &d) * 1e6); break; case 'S': l = (intmax_t)t_start; break; case 'M': l = (intmax_t)(t_start * 1e3); break; case 'U': l = (intmax_t)(t_start * 1e6); break; case 's': l = (intmax_t)(t_end - t_start); break; case 'm': l = (intmax_t)((t_end - t_start) * 1e3); break; case 'u': l = (intmax_t)((t_end - t_start) * 1e6); break; default: WRONG("Time format specifier"); } #ifdef __FreeBSD__ assert(fmtcheck(format->time_fmt, "%jd") == format->time_fmt); #endif AZ(VSB_printf(CTX.vsb, format->time_fmt, l)); return (1); } static int v_matchproto_(format_f) format_requestline(const struct format *format) { (void)format; AZ(vsb_fcat(CTX.vsb, &CTX.frag[F_m], "-")); AZ(VSB_putc(CTX.vsb, ' ')); if (CTX.frag[F_host].gen == CTX.gen) { if (strncmp(CTX.frag[F_host].b, "http://", 7)) AZ(VSB_cat(CTX.vsb, "http://")); AZ(vsb_fcat(CTX.vsb, &CTX.frag[F_host], NULL)); } else AZ(VSB_cat(CTX.vsb, "http://localhost")); AZ(vsb_fcat(CTX.vsb, 
&CTX.frag[F_U], "")); AZ(vsb_fcat(CTX.vsb, &CTX.frag[F_q], "")); AZ(VSB_putc(CTX.vsb, ' ')); AZ(vsb_fcat(CTX.vsb, &CTX.frag[F_H], "HTTP/1.0")); return (1); } static int v_matchproto_(format_f) format_auth(const struct format *format) { struct vsb *vsb = VSB_new_auto(); AN(vsb); char *q; if (CTX.frag[F_auth].gen != CTX.gen || VENC_Decode_Base64(vsb, CTX.frag[F_auth].b, CTX.frag[F_auth].e)) { VSB_destroy(&vsb); if (format->string == NULL) return (-1); VSB_quote(CTX.vsb, format->string, -1, CTX.quote_how); return (0); } AZ(VSB_finish(vsb)); q = strchr(VSB_data(vsb), ':'); if (q != NULL) *q = '\0'; VSB_quote(CTX.vsb, VSB_data(vsb), -1, CTX.quote_how); VSB_destroy(&vsb); return (1); } static int print(void) { const struct format *f; int i, r = 1; VSB_clear(CTX.vsb); VTAILQ_FOREACH(f, &CTX.format, list) { CHECK_OBJ_NOTNULL(f, FORMAT_MAGIC); i = (f->func)(f); AZ(VSB_error(CTX.vsb)); if (r > i) r = i; } AZ(VSB_putc(CTX.vsb, '\n')); AZ(VSB_finish(CTX.vsb)); if (r >= 0) { i = fwrite(VSB_data(CTX.vsb), 1, VSB_len(CTX.vsb), CTX.fo); if (i != VSB_len(CTX.vsb)) return (-5); } return (0); } static void addf_string(const char *str) { struct format *f; AN(str); ALLOC_OBJ(f, FORMAT_MAGIC); AN(f); f->func = format_string; f->string = strdup(str); AN(f->string); VTAILQ_INSERT_TAIL(&CTX.format, f, list); } static void addf_strptr(const char *const *strptr) { struct format *f; AN(strptr); ALLOC_OBJ(f, FORMAT_MAGIC); AN(f); f->func = format_strptr; f->strptr = strptr; VTAILQ_INSERT_TAIL(&CTX.format, f, list); } static void addf_fragment(struct fragment *frag, const char *str) { struct format *f; AN(frag); ALLOC_OBJ(f, FORMAT_MAGIC); AN(f); f->func = format_fragment; f->frag = frag; if (str != NULL) { f->string = strdup(str); AN(f->string); } VTAILQ_INSERT_TAIL(&CTX.format, f, list); } static void addf_int64(int64_t *i) { struct format *f; AN(i); ALLOC_OBJ(f, FORMAT_MAGIC); AN(f); f->func = format_int64; f->int64 = i; VTAILQ_INSERT_TAIL(&CTX.format, f, list); } static void addf_time(char type, const char *fmt) { struct format *f; ALLOC_OBJ(f, FORMAT_MAGIC); AN(f); AN(fmt); f->func = format_time; f->time_type = type; f->time_fmt = strdup(fmt); if (f->time_type == 'T') { if (!strcmp(fmt, "s")) f->time_type = 's'; else if (!strcmp(fmt, "ms")) f->time_type = 'm'; else if (!strcmp(fmt, "us")) f->time_type = 'u'; else VUT_Error(vut, 1, "Unknown specifier: %%{%s}T", fmt); REPLACE(f->time_fmt, "%jd"); } else if (f->time_type == 't') { if (!strcmp(fmt, "sec")) { f->time_type = 'S'; REPLACE(f->time_fmt, "%jd"); } else if (!strncmp(fmt, "msec", 4)) { fmt += 4; if (!strcmp(fmt, "_frac")) { f->time_type = '3'; REPLACE(f->time_fmt, "%03jd"); } else if (*fmt == '\0') { f->time_type = 'M'; REPLACE(f->time_fmt, "%jd"); } } else if (!strncmp(fmt, "usec", 4)) { fmt += 4; if (!strcmp(fmt, "_frac")) { f->time_type = '6'; REPLACE(f->time_fmt, "%06jd"); } else if (*fmt == '\0') { f->time_type = 'U'; REPLACE(f->time_fmt, "%jd"); } } } AN(f->time_fmt); VTAILQ_INSERT_TAIL(&CTX.format, f, list); } static void addf_requestline(void) { struct format *f; ALLOC_OBJ(f, FORMAT_MAGIC); AN(f); f->func = format_requestline; VTAILQ_INSERT_TAIL(&CTX.format, f, list); } static void addf_vcl_log(const char *key) { struct watch *w; struct format *f; AN(key); ALLOC_OBJ(w, WATCH_MAGIC); AN(w); w->keylen = asprintf(&w->key, "%s:", key); assert(w->keylen > 0); VTAILQ_INSERT_TAIL(&CTX.watch_vcl_log, w, list); ALLOC_OBJ(f, FORMAT_MAGIC); AN(f); f->func = format_fragment; f->frag = &w->frag; f->string = strdup(""); AN(f->string); 
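	/*
	 * Editorial note: unlike addf_hdr() below, which falls back to
	 * CTX.missing_string ("-" by default), the fallback here is the
	 * empty string, so a %{VCL_Log:key}x specifier whose key never
	 * appears in the transaction is rendered as an empty value.
	 */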
VTAILQ_INSERT_TAIL(&CTX.format, f, list); } static void addf_hdr(struct watch_head *head, const char *key) { struct watch *w; struct format *f; AN(head); AN(key); ALLOC_OBJ(w, WATCH_MAGIC); AN(w); w->keylen = asprintf(&w->key, "%s:", key); assert(w->keylen > 0); VTAILQ_INSERT_TAIL(head, w, list); ALLOC_OBJ(f, FORMAT_MAGIC); AN(f); f->func = format_fragment; f->frag = &w->frag; f->string = strdup(CTX.missing_string); AN(f->string); VTAILQ_INSERT_TAIL(&CTX.format, f, list); } static void addf_vsl(enum VSL_tag_e tag, long i, const char *prefix) { struct vsl_watch *w; ALLOC_OBJ(w, VSL_WATCH_MAGIC); AN(w); if (VSL_tagflags[tag] && CTX.quote_how != VSB_QUOTE_JSON) VUT_Error(vut, 1, "Tag %s can contain control characters", VSL_tags[tag]); w->tag = tag; assert(i <= INT_MAX); w->idx = i; if (prefix != NULL) { w->prefixlen = asprintf(&w->prefix, "%s:", prefix); assert(w->prefixlen > 0); } VTAILQ_INSERT_TAIL(&CTX.watch_vsl, w, list); addf_fragment(&w->frag, CTX.missing_string); } static void addf_auth(void) { struct format *f; ALLOC_OBJ(f, FORMAT_MAGIC); AN(f); f->func = format_auth; f->string = strdup("-"); AN(f->string); VTAILQ_INSERT_TAIL(&CTX.format, f, list); } static void parse_x_format(char *buf) { char *e, *r, *s; long lval; int slt; if (!strcmp(buf, "Varnish:time_firstbyte")) { addf_fragment(&CTX.frag[F_ttfb], CTX.missing_int); return; } if (!strcmp(buf, "Varnish:hitmiss")) { addf_strptr(&CTX.hitmiss); return; } if (!strcmp(buf, "Varnish:handling")) { addf_strptr(&CTX.handling); return; } if (!strcmp(buf, "Varnish:side")) { addf_strptr(&CTX.side); return; } if (!strcmp(buf, "Varnish:vxid")) { addf_int64(&CTX.vxid); return; } if (!strncmp(buf, "VCL_Log:", 8)) { addf_vcl_log(buf + 8); return; } if (!strncmp(buf, "VSL:", 4)) { buf += 4; e = buf; while (*e != '\0') e++; if (e == buf) VUT_Error(vut, 1, "Missing tag in VSL:"); if (e[-1] == ']') { r = e - 1; while (r > buf && *r != '[') r--; if (r == buf || r[1] == ']') VUT_Error(vut, 1, "Syntax error: VSL:%s", buf); e[-1] = '\0'; lval = strtol(r + 1, &s, 10); if (s != e - 1) VUT_Error(vut, 1, "Syntax error: VSL:%s]", buf); if (lval <= 0 || lval > 255) { VUT_Error(vut, 1, "Syntax error. 
Field specifier must be" " between 1 and 255: %s]", buf); } *r = '\0'; } else lval = 0; r = buf; while (r < e && *r != ':') r++; if (r != e) { slt = VSL_Name2Tag(buf, r - buf); r++; } else { slt = VSL_Name2Tag(buf, -1); r = NULL; } if (slt == -2) VUT_Error(vut, 1, "Tag not unique: %s", buf); if (slt == -1) VUT_Error(vut, 1, "Unknown log tag: %s", buf); assert(slt >= 0); addf_vsl((enum VSL_tag_e)slt, lval, r); return; } if (!strcmp(buf, "Varnish:default_format")) { parse_format(FORMAT); return; } VUT_Error(vut, 1, "Unknown formatting extension: %s", buf); } static void parse_format(const char *format) { const char *p, *q; struct vsb *vsb; char buf[256]; int b; if (format == NULL) format = FORMAT; vsb = VSB_new_auto(); AN(vsb); for (p = format; *p != '\0'; p++) { /* Allow the most essential escape sequences in format */ if (*p == '\\' && p[1] != '\0') { if (*++p == 't') AZ(VSB_putc(vsb, '\t')); else if (*p == 'n') AZ(VSB_putc(vsb, '\n')); else AZ(VSB_putc(vsb, *p)); continue; } if (*p != '%') { AZ(VSB_putc(vsb, *p)); continue; } if (VSB_len(vsb) > 0) { AZ(VSB_finish(vsb)); addf_string(VSB_data(vsb)); VSB_clear(vsb); } p++; switch (*p) { case 'b': /* Body bytes sent */ addf_fragment(&CTX.frag[F_b], CTX.missing_int); break; case 'D': /* Float request time */ addf_time('T', "us"); break; case 'h': /* Client host name / IP Address */ addf_fragment(&CTX.frag[F_h], CTX.missing_string); break; case 'H': /* Protocol */ addf_fragment(&CTX.frag[F_H], "HTTP/1.0"); break; case 'I': /* Bytes received */ addf_fragment(&CTX.frag[F_I], CTX.missing_int); break; case 'l': /* Client user ID (identd) always '-' */ AZ(VSB_putc(vsb, '-')); break; case 'm': /* Method */ addf_fragment(&CTX.frag[F_m], CTX.missing_string); break; case 'O': /* Bytes sent */ addf_fragment(&CTX.frag[F_O], CTX.missing_int); break; case 'q': /* Query string */ addf_fragment(&CTX.frag[F_q], ""); break; case 'r': /* Request line */ addf_requestline(); break; case 's': /* Status code */ addf_fragment(&CTX.frag[F_s], CTX.missing_int); break; case 't': /* strftime */ addf_time(*p, TIME_FMT); break; case 'T': /* Int request time */ addf_time(*p, "s"); break; case 'u': /* Remote user from auth */ addf_auth(); break; case 'U': /* URL */ addf_fragment(&CTX.frag[F_U], CTX.missing_string); break; case '{': p++; q = p; b = 1; while (*q) { if (*q == '{') b++; else if (*q == '}') if (--b == 0) break; q++; } if (b > 0) VUT_Error(vut, 1, "Unmatched bracket at: %s", p - 2); assert((unsigned)(q - p) < sizeof buf - 1); strncpy(buf, p, q - p); buf[q - p] = '\0'; q++; switch (*q) { case 'i': addf_hdr(&CTX.watch_reqhdr, buf); break; case 'o': addf_hdr(&CTX.watch_resphdr, buf); break; case 't': addf_time(*q, buf); break; case 'T': addf_time(*q, buf); break; case 'x': parse_x_format(buf); break; default: VUT_Error(vut, 1, "Unknown format specifier at: %s", p - 2); } p = q; break; default: VUT_Error(vut, 1, "Unknown format specifier at: %s", p - 1); } } if (VSB_len(vsb) > 0) { /* Add any remaining static */ AZ(VSB_finish(vsb)); addf_string(VSB_data(vsb)); VSB_clear(vsb); } VSB_destroy(&vsb); } static int isprefix(const char *prefix, size_t len, const char *b, const char *e, const char **next) { assert(len > 0); if (e < b + len || strncasecmp(b, prefix, len)) return (0); b += len; if (next) { while (b < e && *b == ' ') b++; *next = b; } return (1); } static void frag_fields(int force, const char *b, const char *e, ...) 
{ va_list ap; const char *p, *q; int n, field; struct fragment *frag; AN(b); AN(e); va_start(ap, e); n = 0; while (1) { field = va_arg(ap, int); frag = va_arg(ap, struct fragment *); if (field == 0) { AZ(frag); break; } p = q = NULL; while (n < field) { while (b < e && isspace(*b)) b++; p = b; while (b < e && !isspace(*b)) b++; q = b; n++; } assert(p != NULL && q != NULL); if (p >= e || q <= p) continue; if (frag->gen != CTX.gen || force) { /* We only grab the same matching field once */ frag->gen = CTX.gen; frag->b = p; frag->e = q; } } va_end(ap); } static void frag_line(int force, const char *b, const char *e, struct fragment *f) { if (f->gen == CTX.gen && !force) /* We only grab the same matching record once */ return; /* Skip leading space */ while (b < e && isspace(*b)) ++b; /* Skip trailing space */ while (e > b && isspace(e[-1])) --e; f->gen = CTX.gen; f->b = b; f->e = e; } static void process_hdr(const struct watch_head *head, const char *b, const char *e) { struct watch *w; const char *p; VTAILQ_FOREACH(w, head, list) { CHECK_OBJ_NOTNULL(w, WATCH_MAGIC); if (!isprefix(w->key, w->keylen, b, e, &p)) continue; frag_line(1, p, e, &w->frag); } } static void process_vsl(const struct vsl_watch_head *head, enum VSL_tag_e tag, const char *b, const char *e) { struct vsl_watch *w; const char *p; VTAILQ_FOREACH(w, head, list) { CHECK_OBJ_NOTNULL(w, VSL_WATCH_MAGIC); if (tag != w->tag) continue; p = b; if (w->prefixlen > 0 && !isprefix(w->prefix, w->prefixlen, b, e, &p)) continue; if (w->idx == 0) frag_line(0, p, e, &w->frag); else frag_fields(0, p, e, w->idx, &w->frag, 0, NULL); } } static int v_matchproto_(VSLQ_dispatch_f) dispatch_f(struct VSL_data *vsl, struct VSL_transaction * const pt[], void *priv) { struct VSL_transaction *t; enum VSL_tag_e tag; const char *b, *e, *p; struct watch *w; int i, skip; (void)vsl; (void)priv; for (t = pt[0]; t != NULL; t = *++pt) { CTX.gen++; if (t->type == VSL_t_req) { CTX.side = "c"; } else if (t->type == VSL_t_bereq) { CTX.side = "b"; } else continue; CTX.hitmiss = "-"; CTX.handling = "-"; CTX.vxid = t->vxid; skip = 0; while (skip == 0 && 1 == VSL_Next(t->c)) { tag = VSL_TAG(t->c->rec.ptr); if (VSL_tagflags[tag] && CTX.quote_how != VSB_QUOTE_JSON) continue; b = VSL_CDATA(t->c->rec.ptr); e = b + VSL_LEN(t->c->rec.ptr); if (!VSL_tagflags[tag]) { while (e > b && e[-1] == '\0') e--; } switch (tag) { case SLT_HttpGarbage: skip = 1; break; case SLT_PipeAcct: frag_fields(0, b, e, 3, &CTX.frag[F_I], 4, &CTX.frag[F_O], 0, NULL); break; case SLT_BackendOpen: frag_fields(1, b, e, 3, &CTX.frag[F_h], 0, NULL); break; case SLT_ReqStart: frag_fields(0, b, e, 1, &CTX.frag[F_h], 0, NULL); break; case SLT_BereqMethod: case SLT_ReqMethod: frag_line(0, b, e, &CTX.frag[F_m]); break; case SLT_BereqURL: case SLT_ReqURL: p = memchr(b, '?', e - b); if (p == NULL) p = e; frag_line(0, b, p, &CTX.frag[F_U]); frag_line(0, p, e, &CTX.frag[F_q]); break; case SLT_BereqProtocol: case SLT_ReqProtocol: frag_line(0, b, e, &CTX.frag[F_H]); break; case SLT_BerespStatus: case SLT_RespStatus: frag_line(1, b, e, &CTX.frag[F_s]); break; case SLT_BereqAcct: case SLT_ReqAcct: frag_fields(0, b, e, 3, &CTX.frag[F_I], 5, &CTX.frag[F_b], 6, &CTX.frag[F_O], 0, NULL); break; case SLT_Timestamp: #define ISPREFIX(a, b, c, d) isprefix(a, strlen(a), b, c, d) if (ISPREFIX("Start:", b, e, &p)) { frag_fields(0, p, e, 1, &CTX.frag[F_tstart], 0, NULL); } else if (ISPREFIX("Resp:", b, e, &p) || ISPREFIX("PipeSess:", b, e, &p) || ISPREFIX("BerespBody:", b, e, &p)) { frag_fields(0, p, e, 1, &CTX.frag[F_tend], 0, 
NULL); } else if (ISPREFIX("Process:", b, e, &p) || ISPREFIX("Pipe:", b, e, &p) || ISPREFIX("Beresp:", b, e, &p)) { frag_fields(0, p, e, 2, &CTX.frag[F_ttfb], 0, NULL); } break; case SLT_BereqHeader: case SLT_ReqHeader: process_hdr(&CTX.watch_reqhdr, b, e); if (ISPREFIX("Authorization:", b, e, &p) && ISPREFIX("basic ", p, e, &p)) frag_line(0, p, e, &CTX.frag[F_auth]); else if (ISPREFIX("Host:", b, e, &p)) frag_line(0, p, e, &CTX.frag[F_host]); #undef ISPREFIX break; case SLT_BerespHeader: case SLT_RespHeader: process_hdr(&CTX.watch_resphdr, b, e); break; case SLT_VCL_call: if (!strcasecmp(b, "recv")) { CTX.hitmiss = "-"; CTX.handling = "-"; } else if (!strcasecmp(b, "hit")) { CTX.hitmiss = "hit"; CTX.handling = "hit"; } else if (!strcasecmp(b, "miss")) { CTX.hitmiss = "miss"; CTX.handling = "miss"; } else if (!strcasecmp(b, "pass")) { CTX.hitmiss = "miss"; CTX.handling = "pass"; } else if (!strcasecmp(b, "synth")) { /* Arguably, synth isn't a hit or a miss, but miss is less wrong */ CTX.hitmiss = "miss"; CTX.handling = "synth"; } break; case SLT_VCL_return: if (!strcasecmp(b, "pipe")) { CTX.hitmiss = "miss"; CTX.handling = "pipe"; } else if (!strcasecmp(b, "restart")) skip = 1; break; case SLT_VCL_Log: VTAILQ_FOREACH(w, &CTX.watch_vcl_log, list) { CHECK_OBJ_NOTNULL(w, WATCH_MAGIC); if (e - b < w->keylen || strncmp(b, w->key, w->keylen)) continue; p = b + w->keylen; frag_line(0, p, e, &w->frag); } break; default: break; } process_vsl(&CTX.watch_vsl, tag, b, e); } if (skip) continue; i = print(); if (i) return (i); } return (0); } static char * read_format(const char *formatfile) { FILE *fmtfile; size_t len = 0; int fmtlen; char *fmt = NULL; fmtfile = fopen(formatfile, "r"); if (fmtfile == NULL) VUT_Error(vut, 1, "Can't open format file (%s)", strerror(errno)); AN(fmtfile); fmtlen = getline(&fmt, &len, fmtfile); if (fmtlen == -1) { free(fmt); if (feof(fmtfile)) VUT_Error(vut, 1, "Empty format file"); VUT_Error(vut, 1, "Can't read format from file (%s)", strerror(errno)); } AZ(fclose(fmtfile)); if (fmt[fmtlen - 1] == '\n') fmt[fmtlen - 1] = '\0'; return (fmt); } int main(int argc, char * const *argv) { signed char opt; char *format = NULL; int mode_opt = 0; vut = VUT_InitProg(argc, argv, &vopt_spec); AN(vut); memset(&CTX, 0, sizeof CTX); VTAILQ_INIT(&CTX.format); VTAILQ_INIT(&CTX.watch_vcl_log); VTAILQ_INIT(&CTX.watch_reqhdr); VTAILQ_INIT(&CTX.watch_resphdr); VTAILQ_INIT(&CTX.watch_vsl); CTX.vsb = VSB_new_auto(); AN(CTX.vsb); CTX.quote_how = VSB_QUOTE_ESCHEX; REPLACE(CTX.missing_string, "-"); REPLACE(CTX.missing_int, "-"); tzset(); // We use localtime_r(3) while ((opt = getopt(argc, argv, vopt_spec.vopt_optstring)) != -1) { switch (opt) { case 'a': /* Append to file */ CTX.a_opt = 1; break; case 'b': /* backend mode */ case 'c': /* client mode */ case 'E': /* show ESI */ AN(VUT_Arg(vut, opt, NULL)); mode_opt = 1; break; case 'F': if (format != NULL) VUT_Error(vut, 1, "Format already set"); REPLACE(format, optarg); break; case 'f': if (format != NULL) VUT_Error(vut, 1, "Format already set"); /* Format string from file */ format = read_format(optarg); AN(format); break; case 'h': /* Usage help */ VUT_Usage(vut, &vopt_spec, 0); break; case 'j': REPLACE(CTX.missing_string, ""); REPLACE(CTX.missing_int, "0"); CTX.quote_how = VSB_QUOTE_JSON; break; case 'w': /* Write to file */ REPLACE(CTX.w_arg, optarg); break; default: if (!VUT_Arg(vut, opt, optarg)) VUT_Usage(vut, &vopt_spec, 1); } } /* default is client mode: */ if (!mode_opt) AN(VUT_Arg(vut, 'c', NULL)); if (optind != argc) VUT_Usage(vut, 
&vopt_spec, 1); if (vut->D_opt && !CTX.w_arg) VUT_Error(vut, 1, "Missing -w option"); if (vut->D_opt && !strcmp(CTX.w_arg, "-")) VUT_Error(vut, 1, "Daemon cannot write to stdout"); /* Check for valid grouping mode */ assert(vut->g_arg < VSL_g__MAX); if (vut->g_arg != VSL_g_vxid && vut->g_arg != VSL_g_request) VUT_Error(vut, 1, "Invalid grouping mode: %s", VSLQ_grouping[vut->g_arg]); /* Prepare output format */ parse_format(format); REPLACE(format, NULL); /* Setup output */ vut->dispatch_f = dispatch_f; vut->dispatch_priv = NULL; if (CTX.w_arg) { openout(CTX.a_opt); AN(CTX.fo); if (vut->D_opt) vut->sighup_f = rotateout; } else CTX.fo = stdout; vut->idle_f = flushout; VUT_Setup(vut); (void)VUT_Main(vut); VUT_Fini(&vut); exit(0); } varnish-7.5.0/bin/varnishncsa/varnishncsa_options.h000066400000000000000000000077441457605730600225600ustar00rootroot00000000000000/*- * Copyright (c) 2013 Varnish Software AS * All rights reserved. * * Author: Martin Blix Grydeland * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "vapi/vapi_options.h" #include "vut_options.h" #define NCSA_OPT_a \ VOPT("a", "[-a]", "Append to file", \ "When writing output to a file, append to it rather than" \ " overwrite it. This option has no effect without the -w" \ " option." \ ) #define NCSA_OPT_F \ VOPT("F:", "[-F ]", "Set output format", \ "Set the output log format string." \ ) #define NCSA_OPT_f \ VOPT("f:", "[-f ]", "Read output format from file", \ "Read output format from a file. Will read a single line" \ " from the specified file, and use that line as the" \ " format." \ ) #define NCSA_OPT_g \ VOPT("g:", "[-g ]", "Grouping mode (default: vxid)", \ "The grouping of the log records. The default is to group" \ " by vxid." \ ) #define NCSA_OPT_w \ VOPT("w:", "[-w ]", "Output filename", \ "Redirect output to file. The file will be overwritten" \ " unless the -a option was specified. If the application" \ " receives a SIGHUP in daemon mode the file will be" \ " reopened allowing the old one to be rotated away. This" \ " option is required when running in daemon mode. If the" \ " filename is -, varnishncsa writes to the standard output" \ " and cannot work as a daemon." \ ) #define NCSA_OPT_b \ VOPT("b", "[-b]", "Backend mode", \ "Log backend requests. 
If -c is not specified, then only" \ " backend requests will trigger log lines." \ ) #define NCSA_OPT_c \ VOPT("c", "[-c]", "Client mode", \ "Log client requests. This is the default. If -b is" \ " specified, then -c is needed to also log client requests" \ ) #define NCSA_OPT_E \ VOPT("E", "[-E]", "Show ESI requests", \ "Show ESI requests, implies client mode." \ ) #define NCSA_OPT_j \ VOPT("j", "[-j]", "Make output JSON-compatible", \ "Make format-specifier replacements JSON-compatible. When" \ " escaping characters, use JSON-style \\\\uXXXX escape" \ " sequences instead of C-style \\\\xXX sequences. Empty" \ " strings will be replaced with \"\" instead of \"-\", and" \ " empty integers will be replaced with null. Use -F or -f" \ " in combination with -j to write JSON logs." \ ) NCSA_OPT_a NCSA_OPT_b NCSA_OPT_c VSL_OPT_C VUT_OPT_d VUT_GLOBAL_OPT_D NCSA_OPT_E NCSA_OPT_F NCSA_OPT_f NCSA_OPT_g VUT_OPT_h NCSA_OPT_j VUT_OPT_k VSL_OPT_L VUT_OPT_n VUT_GLOBAL_OPT_P VUT_OPT_Q VUT_OPT_q VUT_OPT_r VSL_OPT_R VUT_OPT_t VUT_GLOBAL_OPT_V NCSA_OPT_w varnish-7.5.0/bin/varnishstat/000077500000000000000000000000001457605730600163305ustar00rootroot00000000000000varnish-7.5.0/bin/varnishstat/Makefile.am000066400000000000000000000014611457605730600203660ustar00rootroot00000000000000# AM_CPPFLAGS = \ -I$(top_srcdir)/include \ -I$(top_builddir)/include \ @CURSES_CFLAGS@ bin_PROGRAMS = varnishstat varnishstat_help_gen varnishstat_SOURCES = \ varnishstat.h \ varnishstat.c \ varnishstat_bindings.h \ varnishstat_curses.c \ varnishstat_options.h nodist_varnishstat_SOURCES = \ varnishstat_curses_help.c BUILT_SOURCES = $(nodist_varnishstat_SOURCES) DISTCLEANFILES = $(nodist_varnishstat_SOURCES) varnishstat_help_gen_SOURCES = \ varnishstat_help_gen.c \ varnishstat_bindings.h varnishstat_curses_help.c: varnishstat_help_gen $(AM_V_GEN) ./varnishstat_help_gen >$@_ @mv $@_ $@ varnishstat_LDADD = \ $(top_builddir)/lib/libvarnishapi/libvarnishapi.la \ @CURSES_LIBS@ ${RT_LIBS} ${LIBM} ${PTHREAD_LIBS} varnishstat_help_gen_LDADD = \ $(top_builddir)/lib/libvarnish/libvarnish.la varnish-7.5.0/bin/varnishstat/flint.lnt000066400000000000000000000004141457605730600201620ustar00rootroot00000000000000// Copyright (c) 2010-2019 Varnish Software AS // SPDX-License-Identifier: BSD-2-Clause // See LICENSE file for full text of license // +libh mgt_event.h -efile(451, varnishstat_options.h) // No include guard -efile(451, varnishstat_bindings.h) // No include guard varnish-7.5.0/bin/varnishstat/flint.sh000066400000000000000000000004641457605730600200040ustar00rootroot00000000000000#!/bin/sh # # Copyright (c) 2010-2021 Varnish Software AS # SPDX-License-Identifier: BSD-2-Clause # See LICENSE file for full text of license FLOPS=' varnishstat.c varnishstat_curses.c varnishstat_curses_help.c ../../lib/libvarnishapi/flint.lnt ../../lib/libvarnishapi/*.c ' ../../tools/flint_skel.sh varnish-7.5.0/bin/varnishstat/varnishstat.c000066400000000000000000000176521457605730600210550ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * Author: Dag-Erling Smørgrav * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Statistics output program */ #include "config.h" #include #include #include #include #include #include #include #include #define VOPT_DEFINITION #define VOPT_INC "varnishstat_options.h" #include "vapi/voptget.h" #include "vapi/vsl.h" #include "vdef.h" #include "vut.h" #include "varnishstat.h" static struct VUT *vut; int has_f = 0; /*--------------------------------------------------------------------*/ static int v_matchproto_(VSC_iter_f) do_xml_cb(void *priv, const struct VSC_point * const pt) { uint64_t val; (void)priv; if (pt == NULL) return (0); AZ(strcmp(pt->ctype, "uint64_t")); val = VSC_Value(pt); printf("\t\n"); printf("\t\t%s\n", pt->name); printf("\t\t%ju\n", (uintmax_t)val); printf("\t\t%c\n", pt->semantics); printf("\t\t%c\n", pt->format); printf("\t\t%s\n", pt->sdesc); printf("\t\n"); return (0); } static void do_xml(struct vsm *vsm, struct vsc *vsc) { char time_stamp[20]; time_t now; printf("\n"); now = time(NULL); (void)strftime(time_stamp, 20, "%Y-%m-%dT%H:%M:%S", localtime(&now)); printf("\n", time_stamp); (void)VSC_Iter(vsc, vsm, do_xml_cb, NULL); printf("\n"); } /*--------------------------------------------------------------------*/ static int v_matchproto_(VSC_iter_f) do_json_cb(void *priv, const struct VSC_point * const pt) { const char **sep; uintmax_t val; if (pt == NULL) return (0); AZ(strcmp(pt->ctype, "uint64_t")); val = (uintmax_t)VSC_Value(pt); sep = priv; printf( "%s" " \"%s\": {\n" " \"description\": \"%s\",\n" " \"flag\": \"%c\",\n" " \"format\": \"%c\",\n" " \"value\": %ju\n" " }", *sep, pt->name, pt->sdesc, pt->semantics, pt->format, val); *sep = ",\n"; return (0); } static void do_json(struct vsm *vsm, struct vsc *vsc) { const char *sep; char time_stamp[20]; time_t now; sep = ""; now = time(NULL); (void)strftime(time_stamp, 20, "%Y-%m-%dT%H:%M:%S", localtime(&now)); printf( "{\n" " \"version\": 1,\n" " \"timestamp\": \"%s\",\n" " \"counters\": {\n", time_stamp); (void)VSC_Iter(vsc, vsm, do_json_cb, &sep); printf( "\n" " }\n" "}\n"); } /*--------------------------------------------------------------------*/ struct once_priv { double up; int pad; }; static int v_matchproto_(VSC_iter_f) do_once_cb_first(void *priv, const struct VSC_point * const pt) { struct once_priv *op; uint64_t val; if (pt == NULL) return (0); op = priv; AZ(strcmp(pt->ctype, "uint64_t")); if (strcmp(pt->name, "MAIN.uptime")) return (0); val = VSC_Value(pt); op->up = (double)val; return (1); } static int v_matchproto_(VSC_iter_f) do_once_cb(void *priv, const struct VSC_point * const pt) { struct once_priv *op; uint64_t val; int i; if (pt 
== NULL) return (0); op = priv; AZ(strcmp(pt->ctype, "uint64_t")); val = VSC_Value(pt); i = 0; i += printf("%s", pt->name); if (i >= op->pad) op->pad = i + 1; printf("%*.*s", op->pad - i, op->pad - i, ""); if (pt->semantics == 'c') printf("%12ju %12.2f %s\n", (uintmax_t)val, op->up ? val / op->up : 0, pt->sdesc); else printf("%12ju %12s %s\n", (uintmax_t)val, ". ", pt->sdesc); return (0); } static void do_once(struct vsm *vsm, struct vsc *vsc) { struct vsc *vsconce = VSC_New(); struct once_priv op; AN(vsconce); AN(VSC_Arg(vsconce, 'f', "MAIN.uptime")); memset(&op, 0, sizeof op); op.pad = 18; (void)VSC_Iter(vsconce, vsm, do_once_cb_first, &op); VSC_Destroy(&vsconce, vsm); (void)VSC_Iter(vsc, vsm, do_once_cb, &op); } /*--------------------------------------------------------------------*/ static int v_matchproto_(VSC_iter_f) do_list_cb(void *priv, const struct VSC_point * const pt) { int i; (void)priv; if (pt == NULL) return (0); i = 0; i += printf("%s", pt->name); if (i < 30) printf("%*s", i - 30, ""); printf(" %s\n", pt->sdesc); return (0); } static void list_fields(struct vsm *vsm, struct vsc *vsc) { printf("Varnishstat -f option fields:\n"); printf("Field name Description\n"); printf("---------- -----------\n"); (void)VSC_Iter(vsc, vsm, do_list_cb, NULL); } /*--------------------------------------------------------------------*/ static void v_noreturn_ usage(int status) { const char **opt; fprintf(stderr, "Usage: %s \n\n", vut->progname); fprintf(stderr, "Options:\n"); for (opt = vopt_spec.vopt_usage; *opt != NULL; opt +=2) fprintf(stderr, " %-25s %s\n", *opt, *(opt + 1)); exit(status); } static int key_bindings(void) { #define BINDING_KEY(chr, name, next) \ printf("<%s>" next, name); #define BINDING(name, desc) \ printf("\n%s\n\n", desc); #include "varnishstat_bindings.h" return (0); } int main(int argc, char * const *argv) { struct vsm *vd; int once = 0, xml = 0, json = 0, f_list = 0, curses = 0; signed char opt; int i; struct vsc *vsc; if (argc == 2 && !strcmp(argv[1], "--bindings")) exit(key_bindings()); vut = VUT_InitProg(argc, argv, &vopt_spec); AN(vut); vd = VSM_New(); AN(vd); vsc = VSC_New(); AN(vsc); while ((opt = getopt(argc, argv, vopt_spec.vopt_optstring)) != -1) { switch (opt) { case '1': once = 1; break; case 'h': /* Usage help */ usage(0); break; case 'l': f_list = 1; break; case 'x': xml = 1; break; case 'j': json = 1; break; case 'I': case 'X': case 'f': AN(VSC_Arg(vsc, opt, optarg)); has_f = 1; break; case 'r': AN(VSC_Arg(vsc, opt, optarg)); break; case 'V': AN(VUT_Arg(vut, opt, optarg)); break; default: i = VSM_Arg(vd, opt, optarg); if (i < 0) VUT_Error(vut, 1, "%s", VSM_Error(vd)); if (!i) usage(1); } } if (optind != argc) usage(1); if (!(xml || json || once || f_list)) curses = 1; if (VSM_Attach(vd, STDERR_FILENO)) VUT_Error(vut, 1, "%s", VSM_Error(vd)); if (curses) { if (has_f) { AN(VSC_Arg(vsc, 'R', "MGT.uptime")); AN(VSC_Arg(vsc, 'R', "MAIN.uptime")); AN(VSC_Arg(vsc, 'R', "MAIN.cache_hit")); AN(VSC_Arg(vsc, 'R', "MAIN.cache_miss")); } do_curses(vd, vsc); } else if (xml) do_xml(vd, vsc); else if (json) do_json(vd, vsc); else if (once) do_once(vd, vsc); else if (f_list) list_fields(vd, vsc); else WRONG("undefined varnishstat mode"); exit(0); } varnish-7.5.0/bin/varnishstat/varnishstat.h000066400000000000000000000033721457605730600210540ustar00rootroot00000000000000/*- * Copyright (c) 2010-2014 Varnish Software AS * All rights reserved. 
* * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include #include "vapi/vsm.h" #include "vapi/vsc.h" #include "vas.h" #include "vcs.h" /* varnishstat.c */ extern int has_f; /* varnishstat_curses.c */ void do_curses(struct vsm *, struct vsc *); /* varnishstat_curses_help.c */ extern const char *const bindings_help[]; extern const int bindings_help_len; varnish-7.5.0/bin/varnishstat/varnishstat.xsd000066400000000000000000000011721457605730600214170ustar00rootroot00000000000000 varnish-7.5.0/bin/varnishstat/varnishstat_bindings.h000066400000000000000000000074561457605730600227400ustar00rootroot00000000000000/*- * Copyright (c) 2019 Varnish Software AS * All rights reserved. * * Author: Dridi Boukelmoune * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ /*lint -save -e525 -e539 */ #ifndef BINDING_KEY # define BINDING_KEY(key, name, next) #endif #define BINDING_CTRL(c) ((c) & 0x1f) BINDING_KEY('h', "h",) BINDING(HELP, "\tToggle the help screen.") BINDING_KEY(KEY_UP, "UP", " or ") BINDING_KEY('k', "k",) BINDING(UP, "\tNavigate the counter list one line up.") BINDING_KEY(KEY_DOWN, "DOWN", " or ") BINDING_KEY('j', "j",) BINDING(DOWN, "\tNavigate the counter list one line down.") BINDING_KEY(KEY_PPAGE, "PAGEUP", " or ") BINDING_KEY('b', "b", " or ") BINDING_KEY(BINDING_CTRL('b'), "CTRL-B",) BINDING(PAGEUP, "\tNavigate the counter list one page up.") BINDING_KEY(KEY_NPAGE, "PAGEDOWN", " or ") BINDING_KEY(' ', "SPACE", " or ") BINDING_KEY(BINDING_CTRL('f'), "CTRL-F",) BINDING(PAGEDOWN, "\tNavigate the counter list one page down.") BINDING_KEY(KEY_HOME, "HOME", " or ") BINDING_KEY('g', "g",) BINDING(TOP, "\tNavigate the counter list to the top.") BINDING_KEY(KEY_END, "END", " or ") BINDING_KEY('G', "G",) BINDING(BOTTOM, "\tNavigate the counter list to the bottom.") BINDING_KEY('d', "d",) BINDING(UNSEEN, "\tToggle between showing and hiding unseen counters. Unseen\n" "\tcounters are those that has been zero for the entire runtime\n" "\tof varnishstat. Defaults to hide unseen counters." ) BINDING_KEY('r', "r",) BINDING(RAW, "\tToggle between showing raw and adjusted gauges. When a gauge\n" "\tis decremented faster than it is incremented, it may appear as\n" "\ta large integer with its most significant bit set. By default\n" "\tsuch values are adjusted to zero." ) BINDING_KEY('e', "e",) BINDING(SCALE, "\tToggle scaling of values.") BINDING_KEY('v', "v",) BINDING(VERBOSE, "\tIncrease verbosity. Defaults to only showing informational\n" "\tcounters." ) BINDING_KEY('V', "V",) BINDING(QUIET, "\tDecrease verbosity. Defaults to only showing informational\n" "\tcounters." ) BINDING_KEY('q', "q",) BINDING(QUIT, "\tQuit.") BINDING_KEY(BINDING_CTRL('t'), "CTRL+T",) BINDING(SAMPLE, "\tSample now.") BINDING_KEY('+', "+",) BINDING(ACCEL, "\tIncrease refresh interval.") BINDING_KEY('-', "-",) BINDING(DECEL, "\tDecrease refresh interval.") #ifdef BINDING_SIG BINDING_KEY(BINDING_CTRL('c'), "CTRL+C",) BINDING(SIG_INT, "") BINDING_KEY(BINDING_CTRL('z'), "CTRL+Z",) BINDING(SIG_TSTP, "") # undef BINDING_SIG #endif #undef BINDING_KEY #undef BINDING_CTRL #undef BINDING /*lint -restore */ varnish-7.5.0/bin/varnishstat/varnishstat_curses.c000066400000000000000000000556041457605730600224400ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * Author: Dag-Erling Smørgrav * Author: Martin Blix Grydeland * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Statistics output program */ #include "config.h" #include #include #include #include #include #include "vdef.h" #include "vas.h" #include "miniobj.h" #include "vqueue.h" #include "vtim.h" #include "vapi/vsig.h" #include "varnishstat.h" #include "vcurses.h" #define LINES_STATUS 3 #define LINES_BAR_T 1 #define LINES_BAR_B 1 #define LINES_INFO 3 #define LINES_POINTS_MIN 3 #define N_COL 6 #define COLW 14 #define COLW_NAME_MIN 24 #define VALUE_MAX 999999999999 #define REBUILD_NEXT (1u << 0) #define REBUILD_FIRST (1u << 1) enum kb_e { #define BINDING(name, desc) KB_ ## name, #define BINDING_SIG #include "varnishstat_bindings.h" }; struct ma { unsigned n, nmax; double acc; }; struct pt { unsigned magic; #define PT_MAGIC 0x41698E4F VTAILQ_ENTRY(pt) list; const struct VSC_point *vpt; char seen; uint64_t cur, last; double t_cur, t_last; double chg, avg; struct ma ma_10, ma_100, ma_1000; }; struct hitrate { uint64_t lhit, lmiss; struct ma hr_10; struct ma hr_100; struct ma hr_1000; }; static struct hitrate hitrate; static VTAILQ_HEAD(, pt) ptlist = VTAILQ_HEAD_INITIALIZER(ptlist); static int n_ptlist = 0; static int n_ptarray = 0; static struct pt **ptarray = NULL; static const volatile uint64_t *mgt_uptime; static const volatile uint64_t *main_uptime; static const volatile uint64_t *main_cache_hit; static const volatile uint64_t *main_cache_miss; static int l_status, l_bar_t, l_points, l_bar_b, l_info; static unsigned colw_name = COLW_NAME_MIN; static WINDOW *w_status = NULL; static WINDOW *w_bar_t = NULL; static WINDOW *w_points = NULL; static WINDOW *w_bar_b = NULL; static WINDOW *w_info = NULL; static const struct VSC_level_desc *verbosity; static int show_help = 0; static int help_line = 0; static int keep_running = 1; static int hide_unseen = 1; static int raw_vsc = 0; static int page_start = 0; static int current = 0; static int rebuild = 0; static int redraw = 0; static int sample = 0; static int scale = 1; static double t_sample = 0.; static double interval = 1.; static unsigned vsm_status = 0; #define NOTIF_MAXLEN 256 static char notification_message[NOTIF_MAXLEN] = ""; static vtim_mono notification_eol = 0.0; static void init_hitrate(void) { memset(&hitrate, 0, sizeof (struct hitrate)); if (main_cache_hit != NULL) { hitrate.lhit = *main_cache_hit; hitrate.lmiss = *main_cache_miss; } hitrate.hr_10.nmax = 10; hitrate.hr_100.nmax = 100; hitrate.hr_1000.nmax = 1000; } static void update_ma(struct ma *ma, double val) { AN(ma); AN(ma->nmax); if (ma->n < ma->nmax) ma->n++; ma->acc += (val - ma->acc) / (double)ma->n; } static void update_position(void) { int old_current, old_page_start; old_current = current; old_page_start = page_start; if (n_ptarray == 0) { current = 0; page_start = 0; } else { current = vlimit_t(int, current, 0, n_ptarray - 1); page_start = vmin(page_start, current); if (current > page_start + (l_points - 1)) page_start = current - (l_points - 1); page_start = vlimit_t(int, page_start, 0, n_ptarray - 1); } if (current != old_current || page_start != old_page_start) 
redraw = 1; } static void delete_pt_array(void) { if (ptarray != NULL) free(ptarray); ptarray = NULL; n_ptarray = 0; update_position(); } static void build_pt_array(void) { int i; struct pt *pt; struct pt *pt_current = NULL; int current_line = 0; if (current < n_ptarray) { pt_current = ptarray[current]; current_line = current - page_start; } if (ptarray != NULL) delete_pt_array(); AZ(n_ptarray); ptarray = calloc(n_ptlist, sizeof *ptarray); AN(ptarray); VTAILQ_FOREACH(pt, &ptlist, list) { CHECK_OBJ_NOTNULL(pt, PT_MAGIC); if (pt->vpt->level > verbosity) { if (has_f && (rebuild & REBUILD_FIRST)) verbosity = VSC_ChangeLevel(verbosity, pt->vpt->level - verbosity); else continue; } if (!pt->seen && hide_unseen) continue; assert(n_ptarray < n_ptlist); ptarray[n_ptarray++] = pt; } assert(n_ptarray <= n_ptlist); for (i = 0; pt_current != NULL && i < n_ptarray; i++) if (ptarray[i] == pt_current) break; current = i; page_start = current - current_line; update_position(); rebuild = 0; redraw = 1; } static void sample_points(void) { struct pt *pt; uint64_t v; VTAILQ_FOREACH(pt, &ptlist, list) { AN(pt->vpt); AN(pt->vpt->ptr); v = VSC_Value(pt->vpt); if (v == 0 && !pt->seen) continue; if (!pt->seen) { pt->seen = 1; rebuild = REBUILD_NEXT; } pt->last = pt->cur; pt->cur = v; pt->t_last = pt->t_cur; pt->t_cur = VTIM_mono(); if (pt->t_last) pt->chg = ((int64_t)pt->cur - (int64_t)pt->last) / (pt->t_cur - pt->t_last); if (pt->vpt->semantics == 'g') { pt->avg = 0.; update_ma(&pt->ma_10, (int64_t)pt->cur); update_ma(&pt->ma_100, (int64_t)pt->cur); update_ma(&pt->ma_1000, (int64_t)pt->cur); } else if (pt->vpt->semantics == 'c') { if (main_uptime != NULL && *main_uptime) pt->avg = pt->cur / (double)*main_uptime; else pt->avg = 0.; if (pt->t_last) { update_ma(&pt->ma_10, pt->chg); update_ma(&pt->ma_100, pt->chg); update_ma(&pt->ma_1000, pt->chg); } } } } static void sample_hitrate(void) { double hr, mr, ratio; uint64_t hit, miss; if (main_cache_hit == NULL) return; hit = *main_cache_hit; miss = *main_cache_miss; hr = hit - hitrate.lhit; mr = miss - hitrate.lmiss; hitrate.lhit = hit; hitrate.lmiss = miss; if (hr + mr != 0) ratio = hr / (hr + mr); else ratio = 0; update_ma(&hitrate.hr_10, ratio); update_ma(&hitrate.hr_100, ratio); update_ma(&hitrate.hr_1000, ratio); } static void sample_data(void) { t_sample = VTIM_mono(); sample = 0; redraw = 1; sample_points(); sample_hitrate(); } static void destroy_window(WINDOW **w) { AN(w); if (*w == NULL) return; assert(delwin(*w) != ERR); *w = NULL; } static void make_windows(void) { int Y, X; int y; int y_status, y_bar_t, y_points, y_bar_b, y_info; destroy_window(&w_status); destroy_window(&w_bar_t); destroy_window(&w_points); destroy_window(&w_bar_b); destroy_window(&w_info); Y = LINES; X = COLS; l_status = LINES_STATUS; l_bar_t = LINES_BAR_T; l_bar_b = LINES_BAR_B; l_info = LINES_INFO; l_points = Y - (l_status + l_bar_t + l_bar_b + l_info); if (l_points < LINES_POINTS_MIN) { l_points += l_info; l_info = 0; } l_points = vmax(l_points, LINES_POINTS_MIN); y = 0; y_status = y; y += l_status; y_bar_t = y; y += l_bar_t; y_points = y; y += l_points; y_bar_b = y; y += l_bar_b; y_info = y; y += l_info; assert(y >= Y); w_status = newwin(l_status, X, y_status, 0); AN(w_status); nodelay(w_status, 1); keypad(w_status, 1); wnoutrefresh(w_status); w_bar_t = newwin(l_bar_t, X, y_bar_t, 0); AN(w_bar_t); wbkgd(w_bar_t, A_REVERSE); wnoutrefresh(w_bar_t); w_points = newwin(l_points, X, y_points, 0); AN(w_points); wnoutrefresh(w_points); w_bar_b = newwin(l_bar_b, X, y_bar_b, 0); AN(w_bar_b); 
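	/*
	 * Editorial note: as with w_bar_t above, the bottom bar gets a
	 * reverse-video background so it reads as a separator between the
	 * counter list (w_points) and the optional info window below it.
	 */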
wbkgd(w_bar_b, A_REVERSE); wnoutrefresh(w_bar_b); if (l_info) { w_info = newwin(l_info, X, y_info, 0); AN(w_info); wnoutrefresh(w_info); } if (X - COLW_NAME_MIN > N_COL * COLW) colw_name = X - (N_COL * COLW); else colw_name = COLW_NAME_MIN; redraw = 1; } static void print_duration(WINDOW *w, uint64_t t) { wprintw(w, "%4ju+%02ju:%02ju:%02ju", (uintmax_t)t / 86400, (uintmax_t)(t % 86400) / 3600, (uintmax_t)(t % 3600) / 60, (uintmax_t)t % 60); } static void running(WINDOW *w, uint64_t up, int flg) { if (vsm_status & flg) { print_duration(w_status, up); } else { wattron(w, A_STANDOUT); wprintw(w, " Not Running"); wattroff(w, A_STANDOUT); } } static void draw_status(void) { uint64_t up_mgt = 0; uint64_t up_chld = 0; AN(w_status); werase(w_status); if (mgt_uptime != NULL) up_mgt = *mgt_uptime; if (main_uptime != NULL) up_chld = *main_uptime; mvwprintw(w_status, 0, 0, "Uptime mgt: "); running(w_status, up_mgt, VSM_MGT_RUNNING); mvwprintw(w_status, 1, 0, "Uptime child: "); running(w_status, up_chld, VSM_WRK_RUNNING); mvwprintw(w_status, 2, 0, "Press to toggle help screen"); if (VTIM_mono() < notification_eol) mvwaddstr(w_status, 2, 0, notification_message); if (COLS > 70) { mvwprintw(w_status, 0, getmaxx(w_status) - 37, "Hitrate n: %8u %8u %8u", hitrate.hr_10.n, hitrate.hr_100.n, hitrate.hr_1000.n); mvwprintw(w_status, 1, getmaxx(w_status) - 37, " avg(n): %8.4f %8.4f %8.4f", hitrate.hr_10.acc, hitrate.hr_100.acc, hitrate.hr_1000.acc); } wnoutrefresh(w_status); } static void draw_bar_t(void) { int X, x; enum { COL_CUR, COL_CHG, COL_AVG, COL_MA10, COL_MA100, COL_MA1000, COL_LAST } col; AN(w_bar_t); X = getmaxx(w_bar_t); x = 0; werase(w_bar_t); if (page_start > 0) mvwprintw(w_bar_t, 0, x, "^^^"); x += 4; mvwprintw(w_bar_t, 0, x, "%.*s", colw_name - 4, "NAME"); x += colw_name - 4; col = COL_CUR; while (col < COL_LAST) { if (X - x < COLW) break; switch (col) { case COL_CUR: mvwprintw(w_bar_t, 0, x, " %12.12s", "CURRENT"); break; case COL_CHG: mvwprintw(w_bar_t, 0, x, " %12.12s", "CHANGE"); break; case COL_AVG: mvwprintw(w_bar_t, 0, x, " %12.12s", "AVERAGE"); break; case COL_MA10: mvwprintw(w_bar_t, 0, x, " %12.12s", "AVG_10"); break; case COL_MA100: mvwprintw(w_bar_t, 0, x, " %12.12s", "AVG_100"); break; case COL_MA1000: mvwprintw(w_bar_t, 0, x, " %12.12s", "AVG_1000"); break; default: break; } x += COLW; col++; } wnoutrefresh(w_bar_t); } static void draw_line_default(WINDOW *w, int y, int x, int X, const struct pt *pt) { enum { COL_CUR, COL_CHG, COL_AVG, COL_MA10, COL_MA100, COL_MA1000, COL_LAST } col; AN(w); AN(pt); col = COL_CUR; while (col < COL_LAST) { if (X - x < COLW) break; switch (col) { case COL_CUR: mvwprintw(w, y, x, " %12ju", (uintmax_t)pt->cur); break; case COL_CHG: if (pt->t_last) mvwprintw(w, y, x, " %12.2f", pt->chg); else mvwprintw(w, y, x, " %12s", ". "); break; case COL_AVG: if (pt->avg) mvwprintw(w, y, x, " %12.2f", pt->avg); else mvwprintw(w, y, x, " %12s", ". "); break; case COL_MA10: mvwprintw(w, y, x, " %12.2f", pt->ma_10.acc); break; case COL_MA100: mvwprintw(w, y, x, " %12.2f", pt->ma_100.acc); break; case COL_MA1000: mvwprintw(w, y, x, " %12.2f", pt->ma_1000.acc); break; default: break; } x += COLW; col++; } } static double scale_bytes(double val, char *q) { const char *p; for (p = " KMGTPEZY"; *p; p++) { if (fabs(val) < 1024.) 
break; val /= 1024.; } *q = *p; return (val); } static void print_bytes(WINDOW *w, double val) { char q = ' '; if (scale) val = scale_bytes(val, &q); wprintw(w, " %12.2f%c", val, q); } static void print_trunc(WINDOW *w, uintmax_t val) { if (val > VALUE_MAX) { while (val > VALUE_MAX) val /= 1000; wprintw(w, " %9ju...", val); } else wprintw(w, " %12ju", val); } static void draw_line_bytes(WINDOW *w, int y, int x, int X, const struct pt *pt) { enum { COL_CUR, COL_CHG, COL_AVG, COL_MA10, COL_MA100, COL_MA1000, COL_LAST } col; AN(w); AN(pt); col = COL_CUR; while (col < COL_LAST) { if (X - x < COLW) break; wmove(w, y, x); switch (col) { case COL_CUR: if (scale && pt->cur > 1024) print_bytes(w, (double)pt->cur); else print_trunc(w, (uintmax_t)pt->cur); break; case COL_CHG: if (pt->t_last) print_bytes(w, pt->chg); else wprintw(w, " %12s", ". "); break; case COL_AVG: if (pt->avg) print_bytes(w, pt->avg); else wprintw(w, " %12s", ". "); break; case COL_MA10: print_bytes(w, pt->ma_10.acc); break; case COL_MA100: print_bytes(w, pt->ma_100.acc); break; case COL_MA1000: print_bytes(w, pt->ma_1000.acc); break; default: break; } x += COLW; col++; } } static void draw_line_bitmap(WINDOW *w, int y, int x, int X, const struct pt *pt) { unsigned ch; enum { COL_VAL, COL_MAP, COL_LAST } col; AN(w); AN(pt); assert(pt->vpt->format == 'b'); col = COL_VAL; while (col < COL_LAST) { switch (col) { case COL_VAL: if (X - x < COLW) return; mvwprintw(w, y, x, " %10.10jx", (uintmax_t)((pt->cur >> 24) & 0xffffffffffLL)); x += COLW; break; case COL_MAP: if (X - x < 2 * COLW) return; x += (2 * COLW) - 24; for (ch = 0x800000; ch; ch >>= 1) { if (pt->cur & ch) mvwaddch(w, y, x, 'V'); else mvwaddch(w, y, x, '_'); x++; } break; default: break; } col++; } } static void draw_line_duration(WINDOW *w, int y, int x, int X, const struct pt *pt) { enum { COL_DUR, COL_LAST } col; AN(w); AN(pt); col = COL_DUR; while (col < COL_LAST) { if (X - x < COLW) break; switch (col) { case COL_DUR: wmove(w, y, x); if (scale) print_duration(w, pt->cur); else wprintw(w, " %12ju", (uintmax_t)pt->cur); break; default: break; } x += COLW; col++; } } static void draw_line(WINDOW *w, int y, const struct pt *pt) { int x, X; assert(colw_name >= COLW_NAME_MIN); X = getmaxx(w); x = 0; if (strlen(pt->vpt->name) > colw_name) mvwprintw(w, y, x, "%.*s...", colw_name - 3, pt->vpt->name); else mvwprintw(w, y, x, "%.*s", colw_name, pt->vpt->name); x += colw_name; switch (pt->vpt->format) { case 'b': draw_line_bitmap(w, y, x, X, pt); break; case 'B': draw_line_bytes(w, y, x, X, pt); break; case 'd': draw_line_duration(w, y, x, X, pt); break; default: draw_line_default(w, y, x, X, pt); break; } } static void draw_points(void) { int line; int n; AN(w_points); werase(w_points); if (n_ptarray == 0) { wnoutrefresh(w_points); return; } assert(current >= 0); assert(current < n_ptarray); assert(page_start >= 0); assert(page_start < n_ptarray); assert(current >= page_start); assert(current - page_start < l_points); for (line = 0; line < l_points; line++) { n = line + page_start; if (n >= n_ptarray) break; if (n == current) wattron(w_points, A_BOLD); draw_line(w_points, line, ptarray[n]); if (n == current) wattroff(w_points, A_BOLD); } wnoutrefresh(w_points); } static void draw_help(void) { const char *const *p; int l, y, X; if (l_points >= bindings_help_len) { assert(help_line == 0); l = bindings_help_len; } else { assert(help_line >= 0); assert(help_line <= bindings_help_len - l_points); l = l_points; } X = getmaxx(w_points); werase(w_points); for (y = 0, p = bindings_help + 
help_line; y < l; y++, p++) { if (**p == '\t') { mvwprintw(w_points, y, 0, " %.*s", X - 4, *p + 1); } else { wattron(w_points, A_BOLD); mvwprintw(w_points, y, 0, "%.*s", X, *p); wattroff(w_points, A_BOLD); } } wnoutrefresh(w_points); } static void draw_bar_b(void) { int x, X; char buf[64]; AN(w_bar_b); x = 0; X = getmaxx(w_bar_b); werase(w_bar_b); if (page_start + l_points < n_ptarray) mvwprintw(w_bar_b, 0, x, "vvv"); x += 4; if (current < n_ptarray) mvwprintw(w_bar_b, 0, x, "%s", ptarray[current]->vpt->name); bprintf(buf, "%d-%d/%d", page_start + 1, page_start + l_points < n_ptarray ? page_start + l_points : n_ptarray, n_ptarray); mvwprintw(w_bar_b, 0, X - strlen(buf), "%s", buf); X -= strlen(buf) + 2; if (verbosity != NULL) { mvwprintw(w_bar_b, 0, X - strlen(verbosity->label), "%s", verbosity->label); X -= strlen(verbosity->label) + 2; } if (!hide_unseen) { mvwprintw(w_bar_b, 0, X - 6, "%s", "UNSEEN"); X -= 8; } if (raw_vsc) mvwprintw(w_bar_b, 0, X - 3, "%s", "RAW"); wnoutrefresh(w_bar_b); } static void draw_info(void) { if (w_info == NULL) return; werase(w_info); if (current < n_ptarray) { /* XXX: Word wrapping, and overflow handling? */ mvwprintw(w_info, 0, 0, "%s:", ptarray[current]->vpt->sdesc); mvwprintw(w_info, 1, 0, "%s", ptarray[current]->vpt->ldesc); } wnoutrefresh(w_info); } static void draw_screen(void) { draw_status(); if (show_help) { werase(w_bar_t); werase(w_bar_b); werase(w_info); wnoutrefresh(w_bar_t); wnoutrefresh(w_bar_b); wnoutrefresh(w_info); draw_help(); } else { draw_bar_t(); draw_points(); draw_bar_b(); draw_info(); } doupdate(); redraw = 0; } static void handle_common_keypress(enum kb_e kb) { switch (kb) { case KB_QUIT: keep_running = 0; return; case KB_SIG_INT: AZ(raise(SIGINT)); return; case KB_SIG_TSTP: AZ(raise(SIGTSTP)); return; default: WRONG("unexpected key binding"); } } static void handle_points_keypress(struct vsc *vsc, enum kb_e kb) { switch (kb) { case KB_HELP: show_help = 1; help_line = 0; redraw = 1; return; case KB_UP: if (current == 0) return; current--; break; case KB_DOWN: if (current == n_ptarray - 1) return; current++; break; case KB_PAGEUP: current -= l_points; page_start -= l_points; break; case KB_PAGEDOWN: current += l_points; if (page_start + l_points < n_ptarray - 1) page_start += l_points; break; case KB_TOP: current = 0; break; case KB_BOTTOM: current = n_ptarray - 1; break; case KB_UNSEEN: hide_unseen = 1 - hide_unseen; rebuild = REBUILD_NEXT; break; case KB_RAW: AN(VSC_Arg(vsc, 'r', NULL)); raw_vsc = VSC_IsRaw(vsc); rebuild = REBUILD_NEXT; break; case KB_SCALE: scale = 1 - scale; rebuild = REBUILD_NEXT; break; case KB_ACCEL: interval += 0.1; (void)snprintf(notification_message, NOTIF_MAXLEN, "Refresh interval set to %.1f seconds.", interval); notification_eol = VTIM_mono() + 1.25; break; case KB_DECEL: interval -= 0.1; if (interval < 0.1) interval = 0.1; (void)snprintf(notification_message, NOTIF_MAXLEN, "Refresh interval set to %.1f seconds.", interval); notification_eol = VTIM_mono() + 1.25; break; case KB_VERBOSE: verbosity = VSC_ChangeLevel(verbosity, 1); rebuild = REBUILD_NEXT; break; case KB_QUIET: verbosity = VSC_ChangeLevel(verbosity, -1); rebuild = REBUILD_NEXT; break; case KB_SAMPLE: sample = 1; return; case KB_QUIT: case KB_SIG_INT: case KB_SIG_TSTP: handle_common_keypress(kb); return; default: WRONG("unhandled key binding"); } update_position(); redraw = 1; } static void handle_help_keypress(enum kb_e kb) { int hl = help_line; switch (kb) { case KB_HELP: show_help = 0; redraw = 1; return; case KB_UP: help_line--; break; 
case KB_DOWN: help_line++; break; case KB_PAGEUP: help_line -= l_points; break; case KB_PAGEDOWN: help_line += l_points; break; case KB_TOP: help_line = 0; break; case KB_BOTTOM: help_line = bindings_help_len; break; case KB_UNSEEN: case KB_RAW: case KB_SCALE: case KB_ACCEL: case KB_DECEL: case KB_VERBOSE: case KB_QUIET: case KB_SAMPLE: break; case KB_QUIT: case KB_SIG_INT: case KB_SIG_TSTP: handle_common_keypress(kb); return; default: WRONG("unhandled key binding"); } help_line = vlimit_t(int, help_line, 0, bindings_help_len - l_points); redraw = (help_line != hl); } static void handle_keypress(struct vsc *vsc, int ch) { enum kb_e kb; switch (ch) { #define BINDING_KEY(chr, name, or) \ case chr: #define BINDING(name, desc) \ kb = KB_ ## name; \ break; #define BINDING_SIG #include "varnishstat_bindings.h" default: return; } if (show_help) handle_help_keypress(kb); else handle_points_keypress(vsc, kb); } static void * v_matchproto_(VSC_new_f) newpt(void *priv, const struct VSC_point *const vpt) { struct pt *pt; AZ(priv); ALLOC_OBJ(pt, PT_MAGIC); rebuild |= REBUILD_NEXT; AN(pt); pt->vpt = vpt; pt->last = VSC_Value(vpt); pt->ma_10.nmax = 10; pt->ma_100.nmax = 100; pt->ma_1000.nmax = 1000; VTAILQ_INSERT_TAIL(&ptlist, pt, list); n_ptlist++; AZ(strcmp(vpt->ctype, "uint64_t")); if (!strcmp(vpt->name, "MGT.uptime")) mgt_uptime = vpt->ptr; if (!strcmp(vpt->name, "MAIN.uptime")) main_uptime = vpt->ptr; if (!strcmp(vpt->name, "MAIN.cache_hit")) main_cache_hit = vpt->ptr; if (!strcmp(vpt->name, "MAIN.cache_miss")) main_cache_miss = vpt->ptr; return (pt); } static void v_matchproto_(VSC_destroy_f) delpt(void *priv, const struct VSC_point *const vpt) { struct pt *pt; AZ(priv); CAST_OBJ_NOTNULL(pt, vpt->priv, PT_MAGIC); rebuild |= REBUILD_NEXT; VTAILQ_REMOVE(&ptlist, pt, list); n_ptlist--; FREE_OBJ(pt); if (vpt->ptr == mgt_uptime) mgt_uptime = NULL; if (vpt->ptr == main_uptime) main_uptime = NULL; if (vpt->ptr == main_cache_hit) main_cache_hit = NULL; if (vpt->ptr == main_cache_miss) main_cache_miss = NULL; } void do_curses(struct vsm *vsm, struct vsc *vsc) { long t; int ch; double now; verbosity = VSC_ChangeLevel(NULL, 0); initscr(); raw(); noecho(); nonl(); curs_set(0); make_windows(); doupdate(); VSC_State(vsc, newpt, delpt, NULL); raw_vsc = VSC_IsRaw(vsc); rebuild |= REBUILD_FIRST; (void)VSC_Iter(vsc, vsm, NULL, NULL); build_pt_array(); init_hitrate(); while (keep_running && !VSIG_int && !VSIG_term && !VSIG_hup) { (void)VSC_Iter(vsc, vsm, NULL, NULL); vsm_status = VSM_Status(vsm); if (vsm_status & (VSM_MGT_RESTARTED|VSM_WRK_RESTARTED)) init_hitrate(); if (rebuild) build_pt_array(); now = VTIM_mono(); if (now - t_sample > interval) sample = 1; if (sample) sample_data(); if (redraw) draw_screen(); t = (long)((t_sample + interval - now) * 1000); wtimeout(w_status, t); ch = wgetch(w_status); switch (ch) { case ERR: break; #ifdef KEY_RESIZE /* sigh, Solaris lacks this.. */ case KEY_RESIZE: make_windows(); update_position(); break; #endif default: handle_keypress(vsc, ch); break; } } VSC_Destroy(&vsc, vsm); AN(VTAILQ_EMPTY(&ptlist)); VSM_Destroy(&vsm); AZ(endwin()); } varnish-7.5.0/bin/varnishstat/varnishstat_help_gen.c000066400000000000000000000050641457605730600227100ustar00rootroot00000000000000/*- * Copyright (c) 2020 Varnish Software AS * All rights reserved. * * Author: Dridi Boukelmoune * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include #include #include #include #include #include static const char help[] = "\n\n" #define BINDING_KEY(key, name, next) "<" name ">" next #define BINDING(name, desc) "\n\n" desc "\n\n" #include "varnishstat_bindings.h" ; int main(void) { struct vsb *vsb; const char *p, *n; unsigned u; vsb = VSB_new_auto(); AN(vsb); VSB_cat(vsb, "/*\n" " * NB: This file is machine generated, DO NOT EDIT!\n" " *\n" " * Edit varnishstat_bindings.h and run make instead\n" " */\n" "\n" "#include \n" "#include \"vdef.h\"\n" "#include \"varnishstat.h\"\n" "\n" "const char *const bindings_help[] = {\n"); n = help; u = 0; do { p = n + 1; n = strchr(p, '\n'); if (n != NULL && n > p) { VSB_putc(vsb, '\t'); VSB_quote(vsb, p, (int)(n - p), VSB_QUOTE_CSTR); VSB_cat(vsb, ",\n"); u++; } } while (n != NULL); VSB_printf(vsb, "\tNULL\n" "};\n" "\n" "const int bindings_help_len = %u;\n", u); AZ(VSB_finish(vsb)); AZ(VSB_tofile(vsb, STDOUT_FILENO)); VSB_destroy(&vsb); return (0); } varnish-7.5.0/bin/varnishstat/varnishstat_options.h000066400000000000000000000046651457605730600226350ustar00rootroot00000000000000/*- * Copyright (c) 2016 Varnish Software AS * All rights reserved. * * Author: Federico G. Schwindt * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Option definitions for varnishstat */ #include "vapi/vapi_options.h" #include "vut_options.h" #define STAT_OPT_1 \ VOPT("1", "[-1]", "Print the statistics to stdout", \ "Instead of presenting a continuously updated display," \ " print the statistics to stdout." \ ) #define STAT_OPT_j \ VOPT("j", "[-j]", "Print statistics to stdout as JSON", \ "Print statistics to stdout as JSON." \ ) #define STAT_OPT_l \ VOPT("l", "[-l]", \ "Lists the available fields to use with the -f option", \ "Lists the available fields to use with the -f option." \ ) #define STAT_OPT_r \ VOPT("r", "[-r]", "Toggle raw or adjusted gauges", \ "Toggle raw or adjusted gauges, adjusted is the default." \ ) #define STAT_OPT_x \ VOPT("x", "[-x]", "Print statistics to stdout as XML", \ "Print statistics to stdout as XML." \ ) STAT_OPT_1 VSC_OPT_f VUT_OPT_h VSC_OPT_I STAT_OPT_j STAT_OPT_l VUT_OPT_n STAT_OPT_r VUT_OPT_t VUT_GLOBAL_OPT_V VSC_OPT_X STAT_OPT_x varnish-7.5.0/bin/varnishtest/000077500000000000000000000000001457605730600163345ustar00rootroot00000000000000varnish-7.5.0/bin/varnishtest/Makefile.am000066400000000000000000000035621457605730600203760ustar00rootroot00000000000000# TESTS = @VTC_TESTS@ include $(top_srcdir)/vtc.am DISTCLEANFILES = _.ok AM_CPPFLAGS = \ -I$(top_srcdir)/include \ -I$(top_builddir)/include \ -I$(top_srcdir)/lib/libvgz bin_PROGRAMS = varnishtest varnishtest_SOURCES = \ hpack.h \ cmds.h \ vtc.h \ teken.c \ teken.h \ teken_scs.h \ teken_subr.h \ teken_subr_compat.h \ teken_wcwidth.h \ vtc.c \ vtc_barrier.c \ vtc_client.c \ vtc_gzip.c \ vtc_haproxy.c \ vtc_h2_dectbl.h \ vtc_h2_enctbl.h \ vtc_h2_hpack.c \ vtc_h2_priv.h \ vtc_h2_stattbl.h \ vtc_h2_tbl.c \ vtc_http.c \ vtc_http.h \ vtc_http2.c \ vtc_log.h \ vtc_log.c \ vtc_logexp.c \ vtc_misc.c \ vtc_main.c \ vtc_process.c \ vtc_proxy.c \ vtc_server.c \ vtc_sess.c \ vtc_subr.c \ vtc_syslog.c \ vtc_tunnel.c \ vtc_varnish.c varnishtest_LDADD = \ $(top_builddir)/lib/libvarnishapi/libvarnishapi.la \ $(top_builddir)/lib/libvarnish/libvarnish.la \ $(top_builddir)/lib/libvgz/libvgz.la \ ${PTHREAD_LIBS} ${NET_LIBS} ${LIBM} varnishtest_CFLAGS = \ -DVTEST_WITH_VTC_LOGEXPECT \ -DVTEST_WITH_VTC_VARNISH \ -DTOP_BUILDDIR='"${top_builddir}"' EXTRA_DIST = $(top_srcdir)/bin/varnishtest/tests/*.vtc \ $(top_srcdir)/bin/varnishtest/tests/common.pem \ $(top_srcdir)/bin/varnishtest/tests/README \ $(top_srcdir)/bin/varnishtest/gensequences \ $(top_srcdir)/bin/varnishtest/sequences \ $(top_srcdir)/bin/varnishtest/teken.3 \ huffman_gen.py teken.c: teken_state.h teken_state.h: $(srcdir)/sequences $(srcdir)/gensequences awk -f $(srcdir)/gensequences $(srcdir)/sequences \ > $(builddir)/teken_state.h vtc_h2_hpack.c: vtc_h2_dectbl.h vtc_h2_dectbl.h: huffman_gen.py $(top_srcdir)/include/tbl/vhp_huffman.h $(PYTHON) $(srcdir)/huffman_gen.py \ $(top_srcdir)/include/tbl/vhp_huffman.h > $@_ mv $@_ $@ BUILT_SOURCES = vtc_h2_dectbl.h CLEANFILES = \ $(builddir)/teken_state.h \ $(BUILT_SOURCES) 
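# Usage sketch (comments only, not an upstream rule): the generated headers
# above can also be rebuilt by hand from this directory, assuming awk and
# python3 are available on PATH, e.g.:
#
#   awk -f gensequences sequences > teken_state.h
#   python3 huffman_gen.py ../../include/tbl/vhp_huffman.h > vtc_h2_dectbl.h
#
# This simply mirrors the teken_state.h and vtc_h2_dectbl.h rules above.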
varnish-7.5.0/bin/varnishtest/cmds.h000066400000000000000000000037311457605730600174370ustar00rootroot00000000000000/*- * Copyright (c) 2018 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ /*lint -save -e525 -e539 */ #ifndef CMD_GLOBAL #define CMD_GLOBAL(x) #endif CMD_GLOBAL(barrier) CMD_GLOBAL(delay) CMD_GLOBAL(shell) CMD_GLOBAL(include) #undef CMD_GLOBAL #ifndef CMD_TOP #define CMD_TOP(x) #endif CMD_TOP(client) CMD_TOP(feature) CMD_TOP(filewrite) CMD_TOP(haproxy) #ifdef VTEST_WITH_VTC_LOGEXPECT CMD_TOP(logexpect) #endif CMD_TOP(process) CMD_TOP(server) CMD_TOP(setenv) CMD_TOP(syslog) CMD_TOP(tunnel) #ifdef VTEST_WITH_VTC_VARNISH CMD_TOP(varnish) #endif CMD_TOP(varnishtest) CMD_TOP(vtest) #undef CMD_TOP /*lint -restore */ varnish-7.5.0/bin/varnishtest/flint.lnt000066400000000000000000000032411457605730600201670ustar00rootroot00000000000000// Copyright (c) 2008-2018 Varnish Software AS // SPDX-License-Identifier: BSD-2-Clause // See LICENSE file for full text of license +libh(teken.h) // Tell FlexeLint when these don't return -function(exit, vtc_fatal) -function(__assert(1), vtc_log(2)) -function(__assert(1), vtc_dump(2)) -function(__assert(1), vtc_hexdump(2)) -emacro({779}, ENC) // String constant in comparison operator '!=' -emacro({506}, CHKFRAME) // Constant value Boolean -sem(http_process_cleanup, custodial(1)) -esym(522, teken_subr_*) -esym(850, av) -esym(534, snprintf) // Only for varnishtest, and not really nice -esym(765, http_cmds) // No, cannot be made static -e712 // 14 Info 712 Loss of precision (___) (___ to ___) -e747 // 16 Info 747 Significant prototype coercion (___) ___ to ___ -e445 // Reuse of for loop variable '___' at '___' could cause chaos -e850 // for loop index variable '___' whose type category is '___' is modified in body of the for loop that began at '___' -e443 // for clause irregularity: variable '___' initialized in 1st expression does not match '___' modified in 3rd -emacro({506,774},FEATURE) -emacro({506,774},STRTOU32_CHECK) -e679 // Suspicious Truncation in arithmetic expression combining with pointer -e763 // Redundant declaration for symbol '...' previously declared // -e732 // Loss of sign (arg. no. 
2) (int to unsigned -e713 // Loss of precision (assignment) (unsigned long long to long long) -emacro(835, STRTOU32_CHECK) // A zero has been given as ___ argument to operator '___' -e788 // enum value not used in defaulted switch -efile(451, cmds.h) -efile(451, vmods.h) -efile(451, programs.h) -efile(451, vtc_h2_stattbl.h) varnish-7.5.0/bin/varnishtest/flint.sh000066400000000000000000000004301457605730600200010ustar00rootroot00000000000000#!/bin/sh # # Copyright (c) 2008-2021 Varnish Software AS # SPDX-License-Identifier: BSD-2-Clause # See LICENSE file for full text of license FLOPS=' -DVTEST_WITH_VTC_LOGEXPECT -DVTEST_WITH_VTC_VARNISH -DTOP_BUILDDIR="foo" -I../../lib/libvgz *.c ' ../../tools/flint_skel.sh varnish-7.5.0/bin/varnishtest/gensequences000066400000000000000000000105631457605730600207510ustar00rootroot00000000000000#!/usr/bin/awk -f #- # Copyright (c) 2008-2009 Ed Schouten # All rights reserved. # # SPDX-License-Identifier: BSD-2-Clause # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # 1. Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE # ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS # OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF # SUCH DAMAGE. # # $FreeBSD: head/sys/teken/gensequences 333925 2018-05-20 14:21:20Z dumbbell $ function die(msg) { print msg; exit 1; } function cchar(str) { if (str == "^[") return "\\x1B"; if (str == "SP") return " "; return str; } function csequence(str) { if (str == "SP") return " "; return str; } BEGIN { FS = "\t+" while (getline > 0) { if (NF == 0 || $1 ~ /^#/) continue; if (NF != 3 && NF != 4) die("Invalid line layout: " NF " columns"); split($3, sequence, " +"); nsequences = 0; for (s in sequence) nsequences++; prefix = ""; l_prefix_name[""] = "teken_state_init"; for (i = 1; i < nsequences; i++) { n = prefix csequence(sequence[i]); l_prefix_parent[n] = prefix; l_prefix_suffix[n] = sequence[i]; if (!l_prefix_name[n]) l_prefix_name[n] = "teken_state_" "" ++npr; prefix = n; } suffix = sequence[nsequences]; cmd = prefix suffix; # Fill lists if (l_cmd_name[cmd] != "") die(cmd " already exists"); l_cmd_prefix[cmd] = prefix; l_cmd_suffix[cmd] = suffix; l_cmd_args[cmd] = $4; l_cmd_abbr[cmd] = $1; l_cmd_name[cmd] = $2; l_cmd_c_name[cmd] = "teken_subr_" tolower($2); gsub(" ", "_", l_cmd_c_name[cmd]); if ($4 != "") l_prefix_numbercmds[prefix]++; } print "/* Generated file. Do not edit. 
*/"; print ""; for (p in l_prefix_name) { if (l_prefix_name[p] != "teken_state_init") print "static teken_state_t " l_prefix_name[p] ";"; } for (p in l_prefix_name) { print ""; print "/* '" p "' */"; print "static void"; print l_prefix_name[p] "(teken_t *t, teken_char_t c)"; print "{"; if (l_prefix_numbercmds[p] > 0) { print ""; print "\tif (teken_state_numbers(t, c))"; print "\t\treturn;"; } print ""; print "\tswitch (c) {"; for (c in l_cmd_prefix) { if (l_cmd_prefix[c] != p) continue; print "\tcase '" cchar(l_cmd_suffix[c]) "': /* " l_cmd_abbr[c] ": " l_cmd_name[c] " */"; if (l_cmd_args[c] == "v") { print "\t\t" l_cmd_c_name[c] "(t, t->t_curnum, t->t_nums);"; } else { printf "\t\t%s(t", l_cmd_c_name[c]; split(l_cmd_args[c], args, " "); for (a = 1; args[a] != ""; a++) { if (args[a] == "n") printf ", (t->t_curnum < %d || t->t_nums[%d] == 0) ? 1 : t->t_nums[%d]", a, (a - 1), (a - 1); else if (args[a] == "r") printf ", t->t_curnum < %d ? 0 : t->t_nums[%d]", a, (a - 1); else die("Invalid argument type: " args[a]); } print ");"; } print "\t\tbreak;"; } for (pc in l_prefix_parent) { if (l_prefix_parent[pc] != p) continue; print "\tcase '" cchar(l_prefix_suffix[pc]) "':"; print "\t\tteken_state_switch(t, " l_prefix_name[pc] ");"; print "\t\treturn;"; } print "\tdefault:"; if (l_prefix_name[p] == "teken_state_init") { print "\t\tteken_subr_regular_character(t, c);"; } else { print "\t\tteken_printf(\"Unsupported sequence in " l_prefix_name[p] ": %u\\n\", (unsigned int)c);"; } print "\t\tbreak;"; print "\t}"; if (l_prefix_name[p] != "teken_state_init") { print ""; print "\tt->t_last = 0;"; print "\tteken_state_switch(t, teken_state_init);"; } print "}"; } } varnish-7.5.0/bin/varnishtest/hpack.h000066400000000000000000000050561457605730600176010ustar00rootroot00000000000000/*- * Copyright (c) 2008-2016 Varnish Software AS * All rights reserved. * * Author: Guillaume Quintard * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include enum hpk_result{ hpk_more = 0, hpk_done, hpk_err, }; enum hpk_indexed { hpk_unset = 0, hpk_idx, hpk_inc, hpk_not, hpk_never, }; struct hpk_txt { char *ptr; int len; int huff; }; struct hpk_hdr { struct hpk_txt key; struct hpk_txt value; enum hpk_indexed t; unsigned i; }; struct hpk_ctx; struct hpk_iter; struct hpk_ctx * HPK_NewCtx(uint32_t tblsize); void HPK_FreeCtx(struct hpk_ctx *ctx); struct hpk_iter * HPK_NewIter(struct hpk_ctx *ctx, void *buf, int size); void HPK_FreeIter(struct hpk_iter *iter); enum hpk_result HPK_DecHdr(struct hpk_iter *iter, struct hpk_hdr *header); enum hpk_result HPK_EncHdr(struct hpk_iter *iter, const struct hpk_hdr *header); int gethpk_iterLen(const struct hpk_iter *iter); enum hpk_result HPK_ResizeTbl(struct hpk_ctx *ctx, uint32_t num); const struct hpk_hdr * HPK_GetHdr(const struct hpk_ctx *ctx, uint32_t index); uint32_t HPK_GetTblSize(const struct hpk_ctx *ctx); uint32_t HPK_GetTblMaxSize(const struct hpk_ctx *ctx); uint32_t HPK_GetTblLength(const struct hpk_ctx *ctx); #if 0 /* DEBUG */ void dump_dyn_tbl(const struct hpk_ctx *ctx); #endif varnish-7.5.0/bin/varnishtest/huffman_gen.py000077500000000000000000000043321457605730600211700ustar00rootroot00000000000000#!/usr/bin/env python3 import re import sys #HPH(0x30, 0x00000000, 5) regex = re.compile("^HPH\((.{4}), (.{10}), +(.{1,3})\)") if len(sys.argv) != 2: print("{} takes one and only one argument".format(sys.argv[0])) sys.exit(2) class sym: def __init__(self, bigval, bigvall, chr=0, esc=None): self.vall = bigvall % 8 if bigvall % 8 else 8 self.val = bigval & ((1 << self.vall) - 1) self.pfx = (bigval >> self.vall)# & 0xff self.chr = chr self.esc = esc tbls = {} msl = {} # max sym length f = open(sys.argv[1]) for l in f: grp = 1 match = regex.match(l) if not match: continue char = int(match.group(grp), 16) grp += 1 val = int(match.group(grp), 16) grp += 1 vall = int(match.group(grp)) s = sym(val, vall, char) if s.pfx not in tbls: tbls[s.pfx] = {} if s.val in tbls[s.pfx]: assert tbls[s.pfx][s.val].e tbls[s.pfx][s.val] = s # add the escape entry in the "previous" table if s.pfx: pp = s.pfx >> 8 pv = s.pfx & 0xff if pp not in tbls: tbls[pp] = {} tbls[pp][pv] = sym(pv, 8, 0, "&tbl_{:x}".format(s.pfx)) f.close() # add the EOS case s = sym(63, 6, 0) tbls[0xffffff][63] = s print('''/* NB: This file is machine generated, DO NOT EDIT! 
* edit 'huffman_input' instead */ struct stbl; struct ssym { uint8_t csm; /* bits consumed */ uint8_t chr; /* character */ struct stbl *nxt; /* next table */ }; struct stbl { unsigned msk; struct ssym *syms; }; ''') for pfx in sorted(tbls.keys(), reverse=True): msl = max([x.vall for x in tbls[pfx].values()]) for s in tbls[pfx].values(): s.val = s.val << (msl - s.vall) tbl = sorted(tbls[pfx].values(), key=lambda x: x.val) print("\nstatic struct ssym sym_{:x}_array[] = {{".format(pfx)) for s in tbl: for j in range(2 ** (msl - s.vall)): print("{} {{{}, {:3d}, {}}},".format( "\t " if j else "/* idx {:3d} */".format(s.val + j), s.vall, s.chr % 256, s.esc if s.esc else "NULL")) print('''}}; static struct stbl tbl_{:x} = {{ {}, sym_{:x}_array }};'''.format(pfx, msl, pfx)) varnish-7.5.0/bin/varnishtest/huffman_input000066400000000000000000000425641457605730600211350ustar00rootroot00000000000000# For Copyright information see RFC7541 [BSD3] ( 0) |11111111|11000 1ff8 [13] ( 1) |11111111|11111111|1011000 7fffd8 [23] ( 2) |11111111|11111111|11111110|0010 fffffe2 [28] ( 3) |11111111|11111111|11111110|0011 fffffe3 [28] ( 4) |11111111|11111111|11111110|0100 fffffe4 [28] ( 5) |11111111|11111111|11111110|0101 fffffe5 [28] ( 6) |11111111|11111111|11111110|0110 fffffe6 [28] ( 7) |11111111|11111111|11111110|0111 fffffe7 [28] ( 8) |11111111|11111111|11111110|1000 fffffe8 [28] ( 9) |11111111|11111111|11101010 ffffea [24] ( 10) |11111111|11111111|11111111|111100 3ffffffc [30] ( 11) |11111111|11111111|11111110|1001 fffffe9 [28] ( 12) |11111111|11111111|11111110|1010 fffffea [28] ( 13) |11111111|11111111|11111111|111101 3ffffffd [30] ( 14) |11111111|11111111|11111110|1011 fffffeb [28] ( 15) |11111111|11111111|11111110|1100 fffffec [28] ( 16) |11111111|11111111|11111110|1101 fffffed [28] ( 17) |11111111|11111111|11111110|1110 fffffee [28] ( 18) |11111111|11111111|11111110|1111 fffffef [28] ( 19) |11111111|11111111|11111111|0000 ffffff0 [28] ( 20) |11111111|11111111|11111111|0001 ffffff1 [28] ( 21) |11111111|11111111|11111111|0010 ffffff2 [28] ( 22) |11111111|11111111|11111111|111110 3ffffffe [30] ( 23) |11111111|11111111|11111111|0011 ffffff3 [28] ( 24) |11111111|11111111|11111111|0100 ffffff4 [28] ( 25) |11111111|11111111|11111111|0101 ffffff5 [28] ( 26) |11111111|11111111|11111111|0110 ffffff6 [28] ( 27) |11111111|11111111|11111111|0111 ffffff7 [28] ( 28) |11111111|11111111|11111111|1000 ffffff8 [28] ( 29) |11111111|11111111|11111111|1001 ffffff9 [28] ( 30) |11111111|11111111|11111111|1010 ffffffa [28] ( 31) |11111111|11111111|11111111|1011 ffffffb [28] ' ' ( 32) |010100 14 [ 6] '!' ( 33) |11111110|00 3f8 [10] '"' ( 34) |11111110|01 3f9 [10] '#' ( 35) |11111111|1010 ffa [12] '$' ( 36) |11111111|11001 1ff9 [13] '%' ( 37) |010101 15 [ 6] '&' ( 38) |11111000 f8 [ 8] ''' ( 39) |11111111|010 7fa [11] '(' ( 40) |11111110|10 3fa [10] ')' ( 41) |11111110|11 3fb [10] '*' ( 42) |11111001 f9 [ 8] '+' ( 43) |11111111|011 7fb [11] ',' ( 44) |11111010 fa [ 8] '-' ( 45) |010110 16 [ 6] '.' ( 46) |010111 17 [ 6] '/' ( 47) |011000 18 [ 6] '0' ( 48) |00000 0 [ 5] '1' ( 49) |00001 1 [ 5] '2' ( 50) |00010 2 [ 5] '3' ( 51) |011001 19 [ 6] '4' ( 52) |011010 1a [ 6] '5' ( 53) |011011 1b [ 6] '6' ( 54) |011100 1c [ 6] '7' ( 55) |011101 1d [ 6] '8' ( 56) |011110 1e [ 6] '9' ( 57) |011111 1f [ 6] ':' ( 58) |1011100 5c [ 7] ';' ( 59) |11111011 fb [ 8] '<' ( 60) |11111111|1111100 7ffc [15] '=' ( 61) |100000 20 [ 6] '>' ( 62) |11111111|1011 ffb [12] '?' 
( 63) |11111111|00 3fc [10] '@' ( 64) |11111111|11010 1ffa [13] 'A' ( 65) |100001 21 [ 6] 'B' ( 66) |1011101 5d [ 7] 'C' ( 67) |1011110 5e [ 7] 'D' ( 68) |1011111 5f [ 7] 'E' ( 69) |1100000 60 [ 7] 'F' ( 70) |1100001 61 [ 7] 'G' ( 71) |1100010 62 [ 7] 'H' ( 72) |1100011 63 [ 7] 'I' ( 73) |1100100 64 [ 7] 'J' ( 74) |1100101 65 [ 7] 'K' ( 75) |1100110 66 [ 7] 'L' ( 76) |1100111 67 [ 7] 'M' ( 77) |1101000 68 [ 7] 'N' ( 78) |1101001 69 [ 7] 'O' ( 79) |1101010 6a [ 7] 'P' ( 80) |1101011 6b [ 7] 'Q' ( 81) |1101100 6c [ 7] 'R' ( 82) |1101101 6d [ 7] 'S' ( 83) |1101110 6e [ 7] 'T' ( 84) |1101111 6f [ 7] 'U' ( 85) |1110000 70 [ 7] 'V' ( 86) |1110001 71 [ 7] 'W' ( 87) |1110010 72 [ 7] 'X' ( 88) |11111100 fc [ 8] 'Y' ( 89) |1110011 73 [ 7] 'Z' ( 90) |11111101 fd [ 8] '[' ( 91) |11111111|11011 1ffb [13] '\' ( 92) |11111111|11111110|000 7fff0 [19] ']' ( 93) |11111111|11100 1ffc [13] '^' ( 94) |11111111|111100 3ffc [14] '_' ( 95) |100010 22 [ 6] '`' ( 96) |11111111|1111101 7ffd [15] 'a' ( 97) |00011 3 [ 5] 'b' ( 98) |100011 23 [ 6] 'c' ( 99) |00100 4 [ 5] 'd' (100) |100100 24 [ 6] 'e' (101) |00101 5 [ 5] 'f' (102) |100101 25 [ 6] 'g' (103) |100110 26 [ 6] 'h' (104) |100111 27 [ 6] 'i' (105) |00110 6 [ 5] 'j' (106) |1110100 74 [ 7] 'k' (107) |1110101 75 [ 7] 'l' (108) |101000 28 [ 6] 'm' (109) |101001 29 [ 6] 'n' (110) |101010 2a [ 6] 'o' (111) |00111 7 [ 5] 'p' (112) |101011 2b [ 6] 'q' (113) |1110110 76 [ 7] 'r' (114) |101100 2c [ 6] 's' (115) |01000 8 [ 5] 't' (116) |01001 9 [ 5] 'u' (117) |101101 2d [ 6] 'v' (118) |1110111 77 [ 7] 'w' (119) |1111000 78 [ 7] 'x' (120) |1111001 79 [ 7] 'y' (121) |1111010 7a [ 7] 'z' (122) |1111011 7b [ 7] '{' (123) |11111111|1111110 7ffe [15] '|' (124) |11111111|100 7fc [11] '}' (125) |11111111|111101 3ffd [14] '~' (126) |11111111|11101 1ffd [13] (127) |11111111|11111111|11111111|1100 ffffffc [28] (128) |11111111|11111110|0110 fffe6 [20] (129) |11111111|11111111|010010 3fffd2 [22] (130) |11111111|11111110|0111 fffe7 [20] (131) |11111111|11111110|1000 fffe8 [20] (132) |11111111|11111111|010011 3fffd3 [22] (133) |11111111|11111111|010100 3fffd4 [22] (134) |11111111|11111111|010101 3fffd5 [22] (135) |11111111|11111111|1011001 7fffd9 [23] (136) |11111111|11111111|010110 3fffd6 [22] (137) |11111111|11111111|1011010 7fffda [23] (138) |11111111|11111111|1011011 7fffdb [23] (139) |11111111|11111111|1011100 7fffdc [23] (140) |11111111|11111111|1011101 7fffdd [23] (141) |11111111|11111111|1011110 7fffde [23] (142) |11111111|11111111|11101011 ffffeb [24] (143) |11111111|11111111|1011111 7fffdf [23] (144) |11111111|11111111|11101100 ffffec [24] (145) |11111111|11111111|11101101 ffffed [24] (146) |11111111|11111111|010111 3fffd7 [22] (147) |11111111|11111111|1100000 7fffe0 [23] (148) |11111111|11111111|11101110 ffffee [24] (149) |11111111|11111111|1100001 7fffe1 [23] (150) |11111111|11111111|1100010 7fffe2 [23] (151) |11111111|11111111|1100011 7fffe3 [23] (152) |11111111|11111111|1100100 7fffe4 [23] (153) |11111111|11111110|11100 1fffdc [21] (154) |11111111|11111111|011000 3fffd8 [22] (155) |11111111|11111111|1100101 7fffe5 [23] (156) |11111111|11111111|011001 3fffd9 [22] (157) |11111111|11111111|1100110 7fffe6 [23] (158) |11111111|11111111|1100111 7fffe7 [23] (159) |11111111|11111111|11101111 ffffef [24] (160) |11111111|11111111|011010 3fffda [22] (161) |11111111|11111110|11101 1fffdd [21] (162) |11111111|11111110|1001 fffe9 [20] (163) |11111111|11111111|011011 3fffdb [22] (164) |11111111|11111111|011100 3fffdc [22] (165) |11111111|11111111|1101000 7fffe8 [23] (166) 
|11111111|11111111|1101001 7fffe9 [23] (167) |11111111|11111110|11110 1fffde [21] (168) |11111111|11111111|1101010 7fffea [23] (169) |11111111|11111111|011101 3fffdd [22] (170) |11111111|11111111|011110 3fffde [22] (171) |11111111|11111111|11110000 fffff0 [24] (172) |11111111|11111110|11111 1fffdf [21] (173) |11111111|11111111|011111 3fffdf [22] (174) |11111111|11111111|1101011 7fffeb [23] (175) |11111111|11111111|1101100 7fffec [23] (176) |11111111|11111111|00000 1fffe0 [21] (177) |11111111|11111111|00001 1fffe1 [21] (178) |11111111|11111111|100000 3fffe0 [22] (179) |11111111|11111111|00010 1fffe2 [21] (180) |11111111|11111111|1101101 7fffed [23] (181) |11111111|11111111|100001 3fffe1 [22] (182) |11111111|11111111|1101110 7fffee [23] (183) |11111111|11111111|1101111 7fffef [23] (184) |11111111|11111110|1010 fffea [20] (185) |11111111|11111111|100010 3fffe2 [22] (186) |11111111|11111111|100011 3fffe3 [22] (187) |11111111|11111111|100100 3fffe4 [22] (188) |11111111|11111111|1110000 7ffff0 [23] (189) |11111111|11111111|100101 3fffe5 [22] (190) |11111111|11111111|100110 3fffe6 [22] (191) |11111111|11111111|1110001 7ffff1 [23] (192) |11111111|11111111|11111000|00 3ffffe0 [26] (193) |11111111|11111111|11111000|01 3ffffe1 [26] (194) |11111111|11111110|1011 fffeb [20] (195) |11111111|11111110|001 7fff1 [19] (196) |11111111|11111111|100111 3fffe7 [22] (197) |11111111|11111111|1110010 7ffff2 [23] (198) |11111111|11111111|101000 3fffe8 [22] (199) |11111111|11111111|11110110|0 1ffffec [25] (200) |11111111|11111111|11111000|10 3ffffe2 [26] (201) |11111111|11111111|11111000|11 3ffffe3 [26] (202) |11111111|11111111|11111001|00 3ffffe4 [26] (203) |11111111|11111111|11111011|110 7ffffde [27] (204) |11111111|11111111|11111011|111 7ffffdf [27] (205) |11111111|11111111|11111001|01 3ffffe5 [26] (206) |11111111|11111111|11110001 fffff1 [24] (207) |11111111|11111111|11110110|1 1ffffed [25] (208) |11111111|11111110|010 7fff2 [19] (209) |11111111|11111111|00011 1fffe3 [21] (210) |11111111|11111111|11111001|10 3ffffe6 [26] (211) |11111111|11111111|11111100|000 7ffffe0 [27] (212) |11111111|11111111|11111100|001 7ffffe1 [27] (213) |11111111|11111111|11111001|11 3ffffe7 [26] (214) |11111111|11111111|11111100|010 7ffffe2 [27] (215) |11111111|11111111|11110010 fffff2 [24] (216) |11111111|11111111|00100 1fffe4 [21] (217) |11111111|11111111|00101 1fffe5 [21] (218) |11111111|11111111|11111010|00 3ffffe8 [26] (219) |11111111|11111111|11111010|01 3ffffe9 [26] (220) |11111111|11111111|11111111|1101 ffffffd [28] (221) |11111111|11111111|11111100|011 7ffffe3 [27] (222) |11111111|11111111|11111100|100 7ffffe4 [27] (223) |11111111|11111111|11111100|101 7ffffe5 [27] (224) |11111111|11111110|1100 fffec [20] (225) |11111111|11111111|11110011 fffff3 [24] (226) |11111111|11111110|1101 fffed [20] (227) |11111111|11111111|00110 1fffe6 [21] (228) |11111111|11111111|101001 3fffe9 [22] (229) |11111111|11111111|00111 1fffe7 [21] (230) |11111111|11111111|01000 1fffe8 [21] (231) |11111111|11111111|1110011 7ffff3 [23] (232) |11111111|11111111|101010 3fffea [22] (233) |11111111|11111111|101011 3fffeb [22] (234) |11111111|11111111|11110111|0 1ffffee [25] (235) |11111111|11111111|11110111|1 1ffffef [25] (236) |11111111|11111111|11110100 fffff4 [24] (237) |11111111|11111111|11110101 fffff5 [24] (238) |11111111|11111111|11111010|10 3ffffea [26] (239) |11111111|11111111|1110100 7ffff4 [23] (240) |11111111|11111111|11111010|11 3ffffeb [26] (241) |11111111|11111111|11111100|110 7ffffe6 [27] (242) |11111111|11111111|11111011|00 3ffffec [26] (243) 
|11111111|11111111|11111011|01 3ffffed [26] (244) |11111111|11111111|11111100|111 7ffffe7 [27] (245) |11111111|11111111|11111101|000 7ffffe8 [27] (246) |11111111|11111111|11111101|001 7ffffe9 [27] (247) |11111111|11111111|11111101|010 7ffffea [27] (248) |11111111|11111111|11111101|011 7ffffeb [27] (249) |11111111|11111111|11111111|1110 ffffffe [28] (250) |11111111|11111111|11111101|100 7ffffec [27] (251) |11111111|11111111|11111101|101 7ffffed [27] (252) |11111111|11111111|11111101|110 7ffffee [27] (253) |11111111|11111111|11111101|111 7ffffef [27] (254) |11111111|11111111|11111110|000 7fffff0 [27] (255) |11111111|11111111|11111011|10 3ffffee [26] EOS (256) |11111111|11111111|11111111|111111 3fffffff [30] varnish-7.5.0/bin/varnishtest/sequences000066400000000000000000000112001457605730600202440ustar00rootroot00000000000000#- # Copyright (c) 2008-2009 Ed Schouten # All rights reserved. # # SPDX-License-Identifier: BSD-2-Clause # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # 1. Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE # ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS # OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF # SUCH DAMAGE. # # $FreeBSD: head/sys/teken/sequences 334316 2018-05-29 08:41:44Z dumbbell $ # File format is as follows: # Abbr Abbreviation of sequence name # Name Sequence name (will be converted to C function name) # Sequence Bytes that form the sequence # Args Standard value of arguments passed to this sequence # - `n' non-zero number (0 gets converted to 1) # - `r' regular numeric argument # - `v' means a variable number of arguments # Abbr Name Sequence Args CBT Cursor Backward Tabulation ^[ [ Z n CHT Cursor Forward Tabulation ^[ [ I n CNL Cursor Next Line ^[ [ E n CPL Cursor Previous Line ^[ [ F n CPR Cursor Position Report ^[ [ n r CUB Cursor Backward ^[ [ D n CUD Cursor Down ^[ [ B n CUD Cursor Down ^[ [ e n CUF Cursor Forward ^[ [ C n CUF Cursor Forward ^[ [ a n CUP Cursor Position ^[ [ H n n CUP Cursor Position ^[ [ f n n CUU Cursor Up ^[ [ A n DA1 Primary Device Attributes ^[ [ c r DA2 Secondary Device Attributes ^[ [ > c r DC Delete character ^[ [ P n DCS Device Control String ^[ P DECALN Alignment test ^[ # 8 DECDHL Double Height Double Width Line Top ^[ # 3 DECDHL Double Height Double Width Line Bottom ^[ # 4 DECDWL Single Height Double Width Line ^[ # 6 DECKPAM Keypad application mode ^[ = DECKPNM Keypad numeric mode ^[ > DECRC Restore cursor ^[ 8 DECRC Restore cursor ^[ [ u DECRM Reset DEC mode ^[ [ ? 
l r DECSC Save cursor ^[ 7 DECSC Save cursor ^[ [ s DECSCUSR Set Cursor Style ^[ [ SP q r DECSM Set DEC mode ^[ [ ? h r DECSTBM Set top and bottom margins ^[ [ r r r DECSWL Single Height Single Width Line ^[ # 5 DL Delete line ^[ [ M n DSR Device Status Report ^[ [ ? n r ECH Erase character ^[ [ X n ED Erase display ^[ [ J r EL Erase line ^[ [ K r G0SCS0 G0 SCS Special Graphics ^[ ( 0 G0SCS1 G0 SCS US ASCII ^[ ( 1 G0SCS2 G0 SCS Special Graphics ^[ ( 2 G0SCSA G0 SCS UK National ^[ ( A G0SCSB G0 SCS US ASCII ^[ ( B G1SCS0 G1 SCS Special Graphics ^[ ) 0 G1SCS1 G1 SCS US ASCII ^[ ) 1 G1SCS2 G1 SCS Special Graphics ^[ ) 2 G1SCSA G1 SCS UK National ^[ ) A G1SCSB G1 SCS US ASCII ^[ ) B HPA Horizontal Position Absolute ^[ [ G n HPA Horizontal Position Absolute ^[ [ ` n HTS Horizontal Tab Set ^[ H ICH Insert character ^[ [ @ n IL Insert line ^[ [ L n IND Index ^[ D NEL Next line ^[ E OSC Operating System Command ^[ ] RI Reverse index ^[ M RIS Reset to Initial State ^[ c RM Reset Mode ^[ [ l r SD Pan Up ^[ [ T n SGR Set Graphic Rendition ^[ [ m v SM Set Mode ^[ [ h r ST String Terminator ^[ \\ SU Pan Down ^[ [ S n TBC Tab Clear ^[ [ g r VPA Vertical Position Absolute ^[ [ d n # Cons25 compatibility sequences C25BLPD Cons25 set bell pitch duration ^[ [ = B r r C25BORD Cons25 set border ^[ [ = A r C25DBG Cons25 set default background ^[ [ = G r C25DFG Cons25 set default foreground ^[ [ = F r C25GCS Cons25 set global cursor shape ^[ [ = C v C25LCT Cons25 set local cursor type ^[ [ = S r C25MODE Cons25 set terminal mode ^[ [ = T r C25SGR Cons25 set graphic rendition ^[ [ x r r C25VTSW Cons25 switch virtual terminal ^[ [ z r # VT52 compatibility #DECID VT52 DECID ^[ Z # ECMA-48 REP Repeat last graphic char ^[ [ b n varnish-7.5.0/bin/varnishtest/teken.3000066400000000000000000000157621457605730600175410ustar00rootroot00000000000000.\" Copyright (c) 2011 Ed Schouten .\" All rights reserved. .\" .\" SPDX-License-Identifier: BSD-2-Clause .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. 
.\" .\" $FreeBSD: head/sys/teken/libteken/teken.3 315418 2017-03-16 16:40:54Z bde $ .\" .Dd Mar 13, 2017 .Dt TEKEN 3 .Os .Sh NAME .Nm teken .Nd xterm-like terminal emulation interface .Sh LIBRARY .Lb libteken .Sh SYNOPSIS .In teken.h .Ft void .Fn teken_init "teken_t *t" "const teken_funcs_t *funcs" "void *thunk" .Ft void .Fn teken_input "teken_t *t" "const void *buf" "size_t nbytes" .Ft const teken_pos_t * .Fn teken_get_winsize "teken_t *t" .Ft void .Fn teken_set_winsize "teken_t *t" "const teken_pos_t *size" .Ft const teken_pos_t * .Fn teken_get_cursor "teken_t *t" .Ft void .Fn teken_set_cursor "teken_t *t" "const teken_pos_t *pos" .Ft const teken_attr_t * .Fn teken_get_curattr "teken_t *t" .Ft void .Fn teken_set_curattr "teken_t *t" "const teken_attr_t *attr" .Ft const teken_attr_t * .Fn teken_get_defattr "teken_t *t" .Ft void .Fn teken_set_defattr "teken_t *t" "const teken_attr_t *attr" .Ft const char * .Fn teken_get_sequence "teken_t *t" "unsigned int id" .Ft teken_color_t .Fn teken_256to16 "teken_color_t color" .Ft teken_color_t .Fn teken_256to8 "teken_color_t color" .Ft void .Fn teken_get_defattr_cons25 "teken_t *t" "int *fg" "int *bg" .Ft void .Fn teken_set_8bit "teken_t *t" .Ft void .Fn teken_set_cons25 "teken_t *t" .Sh DESCRIPTION The .Nm library implements the input parser of a 256-color xterm-like terminal. It converts a stream of UTF-8 encoded characters into a series of primitive drawing instructions that can be used by a console driver or terminal emulator to render a terminal application. .Pp The .Fn teken_init function is used to initialize terminal state object .Fa t , having type .Vt teken_t . The supplied .Vt teken_funcs_t structure .Fa funcs contains a set of callback functions, which are called when supplying data to .Fn teken_input . The .Fa thunk argument stores an arbitrary pointer, which is passed to each invocation of the callback functions. .Pp The .Vt teken_funcs_t structure stores the following callbacks: .Bd -literal -offset indent typedef struct { tf_bell_t *tf_bell; /* Audible/visible bell. */ tf_cursor_t *tf_cursor; /* Move cursor to x/y. */ tf_putchar_t *tf_putchar; /* Put Unicode character at x/y. */ tf_fill_t *tf_fill; /* Fill rectangle with character. */ tf_copy_t *tf_copy; /* Copy rectangle to new location. */ tf_param_t *tf_param; /* Miscellaneous options. */ tf_respond_t *tf_respond; /* Send response string to user. */ } teken_funcs_t; .Ed .Pp All callbacks must be provided, though unimplemented callbacks may some times be sufficient. The actual types of these callbacks can be found in .In teken.h . .Pp By default, .Fn teken_init initializes the .Vt teken_t structure to emulate a terminal having 24 rows and 80 columns. The .Fn teken_get_winsize and .Fn teken_set_winsize functions can be used to obtain and modify the dimensions of the terminal. .Pp Even though the cursor position is normally controlled by input of data through .Fn teken_input and returned by the .Fn tf_cursor callback, it can be obtained and modified manually using the .Fn teken_get_cursor and .Fn teken_set_cursor functions. The same holds for .Fn teken_get_curattr and .Fn teken_set_curattr , which can be used to change the currently selected font attributes and foreground and background color. .Pp By default, .Nm emulates a white-on-black terminal, which means the default foreground color is white, while the background color is black. These defaults can be modified using .Fn teken_get_defattr and .Fn teken_set_defattr . 
.Pp The .Fn teken_get_sequence function is a utility function that can be used to obtain escape sequences of special keyboard keys, generated by user input. The .Fa id parameter must be one of the .Dv TKEY_* parameters listed in .In teken.h . .Sh LEGACY FEATURES This library also provides a set of functions that shouldn't be used in any modern applications. .Pp The .Fn teken_256to16 function converts an xterm-256 256-color code to an xterm 16-color code whose color with default palettes is as similar as possible (not very similar). The lower 3 bits of the result are the ANSI color and the next lowest bit is brightness. Other layers (hardware and software) that only support 16 colors can use this to avoid knowing the details of 256-color codes. .Pp The .Fn teken_256to8 function is similar to .Fn teken_256to16 except it converts to an ANSI 8-color code. This is more accurate than discarding the brightness bit in the result of .Fn teken_256to16 . .Pp The .Fn teken_get_defattr_cons25 function obtains the default terminal attributes as a pair of foreground and background colors, using ANSI color numbering. .Pp The .Fn teken_set_8bit function disables UTF-8 processing and switches to 8-bit character mode, which can be used to support character sets like CP437 and ISO-8859-1. .Pp The .Fn teken_set_cons25 function switches terminal emulation to .Dv cons25 , which is used by versions of .Fx prior to 9.0. .Sh SEE ALSO .Xr ncurses 3 , .Xr termcap 3 , .Xr syscons 4 .Sh HISTORY The .Nm library appeared in .Fx 8.0 , though it was only available and used inside the kernel. In .Fx 9.0 , the .Nm library appeared in userspace. .Sh AUTHORS .An Ed Schouten Aq ed@FreeBSD.org .Sh SECURITY CONSIDERATIONS The .Fn tf_respond callback is used to respond to device status requests commands generated by an application. In the past, there have been various security issues, where a malicious application sends a device status request before termination, causing the generated response to be interpreted by applications such as .Xr sh 1 . .Pp .Nm only implements a small subset of responses which are unlikely to cause any harm. Still, it is advised to leave .Fn tf_respond unimplemented. varnish-7.5.0/bin/varnishtest/teken.c000066400000000000000000000421311457605730600176070ustar00rootroot00000000000000/*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2008-2009 Ed Schouten * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD: head/sys/teken/teken.c 333683 2018-05-16 18:12:49Z cem $ */ #include "config.h" #include #include #include #include #include #define teken_assert(x) assert(x) #include "vdef.h" #include "vas.h" /* debug messages */ #define teken_printf(...) /* Private flags for t_stateflags. */ #define TS_FIRSTDIGIT 0x0001 /* First numeric digit in escape sequence. */ #define TS_INSERT 0x0002 /* Insert mode. */ #define TS_AUTOWRAP 0x0004 /* Autowrap. */ #define TS_ORIGIN 0x0008 /* Origin mode. */ #define TS_WRAPPED 0x0010 /* Next character should be printed on col 0. */ #define TS_8BIT 0x0020 /* UTF-8 disabled. */ #define TS_CONS25 0x0040 /* cons25 emulation. */ #define TS_INSTRING 0x0080 /* Inside string. */ #define TS_CURSORKEYS 0x0100 /* Cursor keys mode. */ /* Character that blanks a cell. */ #define BLANK ' ' #include "teken.h" #include "teken_wcwidth.h" #include "teken_scs.h" static teken_state_t teken_state_init; /* * Wrappers for hooks. */ static inline void teken_funcs_bell(const teken_t *t) { if (t->t_funcs->tf_bell != NULL) t->t_funcs->tf_bell(t->t_softc); } static inline void teken_funcs_cursor(const teken_t *t) { teken_assert(t->t_cursor.tp_row < t->t_winsize.tp_row); teken_assert(t->t_cursor.tp_col < t->t_winsize.tp_col); teken_assert(t->t_funcs->tf_cursor != NULL); t->t_funcs->tf_cursor(t->t_softc, &t->t_cursor); } static inline void teken_funcs_putchar(const teken_t *t, const teken_pos_t *p, teken_char_t c, const teken_attr_t *a) { teken_assert(p->tp_row < t->t_winsize.tp_row); teken_assert(p->tp_col < t->t_winsize.tp_col); teken_assert(t->t_funcs->tf_putchar != NULL); t->t_funcs->tf_putchar(t->t_softc, p, c, a); } static inline void teken_funcs_fill(const teken_t *t, const teken_rect_t *r, const teken_char_t c, const teken_attr_t *a) { teken_assert(r->tr_end.tp_row > r->tr_begin.tp_row); teken_assert(r->tr_end.tp_row <= t->t_winsize.tp_row); teken_assert(r->tr_end.tp_col > r->tr_begin.tp_col); teken_assert(r->tr_end.tp_col <= t->t_winsize.tp_col); teken_assert(t->t_funcs->tf_fill != NULL); t->t_funcs->tf_fill(t->t_softc, r, c, a); } static inline void teken_funcs_copy(const teken_t *t, const teken_rect_t *r, const teken_pos_t *p) { teken_assert(r->tr_end.tp_row > r->tr_begin.tp_row); teken_assert(r->tr_end.tp_row <= t->t_winsize.tp_row); teken_assert(r->tr_end.tp_col > r->tr_begin.tp_col); teken_assert(r->tr_end.tp_col <= t->t_winsize.tp_col); teken_assert(p->tp_row + (r->tr_end.tp_row - r->tr_begin.tp_row) <= t->t_winsize.tp_row); teken_assert(p->tp_col + (r->tr_end.tp_col - r->tr_begin.tp_col) <= t->t_winsize.tp_col); teken_assert(t->t_funcs->tf_copy != NULL); t->t_funcs->tf_copy(t->t_softc, r, p); } static inline void teken_funcs_pre_input(const teken_t *t) { if (t->t_funcs->tf_pre_input != NULL) t->t_funcs->tf_pre_input(t->t_softc); } static inline void teken_funcs_post_input(const teken_t *t) { if (t->t_funcs->tf_post_input != NULL) t->t_funcs->tf_post_input(t->t_softc); } static inline void teken_funcs_param(const teken_t *t, int cmd, unsigned int value) { 
teken_assert(t->t_funcs->tf_param != NULL); t->t_funcs->tf_param(t->t_softc, cmd, value); } static inline void teken_funcs_respond(const teken_t *t, const void *buf, size_t len) { teken_assert(t->t_funcs->tf_respond != NULL); t->t_funcs->tf_respond(t->t_softc, buf, len); } #include "teken_subr.h" #include "teken_subr_compat.h" /* * Programming interface. */ void teken_init(teken_t *t, const teken_funcs_t *tf, void *softc) { teken_pos_t tp = { .tp_row = 24, .tp_col = 80 }; t->t_funcs = tf; t->t_softc = softc; t->t_nextstate = teken_state_init; t->t_stateflags = 0; t->t_utf8_left = 0; t->t_defattr.ta_format = 0; t->t_defattr.ta_fgcolor = TC_WHITE; t->t_defattr.ta_bgcolor = TC_BLACK; teken_subr_do_reset(t); teken_set_winsize(t, &tp); } static void teken_input_char(teken_t *t, teken_char_t c) { /* * There is no support for DCS and OSC. Just discard strings * until we receive characters that may indicate string * termination. */ if (t->t_stateflags & TS_INSTRING) { switch (c) { case '\x1B': t->t_stateflags &= ~TS_INSTRING; break; case '\a': t->t_stateflags &= ~TS_INSTRING; return; default: return; } } switch (c) { case '\0': break; case '\a': teken_subr_bell(t); break; case '\b': teken_subr_backspace(t); break; case '\n': case '\x0B': teken_subr_newline(t); break; case '\x0C': teken_subr_newpage(t); break; case '\x0E': if (t->t_stateflags & TS_CONS25) t->t_nextstate(t, c); else t->t_curscs = 1; break; case '\x0F': if (t->t_stateflags & TS_CONS25) t->t_nextstate(t, c); else t->t_curscs = 0; break; case '\r': teken_subr_carriage_return(t); break; case '\t': teken_subr_horizontal_tab(t); break; default: t->t_nextstate(t, c); break; } /* Post-processing assertions. */ teken_assert(t->t_cursor.tp_row >= t->t_originreg.ts_begin); teken_assert(t->t_cursor.tp_row < t->t_originreg.ts_end); teken_assert(t->t_cursor.tp_row < t->t_winsize.tp_row); teken_assert(t->t_cursor.tp_col < t->t_winsize.tp_col); teken_assert(t->t_saved_cursor.tp_row < t->t_winsize.tp_row); teken_assert(t->t_saved_cursor.tp_col < t->t_winsize.tp_col); teken_assert(t->t_scrollreg.ts_end <= t->t_winsize.tp_row); teken_assert(t->t_scrollreg.ts_begin < t->t_scrollreg.ts_end); /* Origin region has to be window size or the same as scrollreg. */ teken_assert((t->t_originreg.ts_begin == t->t_scrollreg.ts_begin && t->t_originreg.ts_end == t->t_scrollreg.ts_end) || (t->t_originreg.ts_begin == 0 && t->t_originreg.ts_end == t->t_winsize.tp_row)); } static void teken_input_byte(teken_t *t, unsigned char c) { /* * UTF-8 handling. */ if ((c & 0x80) == 0x00 || t->t_stateflags & TS_8BIT) { /* One-byte sequence. */ t->t_utf8_left = 0; teken_input_char(t, c); } else if ((c & 0xe0) == 0xc0) { /* Two-byte sequence. */ t->t_utf8_left = 1; t->t_utf8_partial = c & 0x1f; } else if ((c & 0xf0) == 0xe0) { /* Three-byte sequence. */ t->t_utf8_left = 2; t->t_utf8_partial = c & 0x0f; } else if ((c & 0xf8) == 0xf0) { /* Four-byte sequence. 
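The leading byte supplies 3 payload bits and announces three continuation bytes.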
*/ t->t_utf8_left = 3; t->t_utf8_partial = c & 0x07; } else if ((c & 0xc0) == 0x80) { if (t->t_utf8_left == 0) return; t->t_utf8_left--; t->t_utf8_partial = (t->t_utf8_partial << 6) | (c & 0x3f); if (t->t_utf8_left == 0) { teken_printf("Got UTF-8 char %x\n", t->t_utf8_partial); teken_input_char(t, t->t_utf8_partial); } } } void teken_input(teken_t *t, const void *buf, size_t len) { const char *c = buf; teken_funcs_pre_input(t); while (len-- > 0) teken_input_byte(t, *c++); teken_funcs_post_input(t); } const teken_pos_t * teken_get_cursor(const teken_t *t) { return (&t->t_cursor); } void teken_set_cursor(teken_t *t, const teken_pos_t *p) { /* XXX: bounds checking with originreg! */ teken_assert(p->tp_row < t->t_winsize.tp_row); teken_assert(p->tp_col < t->t_winsize.tp_col); t->t_cursor = *p; } const teken_attr_t * teken_get_curattr(const teken_t *t) { return (&t->t_curattr); } void teken_set_curattr(teken_t *t, const teken_attr_t *a) { t->t_curattr = *a; } const teken_attr_t * teken_get_defattr(const teken_t *t) { return (&t->t_defattr); } void teken_set_defattr(teken_t *t, const teken_attr_t *a) { t->t_curattr = t->t_saved_curattr = t->t_defattr = *a; } const teken_pos_t * teken_get_winsize(const teken_t *t) { return (&t->t_winsize); } static void teken_trim_cursor_pos(teken_t *t, const teken_pos_t *new) { const teken_pos_t *cur; cur = &t->t_winsize; if (cur->tp_row < new->tp_row || cur->tp_col < new->tp_col) return; if (t->t_cursor.tp_row >= new->tp_row) t->t_cursor.tp_row = new->tp_row - 1; if (t->t_cursor.tp_col >= new->tp_col) t->t_cursor.tp_col = new->tp_col - 1; } void teken_set_winsize(teken_t *t, const teken_pos_t *p) { teken_trim_cursor_pos(t, p); t->t_winsize = *p; teken_subr_do_reset(t); } void teken_set_winsize_noreset(teken_t *t, const teken_pos_t *p) { teken_trim_cursor_pos(t, p); t->t_winsize = *p; teken_subr_do_resize(t); } void teken_set_8bit(teken_t *t) { t->t_stateflags |= TS_8BIT; } void teken_set_cons25(teken_t *t) { t->t_stateflags |= TS_CONS25; } /* * State machine. */ static void teken_state_switch(teken_t *t, teken_state_t *s) { t->t_nextstate = s; t->t_curnum = 0; t->t_stateflags |= TS_FIRSTDIGIT; } static int teken_state_numbers(teken_t *t, teken_char_t c) { teken_assert(t->t_curnum < T_NUMSIZE); if (c >= '0' && c <= '9') { if (t->t_stateflags & TS_FIRSTDIGIT) { /* First digit. */ t->t_stateflags &= ~TS_FIRSTDIGIT; t->t_nums[t->t_curnum] = c - '0'; } else if (t->t_nums[t->t_curnum] < UINT_MAX / 100) { /* * There is no need to continue parsing input * once the value exceeds the size of the * terminal. It would only allow for integer * overflows when performing arithmetic on the * cursor position. * * Ignore any further digits if the value is * already UINT_MAX / 100. */ t->t_nums[t->t_curnum] = t->t_nums[t->t_curnum] * 10 + c - '0'; } return (1); } else if (c == ';') { if (t->t_stateflags & TS_FIRSTDIGIT) t->t_nums[t->t_curnum] = 0; /* Only allow a limited set of arguments. */ if (++t->t_curnum == T_NUMSIZE) { teken_state_switch(t, teken_state_init); return (1); } t->t_stateflags |= TS_FIRSTDIGIT; return (1); } else { if (t->t_stateflags & TS_FIRSTDIGIT && t->t_curnum > 0) { /* Finish off the last empty argument. */ t->t_nums[t->t_curnum] = 0; t->t_curnum++; } else if ((t->t_stateflags & TS_FIRSTDIGIT) == 0) { /* Also count the last argument. 
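A number terminated by the final command character rather than by ';' still has to be counted before the command is dispatched.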
*/ t->t_curnum++; } } return (0); } #define k TC_BLACK #define b TC_BLUE #define y TC_BROWN #define c TC_CYAN #define g TC_GREEN #define m TC_MAGENTA #define r TC_RED #define w TC_WHITE #define K (TC_BLACK | TC_LIGHT) #define B (TC_BLUE | TC_LIGHT) #define Y (TC_BROWN | TC_LIGHT) #define C (TC_CYAN | TC_LIGHT) #define G (TC_GREEN | TC_LIGHT) #define M (TC_MAGENTA | TC_LIGHT) #define R (TC_RED | TC_LIGHT) #define W (TC_WHITE | TC_LIGHT) /** * The xterm-256 color map has steps of 0x28 (in the range 0-0xff), except * for the first step which is 0x5f. Scale to the range 0-6 by dividing * by 0x28 and rounding down. The range of 0-5 cannot represent the * larger first step. * * This table is generated by the follow rules: * - if all components are equal, the result is black for (0, 0, 0) and * (2, 2, 2), else white; otherwise: * - subtract the smallest component from all components * - if this gives only one nonzero component, then that is the color * - else if one component is 2 or more larger than the other nonzero one, * then that component gives the color * - else there are 2 nonzero components. The color is that of a small * equal mixture of these components (cyan, yellow or magenta). E.g., * (0, 5, 6) (Turquoise2) is a much purer cyan than (0, 2, 3) * (DeepSkyBlue4), but we map both to cyan since we can't represent * delicate shades of either blue or cyan and blue would be worse. * Here it is important that components of 1 never occur. Blue would * be twice as large as green in (0, 1, 2). */ static const teken_color_t teken_256to8tab[] = { /* xterm normal colors: */ k, r, g, y, b, m, c, w, /* xterm bright colors: */ k, r, g, y, b, m, c, w, /* Red0 submap. */ k, b, b, b, b, b, g, c, c, b, b, b, g, c, c, c, b, b, g, g, c, c, c, b, g, g, g, c, c, c, g, g, g, g, c, c, /* Red2 submap. */ r, m, m, b, b, b, y, k, b, b, b, b, y, g, c, c, b, b, g, g, c, c, c, b, g, g, g, c, c, c, g, g, g, g, c, c, /* Red3 submap. */ r, m, m, m, b, b, y, r, m, m, b, b, y, y, w, b, b, b, y, y, g, c, c, b, g, g, g, c, c, c, g, g, g, g, c, c, /* Red4 submap. */ r, r, m, m, m, b, r, r, m, m, m, b, y, y, r, m, m, b, y, y, y, w, b, b, y, y, y, g, c, c, g, g, g, g, c, c, /* Red5 submap. */ r, r, r, m, m, m, r, r, r, m, m, m, r, r, r, m, m, m, y, y, y, r, m, m, y, y, y, y, w, b, y, y, y, y, g, c, /* Red6 submap. */ r, r, r, r, m, m, r, r, r, r, m, m, r, r, r, r, m, m, r, r, r, r, m, m, y, y, y, y, r, m, y, y, y, y, y, w, /* Grey submap. */ k, k, k, k, k, k, k, k, k, k, k, k, w, w, w, w, w, w, w, w, w, w, w, w, }; /* * This table is generated from the previous one by setting TC_LIGHT for * entries whose luminosity in the xterm256 color map is 60% or larger. * Thus the previous table is currently not really needed. It will be * used for different fine tuning of the tables. */ static const teken_color_t teken_256to16tab[] = { /* xterm normal colors: */ k, r, g, y, b, m, c, w, /* xterm bright colors: */ K, R, G, Y, B, M, C, W, /* Red0 submap. */ k, b, b, b, b, b, g, c, c, b, b, b, g, c, c, c, b, b, g, g, c, c, c, b, g, g, g, c, c, c, g, g, g, g, c, c, /* Red2 submap. */ r, m, m, b, b, b, y, K, b, b, B, B, y, g, c, c, B, B, g, g, c, c, C, B, g, G, G, C, C, C, g, G, G, G, C, C, /* Red3 submap. */ r, m, m, m, b, b, y, r, m, m, B, B, y, y, w, B, B, B, y, y, G, C, C, B, g, G, G, C, C, C, g, G, G, G, C, C, /* Red4 submap. */ r, r, m, m, m, b, r, r, m, m, M, B, y, y, R, M, M, B, y, y, Y, W, B, B, y, Y, Y, G, C, C, g, G, G, G, C, C, /* Red5 submap. 
*/ r, r, r, m, m, m, r, R, R, M, M, M, r, R, R, M, M, M, y, Y, Y, R, M, M, y, Y, Y, Y, W, B, y, Y, Y, Y, G, C, /* Red6 submap. */ r, r, r, r, m, m, r, R, R, R, M, M, r, R, R, R, M, M, r, R, R, R, M, M, y, Y, Y, Y, R, M, y, Y, Y, Y, Y, W, /* Grey submap. */ k, k, k, k, k, k, K, K, K, K, K, K, w, w, w, w, w, w, W, W, W, W, W, W, }; #undef k #undef b #undef y #undef c #undef g #undef m #undef r #undef w #undef K #undef B #undef Y #undef C #undef G #undef M #undef R #undef W teken_color_t teken_256to8(teken_color_t c) { return (teken_256to8tab[c % 256]); } teken_color_t teken_256to16(teken_color_t c) { return (teken_256to16tab[c % 256]); } static const char * const special_strings_cons25[] = { [TKEY_UP] = "\x1B[A", [TKEY_DOWN] = "\x1B[B", [TKEY_LEFT] = "\x1B[D", [TKEY_RIGHT] = "\x1B[C", [TKEY_HOME] = "\x1B[H", [TKEY_END] = "\x1B[F", [TKEY_INSERT] = "\x1B[L", [TKEY_DELETE] = "\x7F", [TKEY_PAGE_UP] = "\x1B[I", [TKEY_PAGE_DOWN] = "\x1B[G", [TKEY_F1] = "\x1B[M", [TKEY_F2] = "\x1B[N", [TKEY_F3] = "\x1B[O", [TKEY_F4] = "\x1B[P", [TKEY_F5] = "\x1B[Q", [TKEY_F6] = "\x1B[R", [TKEY_F7] = "\x1B[S", [TKEY_F8] = "\x1B[T", [TKEY_F9] = "\x1B[U", [TKEY_F10] = "\x1B[V", [TKEY_F11] = "\x1B[W", [TKEY_F12] = "\x1B[X", }; static const char * const special_strings_ckeys[] = { [TKEY_UP] = "\x1BOA", [TKEY_DOWN] = "\x1BOB", [TKEY_LEFT] = "\x1BOD", [TKEY_RIGHT] = "\x1BOC", [TKEY_HOME] = "\x1BOH", [TKEY_END] = "\x1BOF", }; static const char * const special_strings_normal[] = { [TKEY_UP] = "\x1B[A", [TKEY_DOWN] = "\x1B[B", [TKEY_LEFT] = "\x1B[D", [TKEY_RIGHT] = "\x1B[C", [TKEY_HOME] = "\x1B[H", [TKEY_END] = "\x1B[F", [TKEY_INSERT] = "\x1B[2~", [TKEY_DELETE] = "\x1B[3~", [TKEY_PAGE_UP] = "\x1B[5~", [TKEY_PAGE_DOWN] = "\x1B[6~", [TKEY_F1] = "\x1BOP", [TKEY_F2] = "\x1BOQ", [TKEY_F3] = "\x1BOR", [TKEY_F4] = "\x1BOS", [TKEY_F5] = "\x1B[15~", [TKEY_F6] = "\x1B[17~", [TKEY_F7] = "\x1B[18~", [TKEY_F8] = "\x1B[19~", [TKEY_F9] = "\x1B[20~", [TKEY_F10] = "\x1B[21~", [TKEY_F11] = "\x1B[23~", [TKEY_F12] = "\x1B[24~", }; const char * teken_get_sequence(const teken_t *t, unsigned int k) { /* Cons25 mode. */ if (t->t_stateflags & TS_CONS25 && k < sizeof special_strings_cons25 / sizeof(char *)) return (special_strings_cons25[k]); /* Cursor keys mode. */ if (t->t_stateflags & TS_CURSORKEYS && k < sizeof special_strings_ckeys / sizeof(char *)) return (special_strings_ckeys[k]); /* Default xterm sequences. */ if (k < sizeof special_strings_normal / sizeof(char *)) return (special_strings_normal[k]); return (NULL); } #include "teken_state.h" varnish-7.5.0/bin/varnishtest/teken.h000066400000000000000000000143511457605730600176170ustar00rootroot00000000000000/*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2008-2009 Ed Schouten * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD: head/sys/teken/teken.h 333669 2018-05-16 09:01:02Z dumbbell $ */ #ifndef _TEKEN_H_ #define _TEKEN_H_ #include /* * libteken: terminal emulation library. * * This library converts an UTF-8 stream of bytes to terminal drawing * commands. */ typedef uint32_t teken_char_t; typedef unsigned short teken_unit_t; typedef unsigned char teken_format_t; #define TF_BOLD 0x01 /* Bold character. */ #define TF_UNDERLINE 0x02 /* Underline character. */ #define TF_BLINK 0x04 /* Blinking character. */ #define TF_REVERSE 0x08 /* Reverse rendered character. */ #define TF_CJK_RIGHT 0x10 /* Right-hand side of CJK character. */ typedef unsigned char teken_color_t; #define TC_BLACK 0 #define TC_RED 1 #define TC_GREEN 2 #define TC_BROWN 3 #define TC_BLUE 4 #define TC_MAGENTA 5 #define TC_CYAN 6 #define TC_WHITE 7 #define TC_NCOLORS 8 #define TC_LIGHT 8 /* ORed with the others. */ typedef struct { teken_unit_t tp_row; teken_unit_t tp_col; } teken_pos_t; typedef struct { teken_pos_t tr_begin; teken_pos_t tr_end; } teken_rect_t; typedef struct { teken_format_t ta_format; teken_color_t ta_fgcolor; teken_color_t ta_bgcolor; } teken_attr_t; typedef struct { teken_unit_t ts_begin; teken_unit_t ts_end; } teken_span_t; typedef struct __teken teken_t; typedef void teken_state_t(teken_t *, teken_char_t); /* * Drawing routines supplied by the user. */ typedef void tf_bell_t(void *); typedef void tf_cursor_t(void *, const teken_pos_t *); typedef void tf_putchar_t(void *, const teken_pos_t *, teken_char_t, const teken_attr_t *); typedef void tf_fill_t(void *, const teken_rect_t *, teken_char_t, const teken_attr_t *); typedef void tf_copy_t(void *, const teken_rect_t *, const teken_pos_t *); typedef void tf_pre_input_t(void *); typedef void tf_post_input_t(void *); typedef void tf_param_t(void *, int, unsigned int); #define TP_SHOWCURSOR 0 #define TP_KEYPADAPP 1 #define TP_AUTOREPEAT 2 #define TP_SWITCHVT 3 #define TP_132COLS 4 #define TP_SETBELLPD 5 #define TP_SETBELLPD_PITCH(pd) ((pd) >> 16) #define TP_SETBELLPD_DURATION(pd) ((pd) & 0xffff) #define TP_MOUSE 6 #define TP_SETBORDER 7 #define TP_SETLOCALCURSOR 8 #define TP_SETGLOBALCURSOR 9 typedef void tf_respond_t(void *, const void *, size_t); typedef struct { tf_bell_t *tf_bell; tf_cursor_t *tf_cursor; tf_putchar_t *tf_putchar; tf_fill_t *tf_fill; tf_copy_t *tf_copy; tf_pre_input_t *tf_pre_input; tf_post_input_t *tf_post_input; tf_param_t *tf_param; tf_respond_t *tf_respond; } teken_funcs_t; typedef teken_char_t teken_scs_t(const teken_t *, teken_char_t); /* * Terminal state. */ struct __teken { const teken_funcs_t *t_funcs; void *t_softc; teken_state_t *t_nextstate; unsigned int t_stateflags; #define T_NUMSIZE 8 unsigned int t_nums[T_NUMSIZE]; unsigned int t_curnum; teken_pos_t t_cursor; teken_attr_t t_curattr; teken_pos_t t_saved_cursor; teken_attr_t t_saved_curattr; teken_attr_t t_defattr; teken_pos_t t_winsize; /* For DECSTBM. */ teken_span_t t_scrollreg; /* For DECOM. 
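The origin region either matches the scrolling region (origin mode set) or covers the whole window.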
*/ teken_span_t t_originreg; #define T_NUMCOL 160 unsigned int t_tabstops[T_NUMCOL / (sizeof(unsigned int) * 8)]; unsigned int t_utf8_left; teken_char_t t_utf8_partial; teken_char_t t_last; unsigned int t_curscs; teken_scs_t *t_saved_curscs; teken_scs_t *t_scs[2]; }; /* Initialize teken structure. */ void teken_init(teken_t *, const teken_funcs_t *, void *); /* Deliver character input. */ void teken_input(teken_t *, const void *, size_t); /* Get/set teken attributes. */ const teken_pos_t *teken_get_cursor(const teken_t *); const teken_attr_t *teken_get_curattr(const teken_t *); const teken_attr_t *teken_get_defattr(const teken_t *); void teken_get_defattr_cons25(const teken_t *, int *, int *); const teken_pos_t *teken_get_winsize(const teken_t *); void teken_set_cursor(teken_t *, const teken_pos_t *); void teken_set_curattr(teken_t *, const teken_attr_t *); void teken_set_defattr(teken_t *, const teken_attr_t *); void teken_set_winsize(teken_t *, const teken_pos_t *); void teken_set_winsize_noreset(teken_t *, const teken_pos_t *); /* Key input escape sequences. */ #define TKEY_UP 0x00 #define TKEY_DOWN 0x01 #define TKEY_LEFT 0x02 #define TKEY_RIGHT 0x03 #define TKEY_HOME 0x04 #define TKEY_END 0x05 #define TKEY_INSERT 0x06 #define TKEY_DELETE 0x07 #define TKEY_PAGE_UP 0x08 #define TKEY_PAGE_DOWN 0x09 #define TKEY_F1 0x0a #define TKEY_F2 0x0b #define TKEY_F3 0x0c #define TKEY_F4 0x0d #define TKEY_F5 0x0e #define TKEY_F6 0x0f #define TKEY_F7 0x10 #define TKEY_F8 0x11 #define TKEY_F9 0x12 #define TKEY_F10 0x13 #define TKEY_F11 0x14 #define TKEY_F12 0x15 const char *teken_get_sequence(const teken_t *, unsigned int); /* Legacy features. */ void teken_set_8bit(teken_t *); void teken_set_cons25(teken_t *); /* Color conversion. */ teken_color_t teken_256to16(teken_color_t); teken_color_t teken_256to8(teken_color_t); #endif /* !_TEKEN_H_ */ varnish-7.5.0/bin/varnishtest/teken_scs.h000066400000000000000000000054141457605730600204670ustar00rootroot00000000000000/*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2009 Ed Schouten * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * $FreeBSD: head/sys/teken/teken_scs.h 332297 2018-04-08 19:23:50Z phk $ */ static inline teken_char_t teken_scs_process(const teken_t *t, teken_char_t c) { return (t->t_scs[t->t_curscs](t, c)); } /* Unicode points for VT100 box drawing. */ static const uint16_t teken_boxdrawing_unicode[31] = { 0x25c6, 0x2592, 0x2409, 0x240c, 0x240d, 0x240a, 0x00b0, 0x00b1, 0x2424, 0x240b, 0x2518, 0x2510, 0x250c, 0x2514, 0x253c, 0x23ba, 0x23bb, 0x2500, 0x23bc, 0x23bd, 0x251c, 0x2524, 0x2534, 0x252c, 0x2502, 0x2264, 0x2265, 0x03c0, 0x2260, 0x00a3, 0x00b7 }; /* ASCII points for VT100 box drawing. */ static const uint8_t teken_boxdrawing_8bit[31] = { '?', '?', 'H', 'F', 'C', 'L', '?', '?', 'N', 'V', '+', '+', '+', '+', '+', '-', '-', '-', '-', '-', '+', '+', '+', '+', '|', '?', '?', '?', '?', '?', '?', }; static teken_char_t teken_scs_special_graphics(const teken_t *t, teken_char_t c) { /* Box drawing. */ if (c >= '`' && c <= '~') return (t->t_stateflags & TS_8BIT ? teken_boxdrawing_8bit[c - '`'] : teken_boxdrawing_unicode[c - '`']); return (c); } static teken_char_t teken_scs_uk_national(const teken_t *t, teken_char_t c) { /* Pound sign. */ if (c == '#') return (t->t_stateflags & TS_8BIT ? 0x9c : 0xa3); return (c); } static teken_char_t teken_scs_us_ascii(const teken_t *t, teken_char_t c) { /* No processing. */ (void)t; return (c); } varnish-7.5.0/bin/varnishtest/teken_subr.h000066400000000000000000000750211457605730600206530ustar00rootroot00000000000000/*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2008-2009 Ed Schouten * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * $FreeBSD: head/sys/teken/teken_subr.h 333995 2018-05-21 20:35:16Z dumbbell $ */ static void teken_subr_cursor_up(teken_t *, unsigned int); static void teken_subr_erase_line(const teken_t *, unsigned int); static void teken_subr_regular_character(teken_t *, teken_char_t); static void teken_subr_reset_to_initial_state(teken_t *); static void teken_subr_save_cursor(teken_t *); static inline int teken_tab_isset(const teken_t *t, unsigned int col) { unsigned int b, o; if (col >= T_NUMCOL) return ((col % 8) == 0); b = col / (sizeof(unsigned int) * 8); o = col % (sizeof(unsigned int) * 8); return (t->t_tabstops[b] & (1U << o)); } static inline void teken_tab_clear(teken_t *t, unsigned int col) { unsigned int b, o; if (col >= T_NUMCOL) return; b = col / (sizeof(unsigned int) * 8); o = col % (sizeof(unsigned int) * 8); t->t_tabstops[b] &= ~(1U << o); } static inline void teken_tab_set(teken_t *t, unsigned int col) { unsigned int b, o; if (col >= T_NUMCOL) return; b = col / (sizeof(unsigned int) * 8); o = col % (sizeof(unsigned int) * 8); t->t_tabstops[b] |= 1U << o; } static void teken_tab_default(teken_t *t) { unsigned int i; memset(t->t_tabstops, 0, T_NUMCOL / 8); for (i = 8; i < T_NUMCOL; i += 8) teken_tab_set(t, i); } static void teken_subr_do_scroll(const teken_t *t, int amount) { teken_rect_t tr; teken_pos_t tp; teken_assert(t->t_cursor.tp_row <= t->t_winsize.tp_row); teken_assert(t->t_scrollreg.ts_end <= t->t_winsize.tp_row); teken_assert(amount != 0); /* Copy existing data 1 line up. */ if (amount > 0) { /* Scroll down. */ /* Copy existing data up. */ if (t->t_scrollreg.ts_begin + amount < t->t_scrollreg.ts_end) { tr.tr_begin.tp_row = t->t_scrollreg.ts_begin + amount; tr.tr_begin.tp_col = 0; tr.tr_end.tp_row = t->t_scrollreg.ts_end; tr.tr_end.tp_col = t->t_winsize.tp_col; tp.tp_row = t->t_scrollreg.ts_begin; tp.tp_col = 0; teken_funcs_copy(t, &tr, &tp); tr.tr_begin.tp_row = t->t_scrollreg.ts_end - amount; } else { tr.tr_begin.tp_row = t->t_scrollreg.ts_begin; } /* Clear the last lines. */ tr.tr_begin.tp_col = 0; tr.tr_end.tp_row = t->t_scrollreg.ts_end; tr.tr_end.tp_col = t->t_winsize.tp_col; teken_funcs_fill(t, &tr, BLANK, &t->t_curattr); } else { /* Scroll up. */ amount = -amount; /* Copy existing data down. */ if (t->t_scrollreg.ts_begin + amount < t->t_scrollreg.ts_end) { tr.tr_begin.tp_row = t->t_scrollreg.ts_begin; tr.tr_begin.tp_col = 0; tr.tr_end.tp_row = t->t_scrollreg.ts_end - amount; tr.tr_end.tp_col = t->t_winsize.tp_col; tp.tp_row = t->t_scrollreg.ts_begin + amount; tp.tp_col = 0; teken_funcs_copy(t, &tr, &tp); tr.tr_end.tp_row = t->t_scrollreg.ts_begin + amount; } else { tr.tr_end.tp_row = t->t_scrollreg.ts_end; } /* Clear the first lines. */ tr.tr_begin.tp_row = t->t_scrollreg.ts_begin; tr.tr_begin.tp_col = 0; tr.tr_end.tp_col = t->t_winsize.tp_col; teken_funcs_fill(t, &tr, BLANK, &t->t_curattr); } } static ssize_t teken_subr_do_cpr(const teken_t *t, unsigned int cmd, char response[16]) { switch (cmd) { case 5: /* Operating status. */ strcpy(response, "0n"); return (2); case 6: { /* Cursor position. */ int len; len = snprintf(response, 16, "%u;%uR", (t->t_cursor.tp_row - t->t_originreg.ts_begin) + 1, t->t_cursor.tp_col + 1); if (len >= 16) return (-1); return (len); } case 15: /* Printer status. */ strcpy(response, "13n"); return (3); case 25: /* UDK status. */ strcpy(response, "20n"); return (3); case 26: /* Keyboard status. 
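A fixed reply is sent regardless of the actual keyboard.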
*/ strcpy(response, "27;1n"); return (5); default: teken_printf("Unknown DSR\n"); return (-1); } } static void teken_subr_alignment_test(teken_t *t) { teken_rect_t tr; t->t_cursor.tp_row = t->t_cursor.tp_col = 0; t->t_scrollreg.ts_begin = 0; t->t_scrollreg.ts_end = t->t_winsize.tp_row; t->t_originreg = t->t_scrollreg; t->t_stateflags &= ~(TS_WRAPPED|TS_ORIGIN); teken_funcs_cursor(t); tr.tr_begin.tp_row = 0; tr.tr_begin.tp_col = 0; tr.tr_end = t->t_winsize; teken_funcs_fill(t, &tr, 'E', &t->t_defattr); } static void teken_subr_backspace(teken_t *t) { if (t->t_stateflags & TS_CONS25) { if (t->t_cursor.tp_col == 0) { if (t->t_cursor.tp_row == t->t_originreg.ts_begin) return; t->t_cursor.tp_row--; t->t_cursor.tp_col = t->t_winsize.tp_col - 1; } else { t->t_cursor.tp_col--; } } else { if (t->t_cursor.tp_col == 0) return; t->t_cursor.tp_col--; t->t_stateflags &= ~TS_WRAPPED; } teken_funcs_cursor(t); } static void teken_subr_bell(const teken_t *t) { teken_funcs_bell(t); } static void teken_subr_carriage_return(teken_t *t) { t->t_cursor.tp_col = 0; t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); } static void teken_subr_cursor_backward(teken_t *t, unsigned int ncols) { if (ncols > t->t_cursor.tp_col) t->t_cursor.tp_col = 0; else t->t_cursor.tp_col -= ncols; t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); } static void teken_subr_cursor_backward_tabulation(teken_t *t, unsigned int ntabs) { do { /* Stop when we've reached the beginning of the line. */ if (t->t_cursor.tp_col == 0) break; t->t_cursor.tp_col--; /* Tab marker set. */ if (teken_tab_isset(t, t->t_cursor.tp_col)) ntabs--; } while (ntabs > 0); teken_funcs_cursor(t); } static void teken_subr_cursor_down(teken_t *t, unsigned int nrows) { if (t->t_cursor.tp_row + nrows >= t->t_scrollreg.ts_end) t->t_cursor.tp_row = t->t_scrollreg.ts_end - 1; else t->t_cursor.tp_row += nrows; t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); } static void teken_subr_cursor_forward(teken_t *t, unsigned int ncols) { if (t->t_cursor.tp_col + ncols >= t->t_winsize.tp_col) t->t_cursor.tp_col = t->t_winsize.tp_col - 1; else t->t_cursor.tp_col += ncols; t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); } static void teken_subr_cursor_forward_tabulation(teken_t *t, unsigned int ntabs) { do { /* Stop when we've reached the end of the line. */ if (t->t_cursor.tp_col == t->t_winsize.tp_col - 1) break; t->t_cursor.tp_col++; /* Tab marker set. */ if (teken_tab_isset(t, t->t_cursor.tp_col)) ntabs--; } while (ntabs > 0); teken_funcs_cursor(t); } static void teken_subr_cursor_next_line(teken_t *t, unsigned int ncols) { t->t_cursor.tp_col = 0; teken_subr_cursor_down(t, ncols); } static void teken_subr_cursor_position(teken_t *t, unsigned int row, unsigned int col) { row = (row - 1) + t->t_originreg.ts_begin; t->t_cursor.tp_row = row < t->t_originreg.ts_end ? row : t->t_originreg.ts_end - 1; col--; t->t_cursor.tp_col = col < t->t_winsize.tp_col ? 
col : t->t_winsize.tp_col - 1; t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); } static void teken_subr_cursor_position_report(const teken_t *t, unsigned int cmd) { char response[18] = "\x1B["; ssize_t len; len = teken_subr_do_cpr(t, cmd, response + 2); if (len < 0) return; teken_funcs_respond(t, response, len + 2); } static void teken_subr_cursor_previous_line(teken_t *t, unsigned int ncols) { t->t_cursor.tp_col = 0; teken_subr_cursor_up(t, ncols); } static void teken_subr_cursor_up(teken_t *t, unsigned int nrows) { if (t->t_scrollreg.ts_begin + nrows >= t->t_cursor.tp_row) t->t_cursor.tp_row = t->t_scrollreg.ts_begin; else t->t_cursor.tp_row -= nrows; t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); } static void teken_subr_set_cursor_style(teken_t *t, unsigned int style) { /* TODO */ (void)t; (void)style; /* * CSI Ps SP q * Set cursor style (DECSCUSR), VT520. * Ps = 0 -> blinking block. * Ps = 1 -> blinking block (default). * Ps = 2 -> steady block. * Ps = 3 -> blinking underline. * Ps = 4 -> steady underline. * Ps = 5 -> blinking bar (xterm). * Ps = 6 -> steady bar (xterm). */ } static void teken_subr_delete_character(const teken_t *t, unsigned int ncols) { teken_rect_t tr; tr.tr_begin.tp_row = t->t_cursor.tp_row; tr.tr_end.tp_row = t->t_cursor.tp_row + 1; tr.tr_end.tp_col = t->t_winsize.tp_col; if (t->t_cursor.tp_col + ncols >= t->t_winsize.tp_col) { tr.tr_begin.tp_col = t->t_cursor.tp_col; } else { /* Copy characters to the left. */ tr.tr_begin.tp_col = t->t_cursor.tp_col + ncols; teken_funcs_copy(t, &tr, &t->t_cursor); tr.tr_begin.tp_col = t->t_winsize.tp_col - ncols; } /* Blank trailing columns. */ teken_funcs_fill(t, &tr, BLANK, &t->t_curattr); } static void teken_subr_delete_line(const teken_t *t, unsigned int nrows) { teken_rect_t tr; /* Ignore if outside scrolling region. */ if (t->t_cursor.tp_row < t->t_scrollreg.ts_begin || t->t_cursor.tp_row >= t->t_scrollreg.ts_end) return; tr.tr_begin.tp_col = 0; tr.tr_end.tp_row = t->t_scrollreg.ts_end; tr.tr_end.tp_col = t->t_winsize.tp_col; if (t->t_cursor.tp_row + nrows >= t->t_scrollreg.ts_end) { tr.tr_begin.tp_row = t->t_cursor.tp_row; } else { teken_pos_t tp; /* Copy rows up. */ tr.tr_begin.tp_row = t->t_cursor.tp_row + nrows; tp.tp_row = t->t_cursor.tp_row; tp.tp_col = 0; teken_funcs_copy(t, &tr, &tp); tr.tr_begin.tp_row = t->t_scrollreg.ts_end - nrows; } /* Blank trailing rows. 
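The rows left vacant at the bottom of the scrolling region are filled with blanks in the current attribute.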
*/ teken_funcs_fill(t, &tr, BLANK, &t->t_curattr); } static void teken_subr_device_control_string(teken_t *t) { teken_printf("Unsupported device control string\n"); t->t_stateflags |= TS_INSTRING; } static void teken_subr_device_status_report(const teken_t *t, unsigned int cmd) { char response[19] = "\x1B[?"; ssize_t len; len = teken_subr_do_cpr(t, cmd, response + 3); if (len < 0) return; teken_funcs_respond(t, response, len + 3); } static void teken_subr_double_height_double_width_line_top(const teken_t *t) { (void)t; teken_printf("double height double width top\n"); } static void teken_subr_double_height_double_width_line_bottom(const teken_t *t) { (void)t; teken_printf("double height double width bottom\n"); } static void teken_subr_erase_character(const teken_t *t, unsigned int ncols) { teken_rect_t tr; tr.tr_begin = t->t_cursor; tr.tr_end.tp_row = t->t_cursor.tp_row + 1; if (t->t_cursor.tp_col + ncols >= t->t_winsize.tp_col) tr.tr_end.tp_col = t->t_winsize.tp_col; else tr.tr_end.tp_col = t->t_cursor.tp_col + ncols; teken_funcs_fill(t, &tr, BLANK, &t->t_curattr); } static void teken_subr_erase_display(const teken_t *t, unsigned int mode) { teken_rect_t r; r.tr_begin.tp_col = 0; r.tr_end.tp_col = t->t_winsize.tp_col; switch (mode) { case 1: /* Erase from the top to the cursor. */ teken_subr_erase_line(t, 1); /* Erase lines above. */ if (t->t_cursor.tp_row == 0) return; r.tr_begin.tp_row = 0; r.tr_end.tp_row = t->t_cursor.tp_row; break; case 2: /* Erase entire display. */ r.tr_begin.tp_row = 0; r.tr_end.tp_row = t->t_winsize.tp_row; break; default: /* Erase from cursor to the bottom. */ teken_subr_erase_line(t, 0); /* Erase lines below. */ if (t->t_cursor.tp_row == t->t_winsize.tp_row - 1) return; r.tr_begin.tp_row = t->t_cursor.tp_row + 1; r.tr_end.tp_row = t->t_winsize.tp_row; break; } teken_funcs_fill(t, &r, BLANK, &t->t_curattr); } static void teken_subr_erase_line(const teken_t *t, unsigned int mode) { teken_rect_t r; r.tr_begin.tp_row = t->t_cursor.tp_row; r.tr_end.tp_row = t->t_cursor.tp_row + 1; switch (mode) { case 1: /* Erase from the beginning of the line to the cursor. */ r.tr_begin.tp_col = 0; r.tr_end.tp_col = t->t_cursor.tp_col + 1; break; case 2: /* Erase entire line. */ r.tr_begin.tp_col = 0; r.tr_end.tp_col = t->t_winsize.tp_col; break; default: /* Erase from cursor to the end of the line. */ r.tr_begin.tp_col = t->t_cursor.tp_col; r.tr_end.tp_col = t->t_winsize.tp_col; break; } teken_funcs_fill(t, &r, BLANK, &t->t_curattr); } static void teken_subr_g0_scs_special_graphics(teken_t *t) { t->t_scs[0] = teken_scs_special_graphics; } static void teken_subr_g0_scs_uk_national(teken_t *t) { t->t_scs[0] = teken_scs_uk_national; } static void teken_subr_g0_scs_us_ascii(teken_t *t) { t->t_scs[0] = teken_scs_us_ascii; } static void teken_subr_g1_scs_special_graphics(teken_t *t) { t->t_scs[1] = teken_scs_special_graphics; } static void teken_subr_g1_scs_uk_national(teken_t *t) { t->t_scs[1] = teken_scs_uk_national; } static void teken_subr_g1_scs_us_ascii(teken_t *t) { t->t_scs[1] = teken_scs_us_ascii; } static void teken_subr_horizontal_position_absolute(teken_t *t, unsigned int col) { col--; t->t_cursor.tp_col = col < t->t_winsize.tp_col ? 
col : t->t_winsize.tp_col - 1; t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); } static void teken_subr_horizontal_tab(teken_t *t) { teken_subr_cursor_forward_tabulation(t, 1); } static void teken_subr_horizontal_tab_set(teken_t *t) { teken_tab_set(t, t->t_cursor.tp_col); } static void teken_subr_index(teken_t *t) { if (t->t_cursor.tp_row < t->t_scrollreg.ts_end - 1) { t->t_cursor.tp_row++; t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); } else { teken_subr_do_scroll(t, 1); } } static void teken_subr_insert_character(const teken_t *t, unsigned int ncols) { teken_rect_t tr; tr.tr_begin = t->t_cursor; tr.tr_end.tp_row = t->t_cursor.tp_row + 1; if (t->t_cursor.tp_col + ncols >= t->t_winsize.tp_col) { tr.tr_end.tp_col = t->t_winsize.tp_col; } else { teken_pos_t tp; /* Copy characters to the right. */ tr.tr_end.tp_col = t->t_winsize.tp_col - ncols; tp.tp_row = t->t_cursor.tp_row; tp.tp_col = t->t_cursor.tp_col + ncols; teken_funcs_copy(t, &tr, &tp); tr.tr_end.tp_col = t->t_cursor.tp_col + ncols; } /* Blank current location. */ teken_funcs_fill(t, &tr, BLANK, &t->t_curattr); } static void teken_subr_insert_line(const teken_t *t, unsigned int nrows) { teken_rect_t tr; /* Ignore if outside scrolling region. */ if (t->t_cursor.tp_row < t->t_scrollreg.ts_begin || t->t_cursor.tp_row >= t->t_scrollreg.ts_end) return; tr.tr_begin.tp_row = t->t_cursor.tp_row; tr.tr_begin.tp_col = 0; tr.tr_end.tp_col = t->t_winsize.tp_col; if (t->t_cursor.tp_row + nrows >= t->t_scrollreg.ts_end) { tr.tr_end.tp_row = t->t_scrollreg.ts_end; } else { teken_pos_t tp; /* Copy lines down. */ tr.tr_end.tp_row = t->t_scrollreg.ts_end - nrows; tp.tp_row = t->t_cursor.tp_row + nrows; tp.tp_col = 0; teken_funcs_copy(t, &tr, &tp); tr.tr_end.tp_row = t->t_cursor.tp_row + nrows; } /* Blank current location. */ teken_funcs_fill(t, &tr, BLANK, &t->t_curattr); } static void teken_subr_keypad_application_mode(const teken_t *t) { teken_funcs_param(t, TP_KEYPADAPP, 1); } static void teken_subr_keypad_numeric_mode(const teken_t *t) { teken_funcs_param(t, TP_KEYPADAPP, 0); } static void teken_subr_newline(teken_t *t) { t->t_cursor.tp_row++; if (t->t_cursor.tp_row >= t->t_scrollreg.ts_end) { teken_subr_do_scroll(t, 1); t->t_cursor.tp_row = t->t_scrollreg.ts_end - 1; } t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); } static void teken_subr_newpage(teken_t *t) { if (t->t_stateflags & TS_CONS25) { teken_rect_t tr; /* Clear screen. */ tr.tr_begin.tp_row = t->t_originreg.ts_begin; tr.tr_begin.tp_col = 0; tr.tr_end.tp_row = t->t_originreg.ts_end; tr.tr_end.tp_col = t->t_winsize.tp_col; teken_funcs_fill(t, &tr, BLANK, &t->t_curattr); /* Cursor at top left. 
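That is, the first row of the origin region, column zero.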
*/ t->t_cursor.tp_row = t->t_originreg.ts_begin; t->t_cursor.tp_col = 0; t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); } else { teken_subr_newline(t); } } static void teken_subr_next_line(teken_t *t) { t->t_cursor.tp_col = 0; teken_subr_newline(t); } static void teken_subr_operating_system_command(teken_t *t) { teken_printf("Unsupported operating system command\n"); t->t_stateflags |= TS_INSTRING; } static void teken_subr_pan_down(const teken_t *t, unsigned int nrows) { teken_subr_do_scroll(t, (int)nrows); } static void teken_subr_pan_up(const teken_t *t, unsigned int nrows) { teken_subr_do_scroll(t, -(int)nrows); } static void teken_subr_primary_device_attributes(const teken_t *t, unsigned int request) { if (request == 0) { const char response[] = "\x1B[?1;2c"; teken_funcs_respond(t, response, sizeof response - 1); } else { teken_printf("Unknown DA1\n"); } } static void teken_subr_do_putchar(teken_t *t, const teken_pos_t *tp, teken_char_t c, int width) { t->t_last = c; if (t->t_stateflags & TS_INSERT && tp->tp_col < t->t_winsize.tp_col - width) { teken_rect_t ctr; teken_pos_t ctp; /* Insert mode. Move existing characters to the right. */ ctr.tr_begin = *tp; ctr.tr_end.tp_row = tp->tp_row + 1; ctr.tr_end.tp_col = t->t_winsize.tp_col - width; ctp.tp_row = tp->tp_row; ctp.tp_col = tp->tp_col + width; teken_funcs_copy(t, &ctr, &ctp); } teken_funcs_putchar(t, tp, c, &t->t_curattr); if (width == 2 && tp->tp_col + 1 < t->t_winsize.tp_col) { teken_pos_t tp2; teken_attr_t attr; /* Print second half of CJK fullwidth character. */ tp2.tp_row = tp->tp_row; tp2.tp_col = tp->tp_col + 1; attr = t->t_curattr; attr.ta_format |= TF_CJK_RIGHT; teken_funcs_putchar(t, &tp2, c, &attr); } } static void teken_subr_regular_character(teken_t *t, teken_char_t c) { int width; if (t->t_stateflags & TS_8BIT) { if (!(t->t_stateflags & TS_CONS25) && (c <= 0x1b || c == 0x7f)) return; c = teken_scs_process(t, c); width = 1; } else { c = teken_scs_process(t, c); width = teken_wcwidth(c); /* XXX: Don't process zero-width characters yet. */ if (width <= 0) return; } if (t->t_stateflags & TS_CONS25) { teken_subr_do_putchar(t, &t->t_cursor, c, width); t->t_cursor.tp_col += width; if (t->t_cursor.tp_col >= t->t_winsize.tp_col) { if (t->t_cursor.tp_row == t->t_scrollreg.ts_end - 1) { /* Perform scrolling. */ teken_subr_do_scroll(t, 1); } else { /* No scrolling needed. */ if (t->t_cursor.tp_row < t->t_winsize.tp_row - 1) t->t_cursor.tp_row++; } t->t_cursor.tp_col = 0; } } else if (t->t_stateflags & TS_AUTOWRAP && ((t->t_stateflags & TS_WRAPPED && t->t_cursor.tp_col + 1 == t->t_winsize.tp_col) || t->t_cursor.tp_col + width > t->t_winsize.tp_col)) { teken_pos_t tp; /* * Perform line wrapping, if: * - Autowrapping is enabled, and * - We're in the wrapped state at the last column, or * - The character to be printed does not fit anymore. */ if (t->t_cursor.tp_row == t->t_scrollreg.ts_end - 1) { /* Perform scrolling. */ teken_subr_do_scroll(t, 1); tp.tp_row = t->t_scrollreg.ts_end - 1; } else { /* No scrolling needed. */ tp.tp_row = t->t_cursor.tp_row + 1; if (tp.tp_row == t->t_winsize.tp_row) { /* * Corner case: regular character * outside scrolling region, but at the * bottom of the screen. */ teken_subr_do_putchar(t, &t->t_cursor, c, width); return; } } tp.tp_col = 0; teken_subr_do_putchar(t, &tp, c, width); t->t_cursor.tp_row = tp.tp_row; t->t_cursor.tp_col = width; t->t_stateflags &= ~TS_WRAPPED; } else { /* No line wrapping needed. 
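Print at the cursor and advance; reaching the last column only sets TS_WRAPPED instead of moving the cursor past the right edge.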
*/ teken_subr_do_putchar(t, &t->t_cursor, c, width); t->t_cursor.tp_col += width; if (t->t_cursor.tp_col >= t->t_winsize.tp_col) { t->t_stateflags |= TS_WRAPPED; t->t_cursor.tp_col = t->t_winsize.tp_col - 1; } else { t->t_stateflags &= ~TS_WRAPPED; } } teken_funcs_cursor(t); } static void teken_subr_reset_dec_mode(teken_t *t, unsigned int cmd) { switch (cmd) { case 1: /* Cursor keys mode. */ t->t_stateflags &= ~TS_CURSORKEYS; break; case 2: /* DECANM: ANSI/VT52 mode. */ teken_printf("DECRST VT52\n"); break; case 3: /* 132 column mode. */ teken_funcs_param(t, TP_132COLS, 0); teken_subr_reset_to_initial_state(t); break; case 5: /* Inverse video. */ teken_printf("DECRST inverse video\n"); break; case 6: /* Origin mode. */ t->t_stateflags &= ~TS_ORIGIN; t->t_originreg.ts_begin = 0; t->t_originreg.ts_end = t->t_winsize.tp_row; t->t_cursor.tp_row = t->t_cursor.tp_col = 0; t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); break; case 7: /* Autowrap mode. */ t->t_stateflags &= ~TS_AUTOWRAP; break; case 8: /* Autorepeat mode. */ teken_funcs_param(t, TP_AUTOREPEAT, 0); break; case 25: /* Hide cursor. */ teken_funcs_param(t, TP_SHOWCURSOR, 0); break; case 40: /* Disallow 132 columns. */ teken_printf("DECRST allow 132\n"); break; case 45: /* Disable reverse wraparound. */ teken_printf("DECRST reverse wraparound\n"); break; case 47: /* Switch to alternate buffer. */ teken_printf("Switch to alternate buffer\n"); break; case 1000: /* Mouse input. */ teken_funcs_param(t, TP_MOUSE, 0); break; default: teken_printf("Unknown DECRST: %u\n", cmd); } } static void teken_subr_reset_mode(teken_t *t, unsigned int cmd) { switch (cmd) { case 4: t->t_stateflags &= ~TS_INSERT; break; default: teken_printf("Unknown reset mode: %u\n", cmd); } } static void teken_subr_do_resize(teken_t *t) { t->t_scrollreg.ts_begin = 0; t->t_scrollreg.ts_end = t->t_winsize.tp_row; t->t_originreg = t->t_scrollreg; } static void teken_subr_do_reset(teken_t *t) { t->t_curattr = t->t_defattr; t->t_cursor.tp_row = t->t_cursor.tp_col = 0; t->t_scrollreg.ts_begin = 0; t->t_scrollreg.ts_end = t->t_winsize.tp_row; t->t_originreg = t->t_scrollreg; t->t_stateflags &= TS_8BIT|TS_CONS25; t->t_stateflags |= TS_AUTOWRAP; t->t_scs[0] = teken_scs_us_ascii; t->t_scs[1] = teken_scs_us_ascii; t->t_curscs = 0; teken_subr_save_cursor(t); teken_tab_default(t); } static void teken_subr_reset_to_initial_state(teken_t *t) { teken_subr_do_reset(t); teken_subr_erase_display(t, 2); teken_funcs_param(t, TP_SHOWCURSOR, 1); teken_funcs_cursor(t); } static void teken_subr_restore_cursor(teken_t *t) { t->t_cursor = t->t_saved_cursor; t->t_curattr = t->t_saved_curattr; t->t_scs[t->t_curscs] = t->t_saved_curscs; t->t_stateflags &= ~TS_WRAPPED; /* Get out of origin mode when the cursor is moved outside. 
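In that case the origin region is widened again to cover the whole window.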
*/ if (t->t_cursor.tp_row < t->t_originreg.ts_begin || t->t_cursor.tp_row >= t->t_originreg.ts_end) { t->t_stateflags &= ~TS_ORIGIN; t->t_originreg.ts_begin = 0; t->t_originreg.ts_end = t->t_winsize.tp_row; } teken_funcs_cursor(t); } static void teken_subr_reverse_index(teken_t *t) { if (t->t_cursor.tp_row > t->t_scrollreg.ts_begin) { t->t_cursor.tp_row--; t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); } else { teken_subr_do_scroll(t, -1); } } static void teken_subr_save_cursor(teken_t *t) { t->t_saved_cursor = t->t_cursor; t->t_saved_curattr = t->t_curattr; t->t_saved_curscs = t->t_scs[t->t_curscs]; } static void teken_subr_secondary_device_attributes(const teken_t *t, unsigned int request) { if (request == 0) { const char response[] = "\x1B[>0;10;0c"; teken_funcs_respond(t, response, sizeof response - 1); } else { teken_printf("Unknown DA2\n"); } } static void teken_subr_set_dec_mode(teken_t *t, unsigned int cmd) { switch (cmd) { case 1: /* Cursor keys mode. */ t->t_stateflags |= TS_CURSORKEYS; break; case 2: /* DECANM: ANSI/VT52 mode. */ teken_printf("DECSET VT52\n"); break; case 3: /* 132 column mode. */ teken_funcs_param(t, TP_132COLS, 1); teken_subr_reset_to_initial_state(t); break; case 5: /* Inverse video. */ teken_printf("DECSET inverse video\n"); break; case 6: /* Origin mode. */ t->t_stateflags |= TS_ORIGIN; t->t_originreg = t->t_scrollreg; t->t_cursor.tp_row = t->t_scrollreg.ts_begin; t->t_cursor.tp_col = 0; t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); break; case 7: /* Autowrap mode. */ t->t_stateflags |= TS_AUTOWRAP; break; case 8: /* Autorepeat mode. */ teken_funcs_param(t, TP_AUTOREPEAT, 1); break; case 25: /* Display cursor. */ teken_funcs_param(t, TP_SHOWCURSOR, 1); break; case 40: /* Allow 132 columns. */ teken_printf("DECSET allow 132\n"); break; case 45: /* Enable reverse wraparound. */ teken_printf("DECSET reverse wraparound\n"); break; case 47: /* Switch to alternate buffer. */ teken_printf("Switch away from alternate buffer\n"); break; case 1000: /* Mouse input. */ teken_funcs_param(t, TP_MOUSE, 1); break; default: teken_printf("Unknown DECSET: %u\n", cmd); } } static void teken_subr_set_mode(teken_t *t, unsigned int cmd) { switch (cmd) { case 4: teken_printf("Insert mode\n"); t->t_stateflags |= TS_INSERT; break; default: teken_printf("Unknown set mode: %u\n", cmd); } } static void teken_subr_set_graphic_rendition(teken_t *t, unsigned int ncmds, const unsigned int cmds[]) { unsigned int i, n; /* No attributes means reset. */ if (ncmds == 0) { t->t_curattr = t->t_defattr; return; } for (i = 0; i < ncmds; i++) { n = cmds[i]; switch (n) { case 0: /* Reset. */ t->t_curattr = t->t_defattr; break; case 1: /* Bold. */ t->t_curattr.ta_format |= TF_BOLD; break; case 4: /* Underline. */ t->t_curattr.ta_format |= TF_UNDERLINE; break; case 5: /* Blink. */ t->t_curattr.ta_format |= TF_BLINK; break; case 7: /* Reverse. */ t->t_curattr.ta_format |= TF_REVERSE; break; case 22: /* Remove bold. */ t->t_curattr.ta_format &= ~TF_BOLD; break; case 24: /* Remove underline. */ t->t_curattr.ta_format &= ~TF_UNDERLINE; break; case 25: /* Remove blink. */ t->t_curattr.ta_format &= ~TF_BLINK; break; case 27: /* Remove reverse. 
*/ t->t_curattr.ta_format &= ~TF_REVERSE; break; case 30: /* Set foreground color: black */ case 31: /* Set foreground color: red */ case 32: /* Set foreground color: green */ case 33: /* Set foreground color: brown */ case 34: /* Set foreground color: blue */ case 35: /* Set foreground color: magenta */ case 36: /* Set foreground color: cyan */ case 37: /* Set foreground color: white */ t->t_curattr.ta_fgcolor = n - 30; break; case 38: /* Set foreground color: 256 color mode */ if (i + 2 >= ncmds || cmds[i + 1] != 5) continue; t->t_curattr.ta_fgcolor = cmds[i + 2]; i += 2; break; case 39: /* Set default foreground color. */ t->t_curattr.ta_fgcolor = t->t_defattr.ta_fgcolor; break; case 40: /* Set background color: black */ case 41: /* Set background color: red */ case 42: /* Set background color: green */ case 43: /* Set background color: brown */ case 44: /* Set background color: blue */ case 45: /* Set background color: magenta */ case 46: /* Set background color: cyan */ case 47: /* Set background color: white */ t->t_curattr.ta_bgcolor = n - 40; break; case 48: /* Set background color: 256 color mode */ if (i + 2 >= ncmds || cmds[i + 1] != 5) continue; t->t_curattr.ta_bgcolor = cmds[i + 2]; i += 2; break; case 49: /* Set default background color. */ t->t_curattr.ta_bgcolor = t->t_defattr.ta_bgcolor; break; case 90: /* Set bright foreground color: black */ case 91: /* Set bright foreground color: red */ case 92: /* Set bright foreground color: green */ case 93: /* Set bright foreground color: brown */ case 94: /* Set bright foreground color: blue */ case 95: /* Set bright foreground color: magenta */ case 96: /* Set bright foreground color: cyan */ case 97: /* Set bright foreground color: white */ t->t_curattr.ta_fgcolor = (n - 90) + 8; break; case 100: /* Set bright background color: black */ case 101: /* Set bright background color: red */ case 102: /* Set bright background color: green */ case 103: /* Set bright background color: brown */ case 104: /* Set bright background color: blue */ case 105: /* Set bright background color: magenta */ case 106: /* Set bright background color: cyan */ case 107: /* Set bright background color: white */ t->t_curattr.ta_bgcolor = (n - 100) + 8; break; default: teken_printf("unsupported attribute %u\n", n); } } } static void teken_subr_set_top_and_bottom_margins(teken_t *t, unsigned int top, unsigned int bottom) { /* Adjust top row number. */ if (top > 0) top--; /* Adjust bottom row number. */ if (bottom == 0 || bottom > t->t_winsize.tp_row) bottom = t->t_winsize.tp_row; /* Invalid arguments. */ if (top >= bottom - 1) { top = 0; bottom = t->t_winsize.tp_row; } /* Apply scrolling region. */ t->t_scrollreg.ts_begin = top; t->t_scrollreg.ts_end = bottom; if (t->t_stateflags & TS_ORIGIN) t->t_originreg = t->t_scrollreg; /* Home cursor to the top left of the scrolling region. */ t->t_cursor.tp_row = t->t_originreg.ts_begin; t->t_cursor.tp_col = 0; t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); } static void teken_subr_single_height_double_width_line(const teken_t *t) { (void)t; teken_printf("single height double width???\n"); } static void teken_subr_single_height_single_width_line(const teken_t *t) { (void)t; teken_printf("single height single width???\n"); } static void teken_subr_string_terminator(const teken_t *t) { (void)t; /* * Strings are already terminated in teken_input_char() when ^[ * is inserted. 
*/ } static void teken_subr_tab_clear(teken_t *t, unsigned int cmd) { switch (cmd) { case 0: teken_tab_clear(t, t->t_cursor.tp_col); break; case 3: memset(t->t_tabstops, 0, T_NUMCOL / 8); break; default: break; } } static void teken_subr_vertical_position_absolute(teken_t *t, unsigned int row) { row = (row - 1) + t->t_originreg.ts_begin; t->t_cursor.tp_row = row < t->t_originreg.ts_end ? row : t->t_originreg.ts_end - 1; t->t_stateflags &= ~TS_WRAPPED; teken_funcs_cursor(t); } static void teken_subr_repeat_last_graphic_char(teken_t *t, unsigned int rpts) { for (; t->t_last != 0 && rpts > 0; rpts--) teken_subr_regular_character(t, t->t_last); } varnish-7.5.0/bin/varnishtest/teken_subr_compat.h000066400000000000000000000104171457605730600222140ustar00rootroot00000000000000/*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2008-2009 Ed Schouten * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD: head/sys/teken/teken_subr_compat.h 332297 2018-04-08 19:23:50Z phk $ */ static void teken_subr_cons25_set_border(const teken_t *t, unsigned int c) { teken_funcs_param(t, TP_SETBORDER, c); } static void teken_subr_cons25_set_global_cursor_shape(const teken_t *t, unsigned int ncmds, const unsigned int cmds[]) { unsigned int code, i; /* * Pack the args to work around API deficiencies. This requires * knowing too much about the low level to be fully compatible. * Returning when ncmds > 3 is necessary and happens to be * compatible. Discarding high bits is necessary and happens to * be incompatible only for invalid args when ncmds == 3. 
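* The packed word carries the argument count in its lowest byte, followed by one byte per argument, least significant first.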
*/ if (ncmds > 3) return; code = 0; for (i = ncmds; i > 0; i--) code = (code << 8) | (cmds[i - 1] & 0xff); code = (code << 8) | ncmds; teken_funcs_param(t, TP_SETGLOBALCURSOR, code); } static void teken_subr_cons25_set_local_cursor_type(const teken_t *t, unsigned int type) { teken_funcs_param(t, TP_SETLOCALCURSOR, type); } static const teken_color_t cons25_colors[8] = { TC_BLACK, TC_BLUE, TC_GREEN, TC_CYAN, TC_RED, TC_MAGENTA, TC_BROWN, TC_WHITE }; static void teken_subr_cons25_set_default_background(teken_t *t, unsigned int c) { t->t_defattr.ta_bgcolor = cons25_colors[c % 8] | (c & 8); t->t_curattr.ta_bgcolor = cons25_colors[c % 8] | (c & 8); } static void teken_subr_cons25_set_default_foreground(teken_t *t, unsigned int c) { t->t_defattr.ta_fgcolor = cons25_colors[c % 8] | (c & 8); t->t_curattr.ta_fgcolor = cons25_colors[c % 8] | (c & 8); } static const teken_color_t cons25_revcolors[8] = { 0, 4, 2, 6, 1, 5, 3, 7 }; void teken_get_defattr_cons25(const teken_t *t, int *fg, int *bg) { *fg = cons25_revcolors[teken_256to8(t->t_defattr.ta_fgcolor)]; if (t->t_defattr.ta_format & TF_BOLD) *fg += 8; *bg = cons25_revcolors[teken_256to8(t->t_defattr.ta_bgcolor)]; } static void teken_subr_cons25_switch_virtual_terminal(const teken_t *t, unsigned int vt) { teken_funcs_param(t, TP_SWITCHVT, vt); } static void teken_subr_cons25_set_bell_pitch_duration(const teken_t *t, unsigned int pitch, unsigned int duration) { teken_funcs_param(t, TP_SETBELLPD, (pitch << 16) | (duration & 0xffff)); } static void teken_subr_cons25_set_graphic_rendition(teken_t *t, unsigned int cmd, unsigned int param) { (void)param; switch (cmd) { case 0: /* Reset. */ t->t_curattr = t->t_defattr; break; default: teken_printf("unsupported attribute %u\n", cmd); } } static void teken_subr_cons25_set_terminal_mode(teken_t *t, unsigned int mode) { switch (mode) { case 0: /* Switch terminal to xterm. */ t->t_stateflags &= ~TS_CONS25; break; case 1: /* Switch terminal to cons25. */ t->t_stateflags |= TS_CONS25; break; default: break; } } #if 0 static void teken_subr_vt52_decid(teken_t *t) { const char response[] = "\x1B/Z"; teken_funcs_respond(t, response, sizeof response - 1); } #endif varnish-7.5.0/bin/varnishtest/teken_wcwidth.h000066400000000000000000000122731457605730600213510ustar00rootroot00000000000000/* * Markus Kuhn -- 2007-05-26 (Unicode 5.0) * * Permission to use, copy, modify, and distribute this software * for any purpose and without fee is hereby granted. The author * disclaims all warranties with regard to this software. 
* * Latest version: http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c * * $FreeBSD: head/sys/teken/teken_wcwidth.h 332297 2018-04-08 19:23:50Z phk $ */ struct interval { teken_char_t first; teken_char_t last; }; /* auxiliary function for binary search in interval table */ static int bisearch(teken_char_t ucs, const struct interval *table, int max) { int min = 0; int mid; if (ucs < table[0].first || ucs > table[max].last) return 0; while (max >= min) { mid = (min + max) / 2; if (ucs > table[mid].last) min = mid + 1; else if (ucs < table[mid].first) max = mid - 1; else return 1; } return 0; } static int teken_wcwidth(teken_char_t ucs) { /* sorted list of non-overlapping intervals of non-spacing characters */ /* generated by "uniset +cat=Me +cat=Mn +cat=Cf -00AD +1160-11FF +200B c" */ static const struct interval combining[] = { { 0x0300, 0x036F }, { 0x0483, 0x0486 }, { 0x0488, 0x0489 }, { 0x0591, 0x05BD }, { 0x05BF, 0x05BF }, { 0x05C1, 0x05C2 }, { 0x05C4, 0x05C5 }, { 0x05C7, 0x05C7 }, { 0x0600, 0x0603 }, { 0x0610, 0x0615 }, { 0x064B, 0x065E }, { 0x0670, 0x0670 }, { 0x06D6, 0x06E4 }, { 0x06E7, 0x06E8 }, { 0x06EA, 0x06ED }, { 0x070F, 0x070F }, { 0x0711, 0x0711 }, { 0x0730, 0x074A }, { 0x07A6, 0x07B0 }, { 0x07EB, 0x07F3 }, { 0x0901, 0x0902 }, { 0x093C, 0x093C }, { 0x0941, 0x0948 }, { 0x094D, 0x094D }, { 0x0951, 0x0954 }, { 0x0962, 0x0963 }, { 0x0981, 0x0981 }, { 0x09BC, 0x09BC }, { 0x09C1, 0x09C4 }, { 0x09CD, 0x09CD }, { 0x09E2, 0x09E3 }, { 0x0A01, 0x0A02 }, { 0x0A3C, 0x0A3C }, { 0x0A41, 0x0A42 }, { 0x0A47, 0x0A48 }, { 0x0A4B, 0x0A4D }, { 0x0A70, 0x0A71 }, { 0x0A81, 0x0A82 }, { 0x0ABC, 0x0ABC }, { 0x0AC1, 0x0AC5 }, { 0x0AC7, 0x0AC8 }, { 0x0ACD, 0x0ACD }, { 0x0AE2, 0x0AE3 }, { 0x0B01, 0x0B01 }, { 0x0B3C, 0x0B3C }, { 0x0B3F, 0x0B3F }, { 0x0B41, 0x0B43 }, { 0x0B4D, 0x0B4D }, { 0x0B56, 0x0B56 }, { 0x0B82, 0x0B82 }, { 0x0BC0, 0x0BC0 }, { 0x0BCD, 0x0BCD }, { 0x0C3E, 0x0C40 }, { 0x0C46, 0x0C48 }, { 0x0C4A, 0x0C4D }, { 0x0C55, 0x0C56 }, { 0x0CBC, 0x0CBC }, { 0x0CBF, 0x0CBF }, { 0x0CC6, 0x0CC6 }, { 0x0CCC, 0x0CCD }, { 0x0CE2, 0x0CE3 }, { 0x0D41, 0x0D43 }, { 0x0D4D, 0x0D4D }, { 0x0DCA, 0x0DCA }, { 0x0DD2, 0x0DD4 }, { 0x0DD6, 0x0DD6 }, { 0x0E31, 0x0E31 }, { 0x0E34, 0x0E3A }, { 0x0E47, 0x0E4E }, { 0x0EB1, 0x0EB1 }, { 0x0EB4, 0x0EB9 }, { 0x0EBB, 0x0EBC }, { 0x0EC8, 0x0ECD }, { 0x0F18, 0x0F19 }, { 0x0F35, 0x0F35 }, { 0x0F37, 0x0F37 }, { 0x0F39, 0x0F39 }, { 0x0F71, 0x0F7E }, { 0x0F80, 0x0F84 }, { 0x0F86, 0x0F87 }, { 0x0F90, 0x0F97 }, { 0x0F99, 0x0FBC }, { 0x0FC6, 0x0FC6 }, { 0x102D, 0x1030 }, { 0x1032, 0x1032 }, { 0x1036, 0x1037 }, { 0x1039, 0x1039 }, { 0x1058, 0x1059 }, { 0x1160, 0x11FF }, { 0x135F, 0x135F }, { 0x1712, 0x1714 }, { 0x1732, 0x1734 }, { 0x1752, 0x1753 }, { 0x1772, 0x1773 }, { 0x17B4, 0x17B5 }, { 0x17B7, 0x17BD }, { 0x17C6, 0x17C6 }, { 0x17C9, 0x17D3 }, { 0x17DD, 0x17DD }, { 0x180B, 0x180D }, { 0x18A9, 0x18A9 }, { 0x1920, 0x1922 }, { 0x1927, 0x1928 }, { 0x1932, 0x1932 }, { 0x1939, 0x193B }, { 0x1A17, 0x1A18 }, { 0x1B00, 0x1B03 }, { 0x1B34, 0x1B34 }, { 0x1B36, 0x1B3A }, { 0x1B3C, 0x1B3C }, { 0x1B42, 0x1B42 }, { 0x1B6B, 0x1B73 }, { 0x1DC0, 0x1DCA }, { 0x1DFE, 0x1DFF }, { 0x200B, 0x200F }, { 0x202A, 0x202E }, { 0x2060, 0x2063 }, { 0x206A, 0x206F }, { 0x20D0, 0x20EF }, { 0x302A, 0x302F }, { 0x3099, 0x309A }, { 0xA806, 0xA806 }, { 0xA80B, 0xA80B }, { 0xA825, 0xA826 }, { 0xFB1E, 0xFB1E }, { 0xFE00, 0xFE0F }, { 0xFE20, 0xFE23 }, { 0xFEFF, 0xFEFF }, { 0xFFF9, 0xFFFB }, { 0x10A01, 0x10A03 }, { 0x10A05, 0x10A06 }, { 0x10A0C, 0x10A0F }, { 0x10A38, 0x10A3A }, { 0x10A3F, 0x10A3F }, { 0x1D167, 0x1D169 }, { 
0x1D173, 0x1D182 }, { 0x1D185, 0x1D18B }, { 0x1D1AA, 0x1D1AD }, { 0x1D242, 0x1D244 }, { 0xE0001, 0xE0001 }, { 0xE0020, 0xE007F }, { 0xE0100, 0xE01EF } }; /* test for 8-bit control characters */ if (ucs == 0) return 0; if (ucs < 32 || (ucs >= 0x7f && ucs < 0xa0)) return -1; /* binary search in table of non-spacing characters */ if (bisearch(ucs, combining, sizeof(combining) / sizeof(struct interval) - 1)) return 0; /* if we arrive here, ucs is not a combining or C0/C1 control character */ return 1 + (int)(ucs >= 0x1100 && (ucs <= 0x115f || /* Hangul Jamo init. consonants */ ucs == 0x2329 || ucs == 0x232a || (ucs >= 0x2e80 && ucs <= 0xa4cf && ucs != 0x303f) || /* CJK ... Yi */ (ucs >= 0xac00 && ucs <= 0xd7a3) || /* Hangul Syllables */ (ucs >= 0xf900 && ucs <= 0xfaff) || /* CJK Compatibility Ideographs */ (ucs >= 0xfe10 && ucs <= 0xfe19) || /* Vertical forms */ (ucs >= 0xfe30 && ucs <= 0xfe6f) || /* CJK Compatibility Forms */ (ucs >= 0xff00 && ucs <= 0xff60) || /* Fullwidth Forms */ (ucs >= 0xffe0 && ucs <= 0xffe6) || (ucs >= 0x20000 && ucs <= 0x2fffd) || (ucs >= 0x30000 && ucs <= 0x3fffd))); } varnish-7.5.0/bin/varnishtest/tests/000077500000000000000000000000001457605730600174765ustar00rootroot00000000000000varnish-7.5.0/bin/varnishtest/tests/README000066400000000000000000000027131457605730600203610ustar00rootroot00000000000000Test-scripts for varnishtest ============================ Naming scheme ------------- The intent is to be able to run all scripts in lexicographic order and get a sensible failure mode. This requires more basic tests to be earlier and more complex tests to be later in the test sequence, we do this with the prefix/id letter: [id]%05d.vtc id ~ ^a --> varnishtest(1) tests id ~ ^a02 --> HTTP2 id ~ ^b --> Basic functionality tests id ~ ^c --> Complex functionality tests id ~ ^d --> Director facility tests id ~ ^e --> ESI tests id ~ ^f --> Security related tests id ~ ^g --> GZIP tests id ~ ^h --> HAproxy tests id ~ ^i --> Interoperability and standards compliance id ~ ^j --> JAIL tests id ~ ^l --> VSL tests id ~ ^m --> VMOD facility, vmod_debug and vmod_vtc id ~ ^o --> prOxy protocol id ~ ^p --> Persistent tests id ~ ^r --> Regression tests, same number as ticket id ~ ^s --> Slow tests, expiry, grace etc. id ~ ^t --> Transport protocol tests id ~ ^t02 --> HTTP2 id ~ ^u --> Utilities and background processes id ~ ^v --> VCL tests: execute VRT functions id ~ ^x --> VEXT tests Coverage for individual VMODs is in "${top_srcdir}vmod/tests". Tests suffixed with .disabled are not executed automatically on 'make check'. The main reason for a test to live here is when the underlying problem isn't fixed yet, but a test case exists. This avoids breaking the rest of the 'make check' tests while a fix is being produced. 
varnish-7.5.0/bin/varnishtest/tests/a00000.vtc000066400000000000000000000116411457605730600210170ustar00rootroot00000000000000varnishtest "Test varnishtest itself" shell -exit 1 -expect {varnishtest [options]} {varnishtest -h} shell -exit 1 -match {-D.*Define macro} {varnishtest -h} shell { pwd echo 'notvarnishtest foo bar' > _.vtc echo 'shell "exit 9"' >> _.vtc } shell -exit 2 -expect {doesn't start with 'vtest' or 'varnishtes} { varnishtest -v _.vtc } shell -exit 77 -expect {0 tests failed, 1 tests skipped, 0 tests passed} { unset TMPDIR varnishtest -k _.vtc } # Test external macro-def with a a two-turtle test shell -expect {__=barf} { echo varnishtest foo > _.vtc printf 'shell {echo %c{foobar} > ${tmpdir}/__}' '$' >> _.vtc varnishtest -q -Dfoobar=barf _.vtc echo __=`cat __` } # Test a test failure shell -exit 2 -expect {TEST _.vtc FAILED} { echo varnishtest foo > _.vtc echo 'shell {false}' >> _.vtc exec varnishtest -v _.vtc || true } # Test a test skip shell -exit 77 -expect {TEST _.vtc skipped} { echo varnishtest foo > _.vtc echo 'feature cmd false' >> _.vtc exec varnishtest -v _.vtc || true } # Simple process tests process p1 "cat" -start process p1 -writeln "foo" process p1 -expect-text 2 1 foo process p1 -stop process p1 -wait shell "grep -q foo ${p1_out}" shell "test -f ${p1_err} -a ! -s ${p1_err}" process p2 -log "cat" -start process p2 -writeln "bar" process p2 -expect-text 2 1 bar process p2 -write "\x04" process p2 -wait shell "grep -q bar ${p2_out}" shell "test -f ${p2_err} -a ! -s ${p2_err}" process p3 -dump "cat" -start process p3 -writeln "baz" process p3 -expect-text 2 1 baz process p3 -kill KILL process p3 -wait shell "grep -q baz ${p3_out}" shell "test -f ${p3_err} -a ! -s ${p3_err}" process p4 -hexdump "cat" -start process p4 -writeln "b\001z" process p4 -expect-text 2 1 "b" process p4 -kill TERM process p4 -wait -screen_dump # Curses process tests process p5 "ps -lw | grep '[p][s]' ; tty ; echo @" -start process p5 -expect-text 0 0 {@} -screen_dump -wait process p6 "stty -a ; echo '*'" -start process p6 -expect-text 0 0 {*} -screen_dump -wait process p7 -hexdump {stty raw -echo; stty -a ; echo "*" ; cat} -start process p7 -expect-text 0 0 "*" -screen_dump process p7 -write "\x1b[H\x1b[2Jzzzzzzz" process p7 -write "\x0c1\x1b[79C2\x08>\x1b[25;1H3\x1b[25;80H" process p7 -write "\x1b[H\x1b[2J1\x1b[79C2\x08>\x1b[25;1H3\x1b[25;80H" process p7 -write "4\x08>\x1b[A\x1b[Cv\x1b[22A^\x1b[79D^\x1b[;2H<\n\n\n\n" process p7 -write "\n\n\n\n\n\n\n\n\x1b[B\x1b[11B\x08<\x1b[24;Hv\x1b[12;1H" process p7 -write "111111112222222333333\x0d\x0a111111112" process p7 -write "222222333333\x0d\x0a111111112222222333333 UTF8: " process p7 -writehex {c2 a2 20} process p7 -writehex {e2 82 ac 20} process p7 -writehex {f0 90 80 80 20} process p7 -writehex {f0 9f 90 b0 20} process p7 -writehex {f0 a0 80 80 20} process p7 -writehex {f0 b0 80 80 20} process p7 -write "\x1b[22;24;25;27;30;47;49;97;107m" process p7 -write "\x1b[22;24;25;27;30m" process p7 -write "\x1b[47;49;97;107m" process p7 -write "\x0d\x0a111111112222222333333\x0d\x0a\x1b[12" process p7 -write ";12H\x1b[K\x1b[13;12H\x1b[0K\x1b[14;12H\x1b[1K\x1b" process p7 -write "[15;12H\x1b[2K\x1b[3;1Hline3 <\x0d\x0a" process p7 -need-bytes 310 -expect-text 3 1 "line3 <" process p7 -expect-cursor 4 1 process p7 -expect-cursor 4 0 process p7 -expect-cursor 0 1 process p7 -screen-dump # Also exercise CONS25 mode process p7 -write "\x1b[=1T" process p7 -write "\x1b[=2T" process p7 -write "\x1b[8z" process p7 -write "\x1b[0x" process p7 -write "\x1b[=1A" 
process p7 -write "\x1b[=1;2B" process p7 -write "\x1b[=1;2;3C" process p7 -write "\x1b[=1;2;3;4C" process p7 -write "\x1b[=1F" process p7 -write "\x1b[=1G" process p7 -write "\x1b[=1S" process p7 -writehex {0c 08 40 0d 0a 08} process p7 -expect-text 1 1 "@" process p7 -expect-cursor 1 80 process p7 -writehex "0c 41 0e 42 0f" process p7 -expect-text 1 1 "A" process p7 -expect-text 0 0 "B" process p7 -write "\x1b[=0T" process p7 -writehex "0c 0a 0d 43 0a 08 08 0e 44 0f" process p7 -expect-text 3 1 "C" process p7 -expect-text 4 1 "D" process p7 -write "\x1b[2T" process p7 -expect-text 5 1 "C" process p7 -expect-text 6 1 "D" process p7 -write "\x1b[3S" process p7 -expect-text 3 1 "D" process p7 -write "\x1b[4;200H%" process p7 -expect-text 4 80 "%" process p7 -write "\x1b[7;7H\x09X\x09Y\x09Z\x1b[2ZW\x1b[2Ew\x1b[F*" process p7 -expect-text 7 17 "W" process p7 -expect-text 9 1 "w" process p7 -expect-text 8 1 "*" process p7 -write "\x1b[10;4HABCDEFGHIJKLMN\x1b(A#$%\x1b)A" process p7 -write "\x1b[8G\x1b[2X>" process p7 -expect-text 10 8 ">" process p7 -screen-dump # Test responses process p7 -write "\x1b[3;1HA\x1b[5n" process p7 -write "\x1b[4;1HB\x1b[6n" process p7 -write "\x1b[5;1HC\x1b[15n" process p7 -write "\x1b[6;1HD\x1b[25n" process p7 -write "\x1b[7;1HE\x1b[26n" process p7 -write "\x1b[8;1HF\x1b[?26n" process p7 -write "\x1b[9;1HG\x1bPfutfutfut\x01" process p7 -write "\x1b[10;1HH\x1b]futfutfut\x01" process p7 -write "\x1b[11;1HI\x1b[>c" process p7 -write "\x1b[24;1HW" process p7 -expect-text 24 1 "W" process p7 -screen-dump varnish-7.5.0/bin/varnishtest/tests/a00001.vtc000066400000000000000000000147341457605730600210260ustar00rootroot00000000000000vtest "Test Teken terminal emulator" feature cmd "vttest --version 2>&1 | grep -q Usage" process p4 {vttest} -ansi-response -start process p4 -expect-text 21 11 "Enter choice number (0 - 12):" process p4 -screen_dump # 1. Test of cursor movements process p4 -writehex "31 0d" process p4 -expect-text 14 61 "RETURN" process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 14 87 "RETURN" process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 22 7 "RETURN" process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 3 132 "i" process p4 -expect-text 22 7 "RETURN" process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 9 7 "RETURN" process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 4 1 "This is a correct sentence" process p4 -expect-text 20 7 "RETURN" process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 21 11 "Enter choice number (0 - 12):" process p4 -screen_dump # 2. Test of screen features process p4 -writehex "32 0d" process p4 -expect-text 8 1 "Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 5 1 "should look the same. 
Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 0 0 "This is 132 column mode, light background.Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 0 0 "This is 80 column mode, light background.Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 0 0 "This is 132 column mode, dark background.Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 0 0 "This is 80 column mode, dark background.Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 0 1 "Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 3 1 "Soft scroll down region [1..24] size 24 Line 28" process p4 -expect-text 1 1 "Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 0 1 "Jump scroll down region [12..13] size 2 Line 29" process p4 -expect-text 0 1 "Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 3 1 "Jump scroll down region [1..24] size 24 Line 28" process p4 -expect-text 1 1 "Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 23 0 "Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 1 0 "Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 23 1 "Dark background. Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 23 1 "Light background. Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 24 1 "Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 21 11 "Enter choice number (0 - 12):" process p4 -screen_dump # 4. Test of double-sized characters process p4 -writehex "34 0d" process p4 -expect-text 21 1 "This is not a double-width line" process p4 -expect-text 23 1 "Push " process p4 -screen_dump process p4 -writehex "0d" process p4 -expect-text 21 1 "This **is** a double-width line" process p4 -expect-text 23 1 "Push " process p4 -screen_dump process p4 -writehex "0d" process p4 -expect-text 21 1 "This is not a double-width line" process p4 -expect-text 23 1 "Push " process p4 -screen_dump process p4 -writehex "0d" process p4 -expect-text 21 1 "This **is** a double-width line" process p4 -expect-text 23 1 "Push " process p4 -screen_dump process p4 -writehex "0d" delay 2 process p4 -expect-text 23 41 "Push " process p4 -screen_dump process p4 -writehex "0d" delay 2 process p4 -expect-text 1 1 "Exactly half of the box should remain. Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 21 11 "Enter choice number (0 - 12):" process p4 -screen_dump # 8. Test of VT102 features (Insert/Delete Char/Line) process p4 -writehex "38 0d" process p4 -expect-text 4 1 "Screen accordion test (Insert & Delete Line). Push D" process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 2 45 "nothing more. Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 4 59 "*B'. Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 4 52 "'AB'. Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 5 1 "by one. Push E" process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 5 1 "by one. Push EEEEEEEEEEEEE " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 10 1 "Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 4 1 "Screen accordion test (Insert & Delete Line). 
Push D" process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 2 45 "nothing more. Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 4 59 "*B'. Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 4 52 "'AB'. Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 5 1 "by one. Push E" process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 5 59 "EEE " process p4 -expect-text 5 1 "by one. Push E" process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 10 1 "Push " process p4 -screen_dump process p4 -writehex 0d process p4 -expect-text 21 11 "Enter choice number (0 - 12):" process p4 -screen_dump # 11. Test non-VT100 (e.g., VT220, XTERM) terminals process p4 -writehex "31 31 0d" process p4 -expect-text 0 0 "Menu 11: Non-VT100 Tests" process p4 -writehex "37 0d" process p4 -expect-text 0 0 "Menu 11.7: Miscellaneous ISO-6429 (ECMA-48) Tests" process p4 -writehex "32 0d" process p4 -expect-text 0 0 "Push " process p4 -screen_dump process p4 -expect-text 20 1 "Test Repeat (REP)" process p4 -expect-text 1 1 " ++ " process p4 -expect-text 2 2 " ++ " process p4 -expect-text 17 17 " ++ " process p4 -expect-text 18 18 "*++*" process p4 -writehex "0d" process p4 -expect-text 0 0 "Menu 11.7: Miscellaneous ISO-6429 (ECMA-48) Tests" process p4 -writehex "30 0d" process p4 -expect-text 0 0 "Menu 11: Non-VT100 Tests" process p4 -writehex "30 0d" # 0. Exit process p4 -writehex "30 0d" process p4 -expect-text 12 30 "That's all, folks!" process p4 -screen_dump process p4 -wait varnish-7.5.0/bin/varnishtest/tests/a00002.vtc000066400000000000000000000017731457605730600210260ustar00rootroot00000000000000varnishtest "basic default HTTP transactions with expect and options" server s1 { rxreq expect req.method == PUT expect req.proto == HTTP/1.0 expect req.url == "/foo" txresp -proto HTTP/1.2 -status 201 -reason Foo } server s1 -start client c1 -connect ${s1_sock} { txreq -method PUT -proto HTTP/1.0 -url /foo rxresp expect resp.proto == HTTP/1.2 expect resp.status == 201 expect resp.reason == Foo expect resp.status -lt 202 expect resp.status -le 201 expect resp.status -eq 201 expect resp.status -ge 201 expect resp.status -gt 0xc8 } client c1 -run server s1 -wait # The same tests with unix domain sockets server s2 -listen "${tmpdir}/s2.sock" { rxreq expect req.method == PUT expect req.proto == HTTP/1.0 expect req.url == "/foo" txresp -proto HTTP/1.2 -status 201 -reason Foo } -start client c2 -connect ${s2_sock} { txreq -req PUT -proto HTTP/1.0 -url /foo rxresp expect resp.proto == HTTP/1.2 expect resp.status == 201 expect resp.reason == Foo } client c2 -run server s2 -wait varnish-7.5.0/bin/varnishtest/tests/a00003.vtc000066400000000000000000000030151457605730600210160ustar00rootroot00000000000000varnishtest "dual independent HTTP transactions" server s1 { rxreq expect req.method == PUT expect req.proto == HTTP/1.0 expect req.url == "/foo" txresp -proto HTTP/1.2 -status 201 -reason Foo } server s2 { rxreq expect req.method == GET expect req.proto == HTTP/1.1 expect req.url == "/" txresp } server s1 -start server s2 -start client c1 -connect ${s1_sock} { txreq -req PUT -proto HTTP/1.0 -url /foo rxresp expect resp.proto == HTTP/1.2 expect resp.status == 201 expect resp.reason == Foo } client c2 -connect ${s2_sock} { txreq rxresp expect resp.proto == HTTP/1.1 expect resp.status == 200 expect resp.reason == OK } client c1 -start client c2 -start client c1 -wait client c2 -wait server s1 -wait 
server s2 -wait # The same tests with Unix domain sockets server s3 -listen "${tmpdir}/s3.sock" { rxreq expect req.method == PUT expect req.proto == HTTP/1.0 expect req.url == "/foo" txresp -proto HTTP/1.2 -status 201 -reason Foo } server s4 -listen "${tmpdir}/s4.sock" { rxreq expect req.method == GET expect req.proto == HTTP/1.1 expect req.url == "/" txresp } server s3 -start server s4 -start client c3 -connect ${s3_sock} { txreq -req PUT -proto HTTP/1.0 -url /foo rxresp expect resp.proto == HTTP/1.2 expect resp.status == 201 expect resp.reason == Foo } client c4 -connect ${s4_sock} { txreq rxresp expect resp.proto == HTTP/1.1 expect resp.status == 200 expect resp.reason == OK } client c3 -start client c4 -start client c3 -wait client c4 -wait server s3 -wait server s4 -wait varnish-7.5.0/bin/varnishtest/tests/a00004.vtc000066400000000000000000000024631457605730600210250ustar00rootroot00000000000000varnishtest "dual shared server HTTP transactions" server s1 -repeat 2 { rxreq expect req.method == PUT expect req.proto == HTTP/1.0 expect req.url == "/foo" txresp -proto HTTP/1.2 -status 201 -reason Foo } server s1 -start client c1 -connect ${s1_sock} { txreq -req PUT -proto HTTP/1.0 -url /foo rxresp expect resp.proto == HTTP/1.2 expect resp.status == 201 expect resp.reason == Foo } client c2 -connect ${s1_sock} { txreq -req PUT -proto HTTP/1.0 -url /foo rxresp expect resp.proto == HTTP/1.2 expect resp.status == 201 expect resp.reason == Foo } client c1 -start client c2 -start client c1 -wait client c2 -wait server s1 -wait # The same tests with Unix domain sockets server s2 -repeat 2 -listen "${tmpdir}/s2.sock" { rxreq expect req.method == PUT expect req.proto == HTTP/1.0 expect req.url == "/foo" txresp -proto HTTP/1.2 -status 201 -reason Foo } server s2 -start client c3 -connect ${s2_sock} { txreq -req PUT -proto HTTP/1.0 -url /foo rxresp expect resp.proto == HTTP/1.2 expect resp.status == 201 expect resp.reason == Foo } client c4 -connect ${s2_sock} { txreq -req PUT -proto HTTP/1.0 -url /foo rxresp expect resp.proto == HTTP/1.2 expect resp.status == 201 expect resp.reason == Foo } client c3 -start client c4 -start client c3 -wait client c4 -wait server s2 -wait varnish-7.5.0/bin/varnishtest/tests/a00005.vtc000066400000000000000000000027071457605730600210270ustar00rootroot00000000000000varnishtest "dual shared client HTTP transactions" server s1 { rxreq expect req.method == PUT expect req.proto == HTTP/1.0 expect req.url == "/foo" txresp -proto HTTP/1.2 -status 201 -reason Foo } server s2 { rxreq expect req.method == GET expect req.proto == HTTP/1.1 expect req.url == "/" txresp } server s1 -start server s2 -start client c1 -connect ${s1_sock} { txreq -req PUT -proto HTTP/1.0 -url /foo rxresp expect resp.proto == HTTP/1.2 expect resp.status == 201 expect resp.reason == Foo } client c1 -run client c1 -connect ${s2_sock} { txreq rxresp expect resp.proto == HTTP/1.1 expect resp.status == 200 expect resp.reason == OK } client c1 -run server s1 -wait server s2 -wait # The same tests with unix domain sockets server s3 -listen "${tmpdir}/s3.sock" { rxreq expect req.method == PUT expect req.proto == HTTP/1.0 expect req.url == "/foo" txresp -proto HTTP/1.2 -status 201 -reason Foo } server s4 -listen "${tmpdir}/s4.sock" { rxreq expect req.method == GET expect req.proto == HTTP/1.1 expect req.url == "/" txresp } server s3 -start server s4 -start client c2 -connect ${s3_sock} { txreq -req PUT -proto HTTP/1.0 -url /foo rxresp expect resp.proto == HTTP/1.2 expect resp.status == 201 expect resp.reason == 
Foo } client c2 -run client c2 -connect ${s4_sock} { txreq rxresp expect resp.proto == HTTP/1.1 expect resp.status == 200 expect resp.reason == OK } client c2 -run server s3 -wait server s4 -wait varnish-7.5.0/bin/varnishtest/tests/a00006.vtc000066400000000000000000000016731457605730600210310ustar00rootroot00000000000000varnishtest "bidirectional message bodies" server s1 { rxreq expect req.method == PUT expect req.proto == HTTP/1.0 expect req.url == "/foo" txresp -proto HTTP/1.2 -status 201 -reason Foo \ -body "987654321\n" } server s1 -start client c1 -connect ${s1_sock} { txreq -req PUT -proto HTTP/1.0 -url /foo \ -body "123456789\n" rxresp expect resp.proto == HTTP/1.2 expect resp.status == 201 expect resp.reason == Foo } client c1 -run server s1 -wait # The same tests with Unix domain sockets server s2 -listen "${tmpdir}/s2.sock" { rxreq expect req.method == PUT expect req.proto == HTTP/1.0 expect req.url == "/foo" txresp -proto HTTP/1.2 -status 201 -reason Foo \ -body "987654321\n" } server s2 -start client c2 -connect ${s2_sock} { txreq -req PUT -proto HTTP/1.0 -url /foo \ -body "123456789\n" rxresp expect resp.proto == HTTP/1.2 expect resp.status == 201 expect resp.reason == Foo } client c2 -run server s2 -wait varnish-7.5.0/bin/varnishtest/tests/a00007.vtc000066400000000000000000000005451457605730600210270ustar00rootroot00000000000000varnishtest "TCP reuse" server s1 { rxreq expect req.url == "/1" txresp -body "123456789\n" rxreq expect req.url == "/2" txresp -body "987654321\n" } server s1 -start client c1 -connect ${s1_sock} { txreq -url "/1" -req "POST" -body "abcdefghi\n" rxresp txreq -url "/2" -req "POST" -body "ihgfedcba\n" rxresp } client c1 -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a00008.vtc000066400000000000000000000015201457605730600210220ustar00rootroot00000000000000varnishtest "Barrier operations" # bs -> server, bc -> client, bb -> both barrier bs cond 4 barrier bc cond 4 barrier bb cond 4 -cyclic server s1 { rxreq barrier bs sync barrier bb sync delay .9 txresp } -start server s2 { rxreq barrier bs sync barrier bb sync delay .6 txresp } -start server s3 { rxreq barrier bs sync barrier bb sync delay .2 txresp } -start client c1 -connect ${s1_sock} { delay .2 txreq rxresp barrier bc sync barrier bb sync } -start client c2 -connect ${s2_sock} { delay .6 txreq rxresp barrier bc sync barrier bb sync } -start client c3 -connect ${s3_sock} { delay .9 txreq rxresp barrier bc sync barrier bb sync } -start # Wait for all servers to have received requests barrier bs sync barrier bb sync # Wait for all clients to have received responses barrier bc sync barrier bb sync varnish-7.5.0/bin/varnishtest/tests/a00009.vtc000066400000000000000000000015271457605730600210320ustar00rootroot00000000000000varnishtest "VTC process: match text" process p1 { echo 0123 echo 4567 echo 89AB echo CDEF } -run -screen-dump # y == 0, x == 0 process p1 -match-text 0 0 "0123" process p1 -match-text 0 0 "0.*3" process p1 -match-text 0 0 "0123\(.|\n)*^CDEF" process p1 -match-text 0 0 "0123\(.|\n)*^89AB\(.|\n)*F$" # y != 0, x == 0 process p1 -match-text 1 0 "4567" process p1 -match-text 2 0 "4567" process p1 -match-text 2 0 "4567\(.|\n)*^9" process p1 -match-text 3 0 "89AB" process p1 -match-text 4 0 "C.*F" # y == 0, x != 0 process p1 -match-text 0 1 "4567" process p1 -match-text 0 2 "567" process p1 -match-text 0 2 "123\(.|\n)*^5" process p1 -match-text 0 2 "567\(.|\n)*^9" # y != 0, x != 0 process p1 -match-text 1 1 "4567\(.|\n)*^89" process p1 -match-text 2 2 "567\(.|\n)*^9" process p1 
-match-text 3 4 "B\(.|\n)*^F" process p1 -match-text 4 3 "EF" varnish-7.5.0/bin/varnishtest/tests/a00010.vtc000066400000000000000000000003731457605730600210200ustar00rootroot00000000000000varnishtest "simply test that the framework support \0" server s1 { rxreq expect req.url == "/" txresp -body {a\0bc} } server s1 -start client c1 -connect ${s1_sock} { txreq rxresp expect resp.bodylen == 4 } client c1 -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a00011.vtc000066400000000000000000000007061457605730600210210ustar00rootroot00000000000000varnishtest "test vtc gzip support" server s1 { rxreq expect req.http.content-length == "26" expect req.bodylen == "26" gunzip expect req.bodylen == "3" expect req.http.content-encoding == "gzip" txresp -gzipbody FOO } -start client c1 -connect ${s1_sock} { txreq -gzipbody FOO rxresp expect resp.http.content-length == "26" expect resp.bodylen == "26" gunzip expect resp.bodylen == "3" expect resp.http.content-encoding == "gzip" } -run varnish-7.5.0/bin/varnishtest/tests/a00012.vtc000066400000000000000000000003101457605730600210110ustar00rootroot00000000000000varnishtest "Ensure that we can test non-existence of headers (#1062)" server s1 { rxreq txresp } -start client c1 -connect ${s1_sock} { txreq rxresp expect resp.http.X-Test == } -run varnish-7.5.0/bin/varnishtest/tests/a00013.vtc000066400000000000000000000016041457605730600210210ustar00rootroot00000000000000varnishtest "Barrier operations" # same as a00008.vtc, with socket barriers instead # bs -> server, bc -> client, bb -> both barrier bs sock 4 barrier bc sock 4 barrier bb sock 4 -cyclic server s1 { rxreq barrier bs sync barrier bb sync delay .9 txresp } -start server s2 { rxreq barrier bs sync barrier bb sync delay .6 txresp } -start server s3 { rxreq barrier bs sync barrier bb sync delay .2 txresp } -start client c1 -connect ${s1_sock} { delay .2 txreq rxresp barrier bc sync barrier bb sync } -start client c2 -connect ${s2_sock} { delay .6 txreq rxresp barrier bc sync barrier bb sync } -start client c3 -connect ${s3_sock} { delay .9 txreq rxresp barrier bc sync barrier bb sync } -start # Wait for all servers to have received requests barrier bs sync barrier bb sync # Wait for all clients to have received responses barrier bc sync barrier bb sync varnish-7.5.0/bin/varnishtest/tests/a00014.vtc000066400000000000000000000001661457605730600210240ustar00rootroot00000000000000varnishtest "Custom feature verification" feature cmd true feature cmd false this is an invalid varnishtest command varnish-7.5.0/bin/varnishtest/tests/a00015.vtc000066400000000000000000000010221457605730600210150ustar00rootroot00000000000000varnishtest "Write a body to a file" server s1 { # First, HTTP checks rxreq expect req.http.Content-Type == "text/plain" # Then, payload checks write_body req.txt shell {grep -q request req.txt} txresp -hdr "Content-Type: text/plain" -body response } -start client c1 -connect ${s1_sock} { txreq -req POST -hdr "Content-Type: text/plain" -body request # First, HTTP checks rxresp expect resp.http.Content-Type == "text/plain" # Then, payload checks write_body resp.txt shell {grep -q response resp.txt} } -run varnish-7.5.0/bin/varnishtest/tests/a00018.vtc000066400000000000000000000004511457605730600210250ustar00rootroot00000000000000varnishtest "feature ignore_unknown_macro" feature ignore_unknown_macro server s1 { rxreq expect req.http.Foo == "${foo}" txresp -hdr "Bar: ${bar}" } -start client c1 -connect ${s1_sock} { txreq -hdr "Foo: ${foo}" rxresp expect resp.status == 200 expect 
resp.http.Bar == "${bar}" } -run varnish-7.5.0/bin/varnishtest/tests/a00019.vtc000066400000000000000000000017701457605730600210330ustar00rootroot00000000000000varnishtest "Check rxrespbody -max" server s1 { rxreq txresp -bodylen 65536 rxreq txresp } -start server s2 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 8192 chunkedlen 4096 chunkedlen 4096 chunkedlen 16384 chunkedlen 16384 chunkedlen 16384 chunkedlen 0 rxreq txresp } -start server s3 { rxreq txresp -nolen -bodylen 65536 } -start client c1 -connect ${s1_sock} { txreq rxresphdrs rxrespbody -max 8192 expect resp.bodylen == 8192 rxrespbody -max 8192 expect resp.bodylen == 16384 rxrespbody expect resp.bodylen == 65536 txreq rxresp } -run client c2 -connect ${s2_sock} { txreq rxresphdrs rxrespbody -max 8192 expect resp.bodylen == 8192 rxrespbody -max 8192 expect resp.bodylen == 16384 rxrespbody expect resp.bodylen == 65536 txreq rxresp } -run client c3 -connect ${s3_sock} { txreq rxresphdrs rxrespbody -max 8192 expect resp.bodylen == 8192 rxrespbody -max 8192 expect resp.bodylen == 16384 rxrespbody expect resp.bodylen == 65536 } -run varnish-7.5.0/bin/varnishtest/tests/a00020.vtc000066400000000000000000000003611457605730600210160ustar00rootroot00000000000000varnishtest "Test -bodyfrom" server s1 { rxreq expect req.bodylen == 241 txresp -bodyfrom ${testdir}/a00020.vtc } -start client c1 -connect ${s1_sock} { txreq -bodyfrom ${testdir}/a00020.vtc rxresp expect resp.bodylen == 241 } -run varnish-7.5.0/bin/varnishtest/tests/a00021.vtc000066400000000000000000000010441457605730600210160ustar00rootroot00000000000000varnishtest "tunnel basic coverage" barrier b1 cond 2 barrier b2 cond 2 barrier b3 cond 2 server s1 { rxreq barrier b2 sync barrier b3 sync txresp } -start tunnel t1 -connect "${s1_sock}" { pause barrier b1 sync send 10 resume barrier b2 sync pause barrier b3 sync recv 10 resume } -start client c1 -connect "${t1_sock}" { barrier b1 sync txreq rxresp } -run tunnel t1 -wait server s2 { rxreq txresp } -start tunnel t2 -connect "${s2_sock}" { resume } -start+pause client c2 -connect "${t2_sock}" { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/a00022.vtc000066400000000000000000000003021457605730600210130ustar00rootroot00000000000000varnishtest "Test setenv" setenv FOO "BAR BAZ" shell -expect "BAR BAZ" {echo $FOO} setenv -ifunset FOO QUUX shell -expect "BAR BAZ" {echo $FOO} setenv FOO QUUX shell -expect QUUX {echo $FOO} varnish-7.5.0/bin/varnishtest/tests/a00023.vtc000066400000000000000000000006761457605730600210320ustar00rootroot00000000000000varnishtest "Run server -dispatch more than once" feature ignore_unknown_macro shell { cat <<-'EOF' >_.vtc varnishtest "Run server -dispatch more than once (nested)" server s0 { rxreq txresp } -dispatch client c1 -connect ${s0_sock} { txreq rxresp } -run server s0 -break server s0 -dispatch client c1 -run EOF varnishtest -v _.vtc >_.log } shell -match "s1 +rxhdr" { cat _.log } shell -match "s2 +rxhdr" { cat _.log } varnish-7.5.0/bin/varnishtest/tests/a00024.vtc000066400000000000000000000024671457605730600210330ustar00rootroot00000000000000varnishtest "test -nouseragent and -noserver" server s1 { rxreq # by default, User-Agent header is set to cNAME expect req.http.User-Agent == "c101" txresp rxreq # when specified with -hdr, it overrides the default expect req.http.User-Agent == "not-c101" txresp -hdr "Server: not-s1" } -start server s2 { rxreq expect req.http.User-Agent == "c202" txresp rxreq # default User-Agent header is not included when -nouseragent is specified 
expect req.http.User-Agent == txresp -noserver } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/home") { set req.backend_hint = s1; } else { set req.backend_hint = s2; } } sub vcl_backend_response { set Beresp.uncacheable = true; } } -start client c101 { txreq -url "/home" rxresp # by default, Server header is set to sNAME expect resp.http.Server == "s1" txreq -url "/home" -hdr "User-Agent: not-c101" rxresp # when specified with -hdr, it overrides the default expect resp.http.Server == "not-s1" } -run client c202 { txreq rxresp expect resp.http.Server == "s2" txreq -nouseragent rxresp # default Server header is not included when -noserver is specified expect resp.http.Server == } -run varnish-7.5.0/bin/varnishtest/tests/a02000.vtc000066400000000000000000000007101457605730600210140ustar00rootroot00000000000000varnishtest "Close/accept after H/2 upgrade" server s1 { stream 1 { rxreq txresp } -run expect_close accept rxreq txresp close accept rxreq txresp stream 1 { rxreq txresp } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq rxresp } -run } -run client c1 -connect ${s1_sock} { txreq rxresp } -run client c1 -connect ${s1_sock} { txreq rxresp stream 1 { txreq rxresp } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02001.vtc000066400000000000000000000033301457605730600210160ustar00rootroot00000000000000varnishtest "Quickly test all frames" server s1 { rxpri stream 0 { # PRIO txprio -stream 23456 -weight 123 # RST txrst -err 2 # SETTINGS txsettings -push true -hdrtbl 11111111 -maxstreams 222222 -winsize 333333 -framesize 444444 -hdrsize 555555 txsettings -ack # PING txping -data "01234567" txping -data "abcdefgh" -ack # GOAWAY txgoaway -laststream 17432423 -err 12 -debug "kangaroo" # WINUP txwinup -size 500 # FRAME txresp -body "floubidou" # FRAME txresp -body "tata" } -run } -start client c1 -connect ${s1_sock} { txpri stream 0 { # PRIO rxprio expect prio.stream == 23456 expect prio.weight == 123 # RST rxrst expect rst.err >= 2 expect rst.err < 3 # SETTINGS rxsettings expect settings.hdrtbl == 11111111 expect settings.maxstreams == 222222 expect settings.winsize == 333333 expect settings.framesize == 444444 expect settings.hdrsize == 555555 rxsettings expect settings.ack == true expect settings.hdrtbl == expect settings.maxstreams == expect settings.winsize == expect settings.framesize == expect settings.hdrsize == # PING rxping expect ping.ack == "false" expect ping.data == "01234567" expect ping.data != "O1234567" rxping expect ping.ack == "true" expect ping.data == "abcdefgh" expect ping.data != "abcdefgt" # GOAWAY rxgoaway expect goaway.err == 12 expect goaway.laststream == 17432423 expect goaway.debug == "kangaroo" # WINUP rxwinup expect winup.size == 500 # FRAME rxhdrs rxdata expect frame.data == "floubidou" expect frame.type == 0 expect frame.size == 9 expect frame.stream == 0 rxresp expect resp.body == "floubidoutata" } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02002.vtc000066400000000000000000000007071457605730600210240ustar00rootroot00000000000000varnishtest "Trigger a compression error via bad index" server s1 { non_fatal stream 1 { rxreq expect req.http.foo == txgoaway -laststream 0 -err 9 -debug "compression_error" } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq -idxHdr 100 -litHdr inc plain "foo" plain "bar" rxgoaway expect goaway.err == 9 expect goaway.laststream == 0 expect goaway.debug == "compression_error" } -run } -run server s1 -wait 
varnish-7.5.0/bin/varnishtest/tests/a02003.vtc000066400000000000000000000004001457605730600210130ustar00rootroot00000000000000varnishtest "Check bodylen" server s1 { stream 1 { rxreq expect req.bodylen == 3 txresp -bodylen 7 } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq -bodylen 3 rxresp expect resp.bodylen == 7 } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02004.vtc000066400000000000000000000003471457605730600210260ustar00rootroot00000000000000varnishtest "Simple request with body" server s1 { stream 1 { rxreq txresp -body "bob" } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq rxresp expect resp.bodylen == 3 } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02005.vtc000066400000000000000000000011771457605730600210310ustar00rootroot00000000000000varnishtest "Continuation frames" server s1 { stream 1 { rxreq txresp -nohdrend txcont -nohdrend -hdr "foo" "bar" txcont -hdr "baz" "qux" } -run stream 3 { rxreq txresp -nohdrend txcont -nohdrend -hdr "foo2" "bar2" txcont -hdr "baz2" "qux2" } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq rxhdrs -all expect resp.http.foo == "bar" expect resp.http.baz == "qux" } -run stream 3 { txreq rxhdrs -some 2 expect resp.http.foo2 == expect resp.http.baz2 == rxcont expect resp.http.foo2 == "bar2" expect resp.http.baz2 == "qux2" } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02006.vtc000066400000000000000000000020511457605730600210220ustar00rootroot00000000000000varnishtest "Keep track of window credits" server s1 { stream 2 { rxreq txresp -nohdrend txcont -nohdrend -hdr "foo" "bar" txcont -hdr "baz" "qux" txdata -data "foo" txdata -data "bar" expect stream.peer_window == 65529 rxreq txresp -bodylen 529 expect stream.peer_window == 65000 rxwinup expect stream.peer_window == 65200 } -run stream 0 { expect stream.peer_window == 65000 rxwinup expect stream.peer_window == 66000 } -run } -start client c1 -connect ${s1_sock} { stream 0 { expect stream.window == 65535 } -run stream 2 { expect stream.window == 65535 txreq rxhdrs rxcont rxcont expect resp.http.:status == "200" expect resp.http.foo == "bar" expect stream.window == 65535 rxdata expect stream.window == 65532 rxdata expect stream.window == 65529 expect resp.body == "foobar" expect resp.http.baz == "qux" } -run stream 0 { expect stream.window == 65529 } -run stream 2 { txreq rxresp txwinup -size 200 } -run stream 0 { txwinup -size 1000 } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02007.vtc000066400000000000000000000033671457605730600210360ustar00rootroot00000000000000varnishtest "HPACK test" server s1 { stream 1 { rxreq expect tbl.dec.size == 57 expect tbl.dec[1].key == ":authority" expect tbl.dec[1].value == "www.example.com" txresp } -run stream 3 { rxreq expect tbl.dec[1].key == "cache-control" expect tbl.dec[1].value == "no-cache" expect tbl.dec[2].key == ":authority" expect tbl.dec[2].value == "www.example.com" expect tbl.dec.size == 110 txresp } -run stream 5 { rxreq expect tbl.dec[1].key == "custom-key" expect tbl.dec[1].value == "custom-value" expect tbl.dec[2].key == "cache-control" expect tbl.dec[2].value == "no-cache" expect tbl.dec[3].key == ":authority" expect tbl.dec[3].value == "www.example.com" expect tbl.dec.size == 164 txresp } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq -idxHdr 2 \ -idxHdr 6 \ -idxHdr 4 \ -litIdxHdr inc 1 huf "www.example.com" expect tbl.enc[1].key == ":authority" expect tbl.enc[1].value == "www.example.com" rxresp } -run 
stream 3 { txreq -idxHdr 2 \ -idxHdr 6 \ -idxHdr 4 \ -idxHdr 62 \ -litIdxHdr inc 24 huf no-cache expect tbl.enc[1].key == "cache-control" expect tbl.enc[1].value == "no-cache" expect tbl.enc[2].key == ":authority" expect tbl.enc[2].value == "www.example.com" expect tbl.enc.size == 110 rxresp } -run stream 5 { txreq -idxHdr 2 \ -idxHdr 7 \ -idxHdr 5 \ -idxHdr 63 \ -litHdr inc huf "custom-key" huf "custom-value" expect tbl.enc[1].key == "custom-key" expect tbl.enc[1].value == "custom-value" expect tbl.enc[2].key == "cache-control" expect tbl.enc[2].value == "no-cache" expect tbl.enc[3].key == ":authority" expect tbl.enc[3].value == "www.example.com" expect tbl.enc.size == 164 rxresp } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02008.vtc000066400000000000000000000062241457605730600210320ustar00rootroot00000000000000varnishtest "HPACK test" server s1 { stream 0 { txsettings -hdrtbl 256 rxsettings txsettings -ack rxsettings expect settings.ack == true } -run stream 1 { rxreq expect tbl.dec[1].key == "location" expect tbl.dec[1].value == "https://www.example.com" expect tbl.dec[2].key == "date" expect tbl.dec[2].value == "Mon, 21 Oct 2013 20:13:21 GMT" expect tbl.dec[3].key == "cache-control" expect tbl.dec[3].value == "private" expect tbl.dec[4].key == ":status" expect tbl.dec[4].value == "302" expect tbl.dec.size == 222 txresp } -run stream 3 { rxreq expect tbl.dec[1].key == ":status" expect tbl.dec[1].value == "307" expect tbl.dec[2].key == "location" expect tbl.dec[2].value == "https://www.example.com" expect tbl.dec[3].key == "date" expect tbl.dec[3].value == "Mon, 21 Oct 2013 20:13:21 GMT" expect tbl.dec[4].key == "cache-control" expect tbl.dec[4].value == "private" expect tbl.dec.size == 222 txresp } -run stream 5 { rxreq expect tbl.dec[1].key == "set-cookie" expect tbl.dec[1].value == "foo=ASDJKHQKBZXOQWEOPIUAXQWEOIU; max-age=3600; version=1" expect tbl.dec[2].key == "content-encoding" expect tbl.dec[2].value == "gzip" expect tbl.dec[3].key == "date" expect tbl.dec[3].value == "Mon, 21 Oct 2013 20:13:22 GMT" expect tbl.dec.size == 215 txresp } -run } -start client c1 -connect ${s1_sock} { stream 0 { txsettings -hdrtbl 256 rxsettings txsettings -ack rxsettings expect settings.ack == true } -run stream 1 { txreq \ -litIdxHdr inc 8 plain "302" \ -litIdxHdr inc 24 plain "private" \ -litIdxHdr inc 33 plain "Mon, 21 Oct 2013 20:13:21 GMT" \ -litIdxHdr inc 46 plain "https://www.example.com" expect tbl.enc[1].key == "location" expect tbl.enc[1].value == "https://www.example.com" expect tbl.enc[2].key == "date" expect tbl.enc[2].value == "Mon, 21 Oct 2013 20:13:21 GMT" expect tbl.enc[3].key == "cache-control" expect tbl.enc[3].value == "private" expect tbl.enc[4].key == ":status" expect tbl.enc[4].value == "302" expect tbl.enc.size == 222 rxresp } -run stream 3 { txreq \ -litIdxHdr inc 8 huf "307" \ -idxHdr 65 \ -idxHdr 64 \ -idxHdr 63 expect tbl.enc[1].key == ":status" expect tbl.enc[1].value == "307" expect tbl.enc[2].key == "location" expect tbl.enc[2].value == "https://www.example.com" expect tbl.enc[3].key == "date" expect tbl.enc[3].value == "Mon, 21 Oct 2013 20:13:21 GMT" expect tbl.enc[4].key == "cache-control" expect tbl.enc[4].value == "private" expect tbl.enc.size == 222 rxresp } -run stream 5 { txreq -idxHdr 8 \ -idxHdr 65 \ -litIdxHdr inc 33 plain "Mon, 21 Oct 2013 20:13:22 GMT" \ -idxHdr 64 \ -litIdxHdr inc 26 plain "gzip" \ -litIdxHdr inc 55 plain "foo=ASDJKHQKBZXOQWEOPIUAXQWEOIU; max-age=3600; version=1" expect tbl.enc[1].key == "set-cookie" expect 
tbl.enc[1].value == "foo=ASDJKHQKBZXOQWEOPIUAXQWEOIU; max-age=3600; version=1" expect tbl.enc[2].key == "content-encoding" expect tbl.enc[2].value == "gzip" expect tbl.enc[3].key == "date" expect tbl.enc[3].value == "Mon, 21 Oct 2013 20:13:22 GMT" expect tbl.enc.size == 215 rxresp } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02009.vtc000066400000000000000000000033741457605730600210360ustar00rootroot00000000000000varnishtest "More HPACK tests" server s1 { stream 1 { rxreq expect tbl.dec.size == 57 expect tbl.dec[1].key == ":authority" expect tbl.dec[1].value == "www.example.com" txresp } -run stream 3 { rxreq expect tbl.dec[1].key == "cache-control" expect tbl.dec[1].value == "no-cache" expect tbl.dec[2].key == ":authority" expect tbl.dec[2].value == "www.example.com" expect tbl.dec.size == 110 txresp } -run stream 5 { rxreq expect tbl.dec[1].key == "custom-key" expect tbl.dec[1].value == "custom-value" expect tbl.dec[2].key == "cache-control" expect tbl.dec[2].value == "no-cache" expect tbl.dec[3].key == ":authority" expect tbl.dec[3].value == "www.example.com" expect tbl.dec.size == 164 txresp } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq -idxHdr 2 \ -idxHdr 6 \ -idxHdr 4 \ -litIdxHdr inc 1 huf "www.example.com" expect tbl.enc[1].key == ":authority" expect tbl.enc[1].value == "www.example.com" rxresp } -run stream 3 { txreq -idxHdr 2 \ -idxHdr 6 \ -idxHdr 4 \ -idxHdr 62 \ -litIdxHdr inc 24 huf no-cache expect tbl.enc[1].key == "cache-control" expect tbl.enc[1].value == "no-cache" expect tbl.enc[2].key == ":authority" expect tbl.enc[2].value == "www.example.com" expect tbl.enc.size == 110 rxresp } -run stream 5 { txreq -idxHdr 2 \ -idxHdr 7 \ -idxHdr 5 \ -idxHdr 63 \ -litHdr inc huf "custom-key" huf "custom-value" expect tbl.enc[1].key == "custom-key" expect tbl.enc[1].value == "custom-value" expect tbl.enc[2].key == "cache-control" expect tbl.enc[2].value == "no-cache" expect tbl.enc[3].key == ":authority" expect tbl.enc[3].value == "www.example.com" expect tbl.enc.size == 164 rxresp } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02010.vtc000066400000000000000000000013341457605730600210200ustar00rootroot00000000000000varnishtest "Verify the initial window size" server s1 { stream 0 { expect stream.peer_window == 65535 rxsettings txsettings -ack } -run stream 1 { expect stream.peer_window == 128 rxreq txresp -bodylen 100 expect stream.peer_window == 28 } -run stream 0 { rxsettings txsettings -ack } -run stream 1 { expect stream.peer_window == -36 } -run } -start client c1 -connect ${s1_sock} { stream 0 { txsettings -winsize 128 rxsettings } -run stream 1 { txreq rxresp expect resp.bodylen == 100 expect stream.window == 28 } -run stream 0 { txsettings -winsize 64 rxsettings expect stream.window == 65435 } -run stream 1 { expect stream.window == -36 } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02011.vtc000066400000000000000000000007011457605730600210160ustar00rootroot00000000000000varnishtest "overflow" server s1 { stream 1 { rxreq txresp -hdr long-header-original1 original1 \ -hdr long-header-original2 original2 \ -hdr long-header-original3 original3 \ -hdr long-header-original4 original4 } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq -req GET \ -url / \ -hdr :scheme http \ -hdr :authority localhost rxresp expect resp.http.:status == 200 } -run } -run server s1 -wait 
varnish-7.5.0/bin/varnishtest/tests/a02012.vtc000066400000000000000000000016221457605730600210220ustar00rootroot00000000000000varnishtest "padded DATA frames" server s1 { stream 1 { rxreq # HDR indexed ":status: 200" + 2 padding bytes sendhex "00 00 04 01 0c 00 00 00 01 02 88 12 34" # DATA "foo" + 4 padding bytes sendhex "00 00 08 00 09 00 00 00 01 04 66 6f 6f 6e 6e 6e 6e" } -run stream 3 { rxreq txresp -nostrend txdata -data "bull" -pad "frog" -nostrend txdata -data "terrier" -padlen 17 txdata -datalen 4 -padlen 2 } -run stream 5 { rxreq txresp -pad "pwepew" } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq rxresp expect resp.bodylen == 3 expect resp.body == "foo" } -run stream 3 { txreq rxhdrs rxdata expect frame.size == 9 expect resp.body == "bull" rxdata expect frame.size == 25 expect resp.body == "bullterrier" rxdata expect frame.size == 7 } -run stream 5 { txreq rxresp expect frame.padding == 6 } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02013.vtc000066400000000000000000000003751457605730600210270ustar00rootroot00000000000000varnishtest "H/2 state after sending/receiving preface" server s1 { expect h2.state == false rxpri expect h2.state == true } -start client c1 -connect ${s1_sock} { expect h2.state == false txpri expect h2.state == true } -start server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02014.vtc000066400000000000000000000020301457605730600210160ustar00rootroot00000000000000varnishtest "priority" server s1 { stream 1 { rxreq txresp expect stream.weight == 16 expect stream.dependency == 0 } -run stream 3 { rxreq txresp expect stream.weight == 123 expect stream.dependency == 5 rxprio expect prio.weight == 10 expect prio.stream == 7 expect stream.weight == 10 expect stream.dependency == 7 } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq -method GET -url /1 \ -hdr :scheme http -hdr :authority localhost rxresp expect stream.weight == 16 expect stream.dependency == 0 } -run stream 3 { txreq -req GET -url /3 \ -hdr :scheme http -hdr :authority localhost \ -weight 123 -ex -dep 5 rxresp expect stream.weight == 123 expect stream.dependency == 5 txprio -weight 10 -stream 7 expect stream.weight == 10 expect stream.dependency == 7 } -run stream 5 { expect stream.weight == 16 expect stream.dependency == 0 } -run stream 0 { expect stream.weight == expect stream.dependency == } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02015.vtc000066400000000000000000000014421457605730600210250ustar00rootroot00000000000000varnishtest "exclusive dependency" server s1 { stream 1 { rxreq txresp } -run stream 3 { rxreq txresp } -run stream 5 { rxreq txresp expect stream.dependency == 0 } -run stream 1 { expect stream.dependency == 5 } -run stream 3 { expect stream.dependency == 5 } -run stream 1 { rxprio } -run stream 5 { expect stream.dependency == 1 } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq rxresp } -run stream 3 { txreq rxresp } -run stream 5 { txreq -req GET -ex expect stream.dependency == 0 rxresp } -run stream 1 { expect stream.dependency == 5 } -run stream 3 { expect stream.dependency == 5 } -run stream 1 { txprio -stream 0 -ex } -run stream 5 { expect stream.dependency == 1 } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02016.vtc000066400000000000000000000011611457605730600210240ustar00rootroot00000000000000varnishtest "Test pseudo-headers inspection" server s1 { stream 1 { rxreq expect req.url == "/foo" expect req.http.:path == "/foo" expect req.method == "NOTGET" expect 
req.http.:method == "NOTGET" expect req.authority == "bar" expect req.http.:authority == "bar" expect req.scheme == "baz" expect req.http.:scheme == "baz" txresp -status 123 } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq -url "/foo" \ -req "NOTGET" \ -hdr ":authority" "bar" \ -scheme "baz" rxresp expect resp.status == 123 expect resp.http.:status == 123 } -run } -start server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02017.vtc000066400000000000000000000004701457605730600210270ustar00rootroot00000000000000varnishtest "Push promise" server s1 { stream 1 { rxreq txpush -promised 2 -url "/hereyougo" txresp } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq rxpush expect push.id == 2 expect req.url == "/hereyougo" expect req.method == "GET" rxresp } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02018.vtc000066400000000000000000000005011457605730600210230ustar00rootroot00000000000000varnishtest "H/2 state after sending/receiving preface" server s1 { expect h2.state == "false" stream 1 { rxreq txresp } -run expect h2.state == "true" } -start client c1 -connect ${s1_sock} { expect h2.state == "false" stream 1 { txreq rxresp } -run expect h2.state == "true" } -start server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02019.vtc000066400000000000000000000004221457605730600210260ustar00rootroot00000000000000varnishtest "Static table encoding" server s1 { stream 1 { rxreq expect req.http.:path == "/index.html" txresp } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq -idxHdr 2 \ -idxHdr 6 \ -idxHdr 5 rxresp } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02020.vtc000066400000000000000000000013461457605730600210240ustar00rootroot00000000000000varnishtest "Reduce dynamic table while incoming headers are flying" server s1 { stream 1 { rxreq txresp -litHdr inc plain hoge plain fuga expect tbl.enc[1].key == "hoge" expect tbl.enc[1].value == "fuga" expect tbl.enc.size == 40 } -run stream 3 { rxreq txresp -idxHdr 62 -litHdr inc plain "foo" plain "bar" } -run stream 0 { rxsettings } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq rxresp expect tbl.dec[1].key == "hoge" expect tbl.dec[1].value == "fuga" expect tbl.dec.size == 40 expect tbl.dec.length == 1 } -run stream 3 { txreq } -run stream 0 { txsettings -hdrtbl 0 } -run non_fatal stream 3 { rxresp expect resp.http.foo == } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02021.vtc000066400000000000000000000014551457605730600210260ustar00rootroot00000000000000varnishtest "Reduce dynamic table size" server s1 { stream 1 { rxreq txresp -litHdr inc plain hoge plain fuga expect tbl.dec.size == 57 expect tbl.enc.size == 40 } -run stream 0 { rxsettings txsettings -ack } -run stream 3 { rxreq expect tbl.dec.size == 110 expect tbl.enc.size == 0 txresp } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq -idxHdr 2 \ -idxHdr 6 \ -idxHdr 4 \ -litIdxHdr inc 1 huf "www.example.com" rxresp expect tbl.dec.size == 40 expect tbl.enc.size == 57 } -run stream 0 { txsettings -hdrtbl 0 rxsettings } -run stream 3 { txreq -idxHdr 2 \ -idxHdr 6 \ -idxHdr 4 \ -idxHdr 62 \ -litIdxHdr inc 24 huf no-cache expect tbl.enc.size == 110 expect tbl.dec.size == 0 rxresp } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02022.vtc000066400000000000000000000006631457605730600210270ustar00rootroot00000000000000varnishtest "H/1 -> H/2 upgrade" feature cmd "nghttp --version | grep -q 'nghttp2/[1-9]'" server s1 { rxreq upgrade stream 1 { rxprio } -start 
stream 3 { rxprio } -start stream 5 { rxprio } -start stream 7 { rxprio } -start stream 9 { rxprio } -start stream 11 { rxprio } -start stream 0 -wait stream 1 -wait { txresp } -run stream 0 { rxgoaway } -run } -start shell { nghttp http://${s1_sock} -nu } server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02023.vtc000066400000000000000000000012451457605730600210250ustar00rootroot00000000000000varnishtest "Window update" server s1 { stream 1 { rxreq txresp -body "bob" } -run stream 3 { rxreq txresp -nohdrend txcont -nohdrend -hdr "foo" "bar" txcont -hdr "baz" "qux" txdata -data "foo" txdata -data "bar" } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq rxresp expect resp.bodylen == 3 } -run stream 3 { txreq rxhdrs rxcont rxcont expect resp.http.:status == "200" expect resp.http.foo == "bar" expect stream.window == 65535 rxdata expect stream.window == 65532 rxdata expect stream.window == 65529 expect resp.body == "foobar" expect resp.http.baz == "qux" } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02024.vtc000066400000000000000000000011421457605730600210220ustar00rootroot00000000000000varnishtest "Write a body to a file" server s1 { fatal stream 1 { # First, HTTP checks rxreq expect req.http.Content-Type == "text/plain" # Then, payload checks write_body req.txt shell {grep -q request req.txt} txresp -hdr Content-Type text/plain -body response } -run } -start client c1 -connect ${s1_sock} { fatal stream 1 { txreq -req POST -hdr Content-Type text/plain -body request # First, HTTP checks rxresp expect resp.http.Content-Type == "text/plain" # Then, payload checks write_body resp.txt shell {grep -q response resp.txt} } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02025.vtc000066400000000000000000000005021457605730600210220ustar00rootroot00000000000000varnishtest "Test -bodyfrom" shell {printf helloworld >body.txt} server s1 { stream 1 { rxreq expect req.body == helloworld txresp -bodyfrom body.txt } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq -bodyfrom body.txt rxresp expect resp.body == helloworld } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02026.vtc000066400000000000000000000005651457605730600210340ustar00rootroot00000000000000varnishtest "Test -gzipbody and -gziplen" server s1 { stream 1 { rxreq txresp -gzipbody "foo" } -run stream 3 { rxreq txresp -gziplen 10 } -run } -start client c1 -connect ${s1_sock} { stream 1 { txreq rxresp gunzip expect resp.body == "foo" } -run stream 3 { txreq rxresp gunzip expect resp.bodylen == 10 } -run } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/a02027.vtc000066400000000000000000000027331457605730600210340ustar00rootroot00000000000000varnishtest "Malformed :path handling" server s1 { } -start varnish v1 -vcl+backend { sub vcl_recv { return (synth(200)); } } -start varnish v1 -cliok "param.set feature +http2" client c1 { stream 1 { txreq -noadd -hdr ":authority" "foo.com" -hdr ":path" "foobar" -hdr ":scheme" "http" -hdr ":method" "GET" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -noadd -hdr ":authority" "foo.com" -hdr ":path" "//foo" -hdr ":scheme" "http" -hdr ":method" "GET" rxresp expect resp.status == 200 } -run } -run client c1 { stream 3 { txreq -noadd -hdr ":authority" "foo.com" -hdr ":path" "*a" -hdr ":scheme" "http" -hdr ":method" "GET" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -noadd -hdr ":authority" "foo.com" -hdr ":path" "*" -hdr ":scheme" "http" -hdr ":method" 
"GET" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -noadd -hdr ":authority" "foo.com" -hdr ":path" "*" -hdr ":scheme" "http" -hdr ":method" "OPTIONS" rxresp expect resp.status == 200 } -run } -run client c1 { stream 1 { txreq -noadd -hdr ":authority" "foo.com" -hdr ":path" "*" -hdr ":scheme" "http" -hdr ":method" "OPTIONs" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -noadd -hdr ":authority" "foo.com" -hdr ":path" "*" -hdr ":scheme" "http" -hdr ":method" "OPTIONSx" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run varnish-7.5.0/bin/varnishtest/tests/a02028.vtc000066400000000000000000000003751457605730600210350ustar00rootroot00000000000000varnishtest "Automatic stream numbers" server s1 { loop 4 { stream next { rxreq txresp } -run } } -start client c1 -connect ${s1_sock} { loop 3 { stream next { txreq rxresp } -run } stream 7 { txreq rxresp } -run } -run varnish-7.5.0/bin/varnishtest/tests/b00000.vtc000066400000000000000000000026651457605730600210260ustar00rootroot00000000000000varnishtest "Does anything get through at all ?" feature ipv4 feature ipv6 server s1 -listen 127.0.0.1:0 { rxreq txresp -body "012345\n" } -start server s2 -listen [::1]:0 { rxreq txresp -body "012345\n" } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/1") { set req.backend_hint = s1; } else { set req.backend_hint = s2; } } sub vcl_backend_response { set beresp.do_stream = false; } sub vcl_deliver { # make s_resp_hdrbytes deterministic unset resp.http.via; } } -start varnish v1 -cliok "param.set debug +workspace" varnish v1 -cliok "param.set debug +witness" #varnish v1 -vsc * varnish v1 -expect MAIN.n_object == 0 varnish v1 -expect MAIN.sess_conn == 0 varnish v1 -expect MAIN.client_req == 0 varnish v1 -expect MAIN.cache_miss == 0 client c1 { txreq -url "/1" rxresp expect resp.status == 200 } -run varnish v1 -expect n_object == 1 varnish v1 -expect sess_conn == 1 varnish v1 -expect client_req == 1 varnish v1 -expect cache_miss == 1 varnish v1 -expect s_sess == 1 varnish v1 -expect s_resp_bodybytes == 7 varnish v1 -expect s_resp_hdrbytes == 158 client c1 { txreq -url "/2" rxresp expect resp.status == 200 } -run # varnish v1 -vsc * varnish v1 -expect n_object == 2 varnish v1 -expect sess_conn == 2 varnish v1 -expect client_req == 2 varnish v1 -expect cache_miss == 2 varnish v1 -expect s_sess == 2 varnish v1 -expect s_resp_bodybytes == 14 varnish v1 -expect s_resp_hdrbytes == 316 varnish-7.5.0/bin/varnishtest/tests/b00001.vtc000066400000000000000000000015121457605730600210150ustar00rootroot00000000000000varnishtest "Check that a pipe transaction works" server s1 -repeat 1 { rxreq expect req.proto == "HTTP/1.0" expect req.http.connection == "close" txresp rxreq expect req.proto == "nonsense" expect req.http.connection == "keep-alive" txresp } -start varnish v1 -vcl+backend { sub vcl_recv { return(pipe); } sub vcl_pipe { if (req.url == "/2") { set bereq.http.connection = req.http.connection; } set bereq.http.xid = bereq.xid; } } -start client c1 { txreq -proto HTTP/1.0 -url /1 -hdr "Connection: keep-alive" rxresp expect resp.status == 200 txreq -proto nonsense -url /2 -hdr "Connection: keep-alive" rxresp expect resp.status == 200 } -run varnish v1 -expect n_object == 0 varnish v1 -expect sess_conn == 1 varnish v1 -expect client_req == 1 varnish v1 -expect s_sess == 1 varnish v1 -expect s_pipe == 1 
varnish-7.5.0/bin/varnishtest/tests/b00002.vtc

varnishtest "Check that a pass transaction works"

server s1 {
	rxreq
	expect req.proto == HTTP/1.1
	txresp -hdr "Cache-Control: max-age=120" -hdr "Connection: close" -nodate -body "012345\n"
} -start

varnish v1 -arg "-sTransient=default,1m" -vcl+backend {
	sub vcl_recv {
		return(pass);
	}
	sub vcl_backend_response {
		set beresp.http.x-ttl = beresp.ttl;
	}
	sub vcl_deliver {
		set resp.http.o_uncacheable = obj.uncacheable;
		set resp.http.xport = req.transport;
	}
} -start

# check that there are no TTL LogTags between the
# last header and VCL_return b deliver
logexpect l1 -v v1 -g request {
	expect * 1002 Begin
	expect * = BerespHeader {^Date:}
	expect 0 = VCL_call {^BACKEND_RESPONSE}
	expect 0 = BerespHeader {^x-ttl: 0.000}
	expect 0 = VCL_return {^deliver}
} -start

client c1 {
	txreq -proto HTTP/1.0 -url "/"
	rxresp
	expect resp.proto == HTTP/1.1
	expect resp.status == 200
	expect resp.http.connection == close
	expect resp.http.x-ttl == 0.000
	expect resp.http.o_uncacheable == true
	expect resp.http.xport == HTTP/1
} -run

varnish v1 -expect n_object == 0
varnish v1 -expect SM?.Transient.g_alloc == 0
varnish v1 -expect sess_conn == 1
varnish v1 -expect client_req == 1
varnish v1 -expect s_sess == 1
varnish v1 -expect s_pass == 1

logexpect l1 -wait

varnish-7.5.0/bin/varnishtest/tests/b00003.vtc

varnishtest "Check that a cache fetch + hit transaction works"

server s1 {
	rxreq
	txresp -hdr "Connection: close" -body "012345\n"
} -start

varnish v1 -vcl+backend { } -start

client c1 {
	txreq -url "/"
	rxresp
	expect resp.status == 200
	expect resp.http.X-Varnish == "1001"
} -run

client c2 {
	txreq -url "/"
	rxresp
	expect resp.status == 200
	expect resp.http.X-Varnish == "1004 1002"
} -run

# Give varnish a chance to update stats
delay .1

varnish v1 -expect sess_conn == 2
varnish v1 -expect cache_hit == 1
varnish v1 -expect cache_miss == 1
varnish v1 -expect client_req == 2
varnish v1 -expect s_sess == 2
varnish v1 -expect s_fetch == 1

varnish-7.5.0/bin/varnishtest/tests/b00004.vtc

varnishtest "Torture Varnish with start/stop commands"

server s1 {
	rxreq
} -start

varnish v1 -vcl+backend { }

varnish v1 -start
varnish v1 -cliexpect "running" status
varnish v1 -clijson "status -j"
varnish v1 -stop
varnish v1 -cliexpect "stopped" status
varnish v1 -clijson "status -j"
varnish v1 -start
varnish v1 -stop
varnish v1 -start
varnish v1 -stop
varnish v1 -cliok start
varnish v1 -clierr 300 start
varnish v1 -clierr 300 start
varnish v1 -cliok stop
varnish v1 -clierr 300 stop
varnish v1 -clierr 300 stop
varnish v1 -wait

varnish-7.5.0/bin/varnishtest/tests/b00005.vtc

varnishtest "Check that -s works"

server s1 {
	rxreq
	txresp -hdr "Connection: close" -body "012345\n"
} -start

varnish v1 \
	-arg "-s file,${tmpdir}/varnishtest_backing,10M" \
	-vcl+backend {} -start

client c1 {
	txreq -url "/"
	rxresp
	expect resp.status == 200
} -run

server s1 -wait
varnish v1 -stop
shell "rm ${tmpdir}/varnishtest_backing"

varnish-7.5.0/bin/varnishtest/tests/b00006.vtc

varnishtest "Check that -s default works"

server s1 {
	rxreq
	txresp -hdr "Connection: close" -body "012345\n"
} -start

varnish v1 -arg "-s default" -vcl+backend {} -start

client c1 {
	txreq -url "/"
	rxresp
	expect resp.status == 200
} -run

varnish-7.5.0/bin/varnishtest/tests/b00007.vtc

varnishtest "Check chunked encoding from backend works"

server s1 {
	rxreq
	expect req.url == "/bar"
	send "HTTP/1.1 200 OK\r\n"
	send "Transfer-encoding: chunked\r\n"
	send "\r\n"
	send "00000004\r\n1234\r\n"
	send "00000000\r\n"
	send "\r\n"

	rxreq
	expect req.url == "/foo"
	send "HTTP/1.1 200 OK\r\n"
	send "Transfer-encoding: chunked\r\n"
	send "\r\n"
	send "00000004\r\n1234\r\n"
	chunked "1234"
	chunked ""
} -start

varnish v1 -vcl+backend {} -start

client c1 {
	txreq -url "/bar"
	rxresp
	expect resp.status == 200
	expect resp.bodylen == "4"

	txreq -url "/foo"
	rxresp
	expect resp.status == 200
	expect resp.bodylen == "8"
} -run

varnish-7.5.0/bin/varnishtest/tests/b00008.vtc

varnishtest "Test CLI commands and parameter functions"

varnish v1 -arg "-b ${bad_ip}:9080"

varnish v1 -cliok "help"
varnish v1 -cliok "-help"
varnish v1 -cliok "help -a"
varnish v1 -cliok "help -d"
varnish v1 -clierr 101 "help -x"
varnish v1 -cliok "help vcl.load"
varnish v1 -clierr 101 "help ban"
varnish v1 -clierr 101 "FOO?"
varnish v1 -clierr 100 "\x22"
varnish v1 -clierr 105 "help 1 2 3"
varnish v1 -cliok "param.show"

varnish v1 -start

varnish v1 -cliok "help"
varnish v1 -clijson "help -j"
varnish v1 -cliok "backend.list"
varnish v1 -clijson "backend.list -j"
varnish v1 -cliok "ping"
varnish v1 -clijson "ping -j"
varnish v1 -clierr 106 "param.set waiter HASH(0x8839c4c)"
varnish v1 -cliexpect 60 "param.show first_byte_timeout"
varnish v1 -cliok "param.set first_byte_timeout 120"
varnish v1 -cliexpect 120 "param.show first_byte_timeout"
varnish v1 -cliok "param.reset first_byte_timeout"
varnish v1 -cliexpect 60 "param.show first_byte_timeout"
varnish v1 -cliok "param.set cli_limit 128"
varnish v1 -clierr 201 "param.show"
varnish v1 -cliok "\"help\" \"help\""

varnish-7.5.0/bin/varnishtest/tests/b00009.vtc

varnishtest "Check poll acceptor"

server s1 {
	rxreq
	txresp -hdr "Connection: close" -body "012345\n"
} -start

varnish v1 -arg "-Wpoll" -vcl+backend {} -start

client c1 {
	txreq -url "/"
	rxresp
	expect resp.status == 200
	delay .1
	txreq -url "/"
	rxresp
	expect resp.status == 200
} -run

varnish-7.5.0/bin/varnishtest/tests/b00010.vtc

varnishtest "Check simple list hasher"

server s1 {
	rxreq
	txresp -hdr "Connection: close" -body "012345\n"
} -start

varnish v1 -arg "-h simple_list" -vcl+backend {} -start

client c1 {
	txreq -url "/"
	rxresp
	expect resp.status == 200
	txreq -url "/"
	rxresp
	expect resp.status == 200
	expect resp.http.x-varnish == "1003 1002"
} -run

varnish-7.5.0/bin/varnishtest/tests/b00011.vtc

varnishtest "Check HTTP/1.0 EOF transmission"

server s1 {
	rxreq
	send "HTTP/1.0 200 OK\n"
	send "Connection: close\n"
	send "\n"
	send "Body line 1\n"
	send "Body line 2\n"
	send "Body line 3\n"
} -start

varnish v1 -vcl+backend {} -start

client c1 {
	txreq -url "/"
	rxresp
	expect resp.status == 200
	expect resp.bodylen == 36
} -run

varnish-7.5.0/bin/varnishtest/tests/b00012.vtc

varnishtest "Check pipelining"

server s1 {
	rxreq
	expect req.url == "/foo"
	txresp -body "foo"
	rxreq
expect req.url == "/bar" txresp -body "foobar" } -start varnish v1 -vcl+backend {} -start client c1 { send "GET /foo HTTP/1.1\nHost: foo\n\nGET /bar HTTP/1.1\nHost: foo\n\nGET /bar HTTP/1.1\nHost: foo\n\n" rxresp expect resp.status == 200 expect resp.bodylen == 3 expect resp.http.x-varnish == "1001" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.x-varnish == "1003" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.x-varnish == "1005 1004" } -run varnish v1 -expect sess_readahead == 2 varnish-7.5.0/bin/varnishtest/tests/b00013.vtc000066400000000000000000000014551457605730600210260ustar00rootroot00000000000000varnishtest "Check read-head / partial pipelining" server s1 { rxreq expect req.url == "/foo" txresp -body "foo" rxreq expect req.url == "/bar" txresp -body "foobar" } -start varnish v1 -vcl+backend {} # NB: The accept_filter param may not exist. varnish v1 -cli "param.set accept_filter false" varnish v1 -start client c1 { send "GET /foo HTTP/1.1\r\nHost: foo\r\n\r\nGET " rxresp expect resp.status == 200 expect resp.bodylen == 3 expect resp.http.x-varnish == "1001" send "/bar HTTP/1.1\nHost: foo\n\nGET /bar " rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.x-varnish == "1003" send "HTTP/1.1\nHost: foo\n\n" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.x-varnish == "1005 1004" } -run varnish v1 -expect sess_readahead == 2 varnish-7.5.0/bin/varnishtest/tests/b00014.vtc000066400000000000000000000007621457605730600210270ustar00rootroot00000000000000varnishtest "Check -f command line arg" server s1 { rxreq expect req.url == "/foo" txresp -body "foo" rxreq expect req.url == "/bar" txresp -body "bar" } -start shell "echo 'vcl 4.0; backend foo { .host = \"${s1_addr}\"; .port = \"${s1_port}\"; }' > ${tmpdir}/_b00014.vcl" varnish v1 -arg "-f ${tmpdir}/_b00014.vcl" -start client c1 { txreq -url /foo rxresp } -run varnish v1 -cliok "vcl.load foo ${tmpdir}/_b00014.vcl" -cliok "vcl.use foo" client c1 { txreq -url /bar rxresp } -run varnish-7.5.0/bin/varnishtest/tests/b00015.vtc000066400000000000000000000027351457605730600210320ustar00rootroot00000000000000varnishtest "Check synthetic error page caching" # First test that an internally generated error is not cached varnish v1 -cliok "param.set backend_remote_error_holddown 60" varnish v1 -vcl { backend foo { .host = "${bad_backend}"; } } -start client c1 { txreq -url "/" rxresp expect resp.status == 503 expect resp.http.X-varnish == "1001" txreq -url "/" rxresp expect resp.status == 503 expect resp.http.X-varnish == "1003" } -run varnish v1 -expect VBE.vcl1.foo.fail_econnrefused > 0 varnish v1 -expect VBE.vcl1.foo.helddown > 0 # Then check that a cacheable error from the backend is varnish v1 -cliok "ban req.url ~ .*" server s1 { rxreq txresp -status 301 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.http.ttl = beresp.ttl; set beresp.http.uncacheable = beresp.uncacheable; } } client c2 { txreq -url "/" rxresp expect resp.status == 301 expect resp.http.X-varnish == "1006" txreq -url "/" rxresp expect resp.status == 301 expect resp.http.X-varnish == "1008 1007" } -run server s1 -wait # Then check that a non-cacheable error from the backend can be server s1 { rxreq txresp -status 502 } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (beresp.status == 502) { set beresp.ttl = 10m; } } } client c3 { txreq -url "/2" rxresp expect resp.status == 502 expect resp.http.X-varnish == "1010" txreq -url "/2" rxresp 
expect resp.status == 502 expect resp.http.X-varnish == "1012 1011" } -run varnish-7.5.0/bin/varnishtest/tests/b00016.vtc000066400000000000000000000020511457605730600210220ustar00rootroot00000000000000varnishtest "Check naming of backends" server s1 -repeat 2 -keepalive { rxreq txresp } -start varnish v1 -vcl+backend { import directors; sub vcl_recv { return (pass); } sub vcl_backend_response { set beresp.http.X-Backend-Name = bereq.backend; } } -start client c1 { txreq -url "/" rxresp expect resp.http.X-Backend-Name == "s1" } -run varnish v1 -vcl+backend { import directors; sub vcl_init { new bar = directors.random(); bar.add_backend(directors.lookup("s1"), 1); } sub vcl_recv { set req.backend_hint = bar.backend(); return (pass); } sub vcl_backend_response { set beresp.http.X-Director-Name = bereq.backend; set beresp.http.X-Backend-Name = beresp.backend; } } client c1 { txreq -url "/" rxresp expect resp.http.X-Director-Name == "bar" expect resp.http.X-Backend-Name == "s1" } -run varnish v1 -errvcl "Not available in subroutine 'vcl_recv'" { import directors; backend dummy None; sub vcl_recv { if (req.url == "/lookup") { set req.http.foo = directors.lookup("s1"); } return (pass); } } varnish-7.5.0/bin/varnishtest/tests/b00017.vtc000066400000000000000000000007641457605730600210340ustar00rootroot00000000000000varnishtest "Check set resp.body in vcl_synth" varnish v1 -arg "-sTransient=debug,lessspace" -vcl { backend foo { .host = "${bad_backend}"; } sub vcl_recv { return (synth(888)); } sub vcl_synth { set resp.body = "Custom vcl_synth's body"; return (deliver); } } -start client c1 { txreq -url "/" rxresp expect resp.status == 888 expect resp.http.connection != close expect resp.bodylen == 23 expect resp.body == "Custom vcl_synth's body" } -run varnish v1 -expect s_synth == 1 varnish-7.5.0/bin/varnishtest/tests/b00018.vtc000066400000000000000000000011051457605730600210230ustar00rootroot00000000000000varnishtest "Check that synth response in vcl_backend_response works" server s1 { rxreq txresp -body "012345\n" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.http.Foo = "bar"; set beresp.status = 523; set beresp.reason = "not ok"; set beresp.uncacheable = true; set beresp.ttl = 0s; return (deliver); } } -start varnish v1 -cliok "param.set default_grace 0" varnish v1 -cliok "param.set default_keep 0" client c1 { txreq -url "/" rxresp expect resp.status == 523 } -run delay 4 varnish v1 -expect n_object == 0 varnish-7.5.0/bin/varnishtest/tests/b00019.vtc000066400000000000000000000016601457605730600210320ustar00rootroot00000000000000varnishtest "Check that max_restarts outside vcl_recv works and that we don't fall over" server s1 { rxreq txresp -body "012345\n" } -start varnish v1 -vcl+backend { sub vcl_hit { return (restart); } sub vcl_synth { # when we end up here, we have _exceeded_ the number # allowed restarts if (req.restarts == 3) { set resp.status = 200; set resp.reason = "restart=3"; } elsif (req.restarts > 3) { set resp.status = 501; set resp.reason = "restart>3"; } elsif (req.restarts < 3) { set resp.status = 500; set resp.reason = "restart<3"; } } } -start varnish v1 -cliok "param.set max_restarts 2" client c1 { txreq -url "/" rxresp expect resp.status == 200 } -run varnish v1 -cliok "param.set max_restarts 3" client c1 { txreq -url "/" rxresp expect resp.status == 501 } -run varnish v1 -cliok "param.set max_restarts 1" client c1 { txreq -url "/" rxresp expect resp.status == 500 } -run 
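# Illustrative sketch, not part of the upstream test suite: the max_restarts
# tests above (b00019, b00024) trigger restarts unconditionally and rely on
# the parameter to stop the loop. A complete minimal test with the usual
# explicit guard on req.restarts could look like this; the header name
# X-Try-Again is a hypothetical example.
varnishtest "Sketch: guarded restart"

varnish v1 -vcl {
	backend be none;

	sub vcl_recv {
		if (req.restarts == 0 && req.http.X-Try-Again) {
			return (restart);
		}
		return (synth(200));
	}
} -start

client c1 {
	txreq -hdr "X-Try-Again: yes"
	rxresp
	expect resp.status == 200
} -run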
varnish-7.5.0/bin/varnishtest/tests/b00020.vtc000066400000000000000000000014701457605730600210210ustar00rootroot00000000000000varnishtest "Check that between_bytes_timeout behaves from parameters" server s1 { rxreq send "HTTP/1.0 200 OK\r\nConnection: close\r\n\r\n" delay 1.5 # send "Baba\n" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } } -start varnish v1 -cliok "param.set between_bytes_timeout 1" client c1 { txreq rxresp expect resp.status == 503 } -run delay 1 varnish v1 -expect n_object == 0 varnish v1 -expect n_objectcore == 0 server s1 -wait { non_fatal rxreq send "HTTP/1.0 200 OK\r\nConnection: close\r\n\r\n" delay 0.5 send "Baba\n" delay 0.5 send "Baba\n" } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 10 } -run varnish v1 -expect n_object == 1 varnish v1 -expect n_objectcore == 1 varnish v1 -expect fetch_failed == 1 varnish-7.5.0/bin/varnishtest/tests/b00021.vtc000066400000000000000000000013111457605730600210140ustar00rootroot00000000000000varnishtest "Check the between_bytes_timeout behaves from vcl" server s1 { rxreq send "HTTP/1.0 200 OK\r\nConnection: close\r\n\r\n" delay 4.0 send "Baba\n" } -start varnish v1 -vcl+backend { sub vcl_backend_fetch { set bereq.between_bytes_timeout = 2s; } sub vcl_backend_response { set beresp.do_stream = false; } } -start client c1 { txreq rxresp expect resp.status == 503 } -run server s1 { rxreq send "HTTP/1.0 200 OK\r\nConnection: close\r\n\r\n" delay 1.0 send "Baba\n" delay 1.0 send "Baba\n" delay 1.0 send "Baba\n" delay 1.0 send "Baba\n" delay 1.0 send "Baba\n" } -start client c1 { txreq timeout 10 rxresp expect resp.status == 200 expect resp.bodylen == 25 } -run varnish-7.5.0/bin/varnishtest/tests/b00022.vtc000066400000000000000000000011671457605730600210260ustar00rootroot00000000000000varnishtest "Check the between_bytes_timeout behaves from backend definition" server s1 { rxreq send "HTTP/1.0 200 OK\r\nConnection: close\r\n\r\n" delay 1.5 send "Baba\n" } -start varnish v1 -vcl { backend b1 { .host = "${s1_addr}"; .port = "${s1_port}"; .between_bytes_timeout = 1s; } sub vcl_backend_response { set beresp.do_stream = false; } } -start client c1 { txreq rxresp expect resp.status == 503 } -run server s1 { rxreq send "HTTP/1.0 200 OK\r\nConnection: close\r\n\r\n" delay 0.5 send "Baba\n" delay 0.5 send "Baba\n" } -start client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/b00023.vtc000066400000000000000000000021031457605730600210160ustar00rootroot00000000000000varnishtest "Check that the first_byte_timeout works" # From VCL server s1 { rxreq delay 1 txresp } -start varnish v1 -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_fetch { set bereq.first_byte_timeout = 0.5s; } } -start logexpect l1 -v v1 { expect * 1002 FetchError "first byte timeout" } -start client c1 { txreq rxresp expect resp.status == 503 } -run logexpect l1 -wait server s1 { rxreq delay 0.2 txresp } -start client c2 { txreq rxresp expect resp.status == 200 } -run # From CLI varnish v1 -vcl+backend { sub vcl_recv { return (pass); } } varnish v1 -cliok "param.set first_byte_timeout 0.5" server s1 { rxreq delay 1 txresp } -start client c1 -run server s1 { rxreq delay 0.2 txresp } -start client c2 -run # From backend definition server s1 { rxreq delay 1 txresp } -start varnish v1 -vcl { backend b1 { .host = "${s1_addr}"; .port = "${s1_port}"; .first_byte_timeout = 0.5s; } } client c1 -run server s1 { rxreq delay 0.2 txresp } -start client c2 -run 
varnish v1 -expect fetch_failed == 3 varnish-7.5.0/bin/varnishtest/tests/b00024.vtc000066400000000000000000000016361457605730600210310ustar00rootroot00000000000000varnishtest "Check that max_restarts from vcl_recv works and that we don't fall over" varnish v1 -vcl { backend dummy { .host = "${bad_backend}"; } sub vcl_recv { return (restart); } sub vcl_synth { # when we end up here, we have _exceeded_ the number # allowed restarts if (req.restarts == 3) { set resp.status = 200; set resp.reason = "restart=3"; } elsif (req.restarts > 3) { set resp.status = 501; set resp.reason = "restart>3"; } elsif (req.restarts < 3) { set resp.status = 500; set resp.reason = "restart<3"; } } } -start varnish v1 -cliok "param.set max_restarts 2" client c1 { txreq -url "/" rxresp expect resp.status == 200 } -run varnish v1 -cliok "param.set max_restarts 3" client c1 { txreq -url "/" rxresp expect resp.status == 501 } -run varnish v1 -cliok "param.set max_restarts 1" client c1 { txreq -url "/" rxresp expect resp.status == 500 } -run varnish-7.5.0/bin/varnishtest/tests/b00025.vtc000066400000000000000000000106741457605730600210340ustar00rootroot00000000000000varnishtest "more backends" varnish v1 -arg "-p vcc_feature=-err_unref" -vcl { backend d0 { .host = "${bad_backend}"; } backend d1 { .host = "${bad_backend}"; } backend d2 { .host = "${bad_backend}"; } backend d3 { .host = "${bad_backend}"; } backend d4 { .host = "${bad_backend}"; } backend d5 { .host = "${bad_backend}"; } backend d6 { .host = "${bad_backend}"; } backend d7 { .host = "${bad_backend}"; } backend d8 { .host = "${bad_backend}"; } backend d9 { .host = "${bad_backend}"; } backend d10 { .host = "${bad_backend}"; } backend d11 { .host = "${bad_backend}"; } backend d12 { .host = "${bad_backend}"; } backend d13 { .host = "${bad_backend}"; } backend d14 { .host = "${bad_backend}"; } backend d15 { .host = "${bad_backend}"; } backend d16 { .host = "${bad_backend}"; } backend d17 { .host = "${bad_backend}"; } backend d18 { .host = "${bad_backend}"; } backend d19 { .host = "${bad_backend}"; } backend d20 { .host = "${bad_backend}"; } backend d21 { .host = "${bad_backend}"; } backend d22 { .host = "${bad_backend}"; } backend d23 { .host = "${bad_backend}"; } backend d24 { .host = "${bad_backend}"; } backend d25 { .host = "${bad_backend}"; } backend d26 { .host = "${bad_backend}"; } backend d27 { .host = "${bad_backend}"; } backend d28 { .host = "${bad_backend}"; } backend d29 { .host = "${bad_backend}"; } backend d30 { .host = "${bad_backend}"; } backend d31 { .host = "${bad_backend}"; } backend d32 { .host = "${bad_backend}"; } backend d33 { .host = "${bad_backend}"; } backend d34 { .host = "${bad_backend}"; } backend d35 { .host = "${bad_backend}"; } backend d36 { .host = "${bad_backend}"; } backend d37 { .host = "${bad_backend}"; } backend d38 { .host = "${bad_backend}"; } backend d39 { .host = "${bad_backend}"; } backend d40 { .host = "${bad_backend}"; } backend d41 { .host = "${bad_backend}"; } backend d42 { .host = "${bad_backend}"; } backend d43 { .host = "${bad_backend}"; } backend d44 { .host = "${bad_backend}"; } backend d45 { .host = "${bad_backend}"; } backend d46 { .host = "${bad_backend}"; } backend d47 { .host = "${bad_backend}"; } backend d48 { .host = "${bad_backend}"; } backend d49 { .host = "${bad_backend}"; } backend d50 { .host = "${bad_backend}"; } backend d51 { .host = "${bad_backend}"; } backend d52 { .host = "${bad_backend}"; } backend d53 { .host = "${bad_backend}"; } backend d54 { .host = "${bad_backend}"; } backend d55 { .host = 
"${bad_backend}"; } backend d56 { .host = "${bad_backend}"; } backend d57 { .host = "${bad_backend}"; } backend d58 { .host = "${bad_backend}"; } backend d59 { .host = "${bad_backend}"; } backend d60 { .host = "${bad_backend}"; } backend d61 { .host = "${bad_backend}"; } backend d62 { .host = "${bad_backend}"; } backend d63 { .host = "${bad_backend}"; } backend d64 { .host = "${bad_backend}"; } backend d65 { .host = "${bad_backend}"; } backend d66 { .host = "${bad_backend}"; } backend d67 { .host = "${bad_backend}"; } backend d68 { .host = "${bad_backend}"; } backend d69 { .host = "${bad_backend}"; } backend d70 { .host = "${bad_backend}"; } backend d71 { .host = "${bad_backend}"; } backend d72 { .host = "${bad_backend}"; } backend d73 { .host = "${bad_backend}"; } backend d74 { .host = "${bad_backend}"; } backend d75 { .host = "${bad_backend}"; } backend d76 { .host = "${bad_backend}"; } backend d77 { .host = "${bad_backend}"; } # end of 1st 8k VSMW cluster on 64bit backend d78 { .host = "${bad_backend}"; } backend d79 { .host = "${bad_backend}"; } backend d80 { .host = "${bad_backend}"; } backend d81 { .host = "${bad_backend}"; } backend d82 { .host = "${bad_backend}"; } backend d83 { .host = "${bad_backend}"; } backend d84 { .host = "${bad_backend}"; } backend d85 { .host = "${bad_backend}"; } backend d86 { .host = "${bad_backend}"; } backend d87 { .host = "${bad_backend}"; } backend d88 { .host = "${bad_backend}"; } backend d89 { .host = "${bad_backend}"; } backend d90 { .host = "${bad_backend}"; } backend d91 { .host = "${bad_backend}"; } backend d92 { .host = "${bad_backend}"; } backend d93 { .host = "${bad_backend}"; } backend d94 { .host = "${bad_backend}"; } backend d95 { .host = "${bad_backend}"; } backend d96 { .host = "${bad_backend}"; } backend d97 { .host = "${bad_backend}"; } backend d98 { .host = "${bad_backend}"; } backend d99 { .host = "${bad_backend}"; } } -start client c1 { txreq -url "/" rxresp expect resp.status == 503 } -run shell -match "d99" "varnishstat -1 -n ${v1_name}" varnish-7.5.0/bin/varnishtest/tests/b00026.vtc000066400000000000000000000015351457605730600210310ustar00rootroot00000000000000varnishtest "Check the precedence for timeouts" server s1 { rxreq expect req.url == "from_backend" delay 1 txresp } -start server s2 { rxreq expect req.url == "from_vcl" delay 1.5 txresp } -start varnish v1 -vcl { backend b1 { .host = "${s1_addr}"; .port = "${s1_port}"; .first_byte_timeout = 2s; } backend b2 { .host = "${s2_addr}"; .port = "${s2_port}"; .first_byte_timeout = 1s; } sub vcl_recv { if (req.url == "from_backend") { return(pass); } } sub vcl_backend_fetch { set bereq.first_byte_timeout = 2s; if (bereq.url == "from_backend") { set bereq.backend = b1; } else { set bereq.backend = b2; } } } -start varnish v1 -cliok "param.set first_byte_timeout 0.5" client c1 { txreq -url "from_backend" rxresp expect resp.status == 200 txreq -url "from_vcl" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/b00027.vtc000066400000000000000000000010571457605730600210310ustar00rootroot00000000000000varnishtest "test backend transmission corner cases" server s1 { rxreq txresp rxreq txresp -proto HTTP/1.0 -hdr "Connection: keep-alive" rxreq send "HTTP/1.1 200 OK\n" send "Transfer-encoding: foobar\n" send "\n" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } } -start client c1 { txreq -url /foo rxresp expect resp.status == 200 expect resp.bodylen == 0 txreq -url /bar rxresp expect resp.status == 200 expect resp.bodylen 
== 0 txreq -url /barf rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/b00028.vtc000066400000000000000000000010311457605730600210220ustar00rootroot00000000000000varnishtest "regexp match and no-match" server s1 { rxreq txresp -hdr "Foo: bar" -hdr "Bar: foo" -body "1111\n" } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (beresp.http.foo ~ "b" + "ar") { set beresp.http.foo1 = "1"; } else { set beresp.status = 999; } if (beresp.http.bar !~ "bar") { set beresp.http.bar1 = "2"; } else { set beresp.status = 999; } } } -start client c1 { txreq rxresp expect resp.status == "200" expect resp.http.foo1 == "1" expect resp.http.bar1 == "2" } -run varnish-7.5.0/bin/varnishtest/tests/b00029.vtc000066400000000000000000000005261457605730600210330ustar00rootroot00000000000000varnishtest "Test orderly connection closure" server s1 { rxreq txresp -nolen -hdr "Transfer-encoding: chunked" delay .2 chunkedlen 30000 delay .2 chunkedlen 100000 delay .2 chunkedlen 0 } -start varnish v1 -vcl+backend { } -start client c1 { txreq -hdr "Connection: close" delay 3 rxresp expect resp.bodylen == 130000 } -run varnish-7.5.0/bin/varnishtest/tests/b00030.vtc000066400000000000000000000051261457605730600210240ustar00rootroot00000000000000varnishtest "Test timestamps" # We can't test the value of a timestamp, but this should fail # if we can't set the header at all. # We also test that `now` remains unchanged during a vcl sub server s1 { rxreq txresp rxreq txresp rxreq expect req.method == PIPE expect req.http.req-time != expect req.http.bereq-time != txresp } -start varnish v1 -vcl+backend { import vtc; import std; sub recv_sub { set req.http.now-recv_sub = now; } sub vcl_recv { if (req.restarts == 0) { set req.http.req-time = req.time; } else if (req.http.req-time != "" + req.time) { return (fail); } set req.http.now-recv = now; vtc.sleep(1s); call recv_sub; if (req.http.now-recv != req.http.now-recv_sub) { return (fail); } } sub vcl_pipe { set bereq.http.req-time = req.time; set bereq.http.bereq-time = bereq.time; } sub vcl_synth { set resp.http.now-synth = now; if (req.http.req-time != "" + req.time) { return (fail); } set req.http.req-time = req.time; set resp.http.resp-time = resp.time; } sub vcl_deliver { if (req.http.req-time != "" + req.time) { return (fail); } set resp.http.resp-time = resp.time; set resp.http.obj-time = obj.time; set resp.http.now-deliver = now; if (req.http.now-recv == req.http.now-deliver) { return (fail); } vtc.sleep(1s); if (req.restarts == 0) { return (restart); } return (synth(200)); } sub bf_sub { set bereq.http.now-bf_sub = now; } sub vcl_backend_fetch { if (bereq.retries == 0) { set bereq.http.bereq-time = bereq.time; } else if (bereq.http.bereq-time != "" + bereq.time) { # bereq.time is identical for all retries return (fail); } if (bereq.time <= std.time(bereq.http.req-time)) { return (fail); } set bereq.http.now-bf = now; vtc.sleep(1s); call bf_sub; # now remains constant during built-in vcl sub if (bereq.http.now-bf != bereq.http.now-bf_sub) { return (fail); } } sub br_sub { set beresp.http.now-br_sub = now; } sub vcl_backend_response { if (bereq.http.bereq-time != "" + bereq.time) { return (fail); } set beresp.http.beresp-time = beresp.time; set beresp.http.now-br = now; vtc.sleep(1s); call br_sub; if (beresp.http.now-br != beresp.http.now-br_sub) { return (fail); } if (bereq.http.now-bf == beresp.http.now-br) { return (fail); } if (bereq.retries == 0) { return (retry); } } sub vcl_backend_error { call vcl_backend_response; } } -start client 
c1 { txreq rxresp expect resp.status == 200 expect resp.http.now-synth ~ "^..., .. ... .... ..:..:.. GMT" txreq -method PIPE rxresp } -run varnish-7.5.0/bin/varnishtest/tests/b00031.vtc000066400000000000000000000005221457605730600210200ustar00rootroot00000000000000varnishtest "Test X-Forward-For headers" server s1 { rxreq expect req.http.X-Forwarded-For == "${localhost}" txresp rxreq expect req.http.X-Forwarded-For == "1.2.3.4, ${localhost}" txresp } -start varnish v1 -vcl+backend { } -start client c1 { txreq -url /1 rxresp txreq -url /2 -hdr "X-forwarded-for: 1.2.3.4" rxresp } -run varnish-7.5.0/bin/varnishtest/tests/b00032.vtc000066400000000000000000000014211457605730600210200ustar00rootroot00000000000000varnishtest "CLI coverage test" varnish v1 -cliok storage.list varnish v1 -clijson "storage.list -j" server s1 { rxreq txresp } -start varnish v1 varnish v1 -cliok vcl.list varnish v1 -clijson "vcl.list -j" varnish v1 -vcl+backend {} varnish v1 -vcl+backend {} varnish v1 -cliok vcl.list varnish v1 -clijson "vcl.list -j" varnish v1 -cliok start varnish v1 -cliok vcl.list varnish v1 -clijson "vcl.list -j" varnish v1 -cliok "vcl.use vcl1" varnish v1 -clierr 300 "vcl.discard vcl1" varnish v1 -clierr 106 "vcl.discard vcl0" varnish v1 -clierr 106 {vcl.inline vcl2 "vcl 4.0; backend foo {.host = \"${localhost}\";} "} varnish v1 -clierr 106 {vcl.load vcl3 ./nonexistent.vcl} varnish v1 -cliok "vcl.discard vcl2" varnish v1 -clierr 106 {vcl.load /invalid/name.vcl vcl4} varnish-7.5.0/bin/varnishtest/tests/b00033.vtc000066400000000000000000000011131457605730600210170ustar00rootroot00000000000000varnishtest "classic hash code coverage" server s1 { rxreq expect req.url == /1 txresp -bodylen 5 rxreq expect req.url == /2 txresp -bodylen 6 rxreq expect req.url == /1 txresp -bodylen 7 rxreq expect req.url == /2 txresp -bodylen 8 } -start varnish v1 -arg "-hclassic,11" -vcl+backend {} -start client c1 { txreq -url /1 rxresp expect resp.bodylen == 5 txreq -url /2 rxresp expect resp.bodylen == 6 } -run varnish v1 -cliok "ban req.url ~ ." 
client c1 {
	txreq -url /1
	rxresp
	expect resp.bodylen == 7
	txreq -url /2
	rxresp
	expect resp.bodylen == 8
} -run

varnish-7.5.0/bin/varnishtest/tests/b00034.vtc

varnishtest "mempool param handling"

server s1 {
} -start

varnish v1 -vcl+backend {}

varnish v1 -cliok "param.set pool_req 1,10,1"
varnish v1 -clierr 106 "param.set pool_req 10"
varnish v1 -clierr 106 "param.set pool_req 10,1,1"
varnish v1 -clierr 106 "param.set pool_req a,10,10"
varnish v1 -clierr 106 "param.set pool_req 10,a,10"
varnish v1 -clierr 106 "param.set pool_req 10,10,a"

varnish-7.5.0/bin/varnishtest/tests/b00035.vtc

varnishtest "Test grace object not replaced by ttl = 0s"

server s1 {
	rxreq
	txresp -bodylen 3
	rxreq
	txresp -bodylen 6
	# bgfetch fails
} -start

varnish v1 -vcl+backend {
	sub vcl_recv {
		if (req.http.X-Force-Miss) {
			set req.hash_always_miss = true;
		}
	}
	sub vcl_backend_response {
		if (beresp.status != 200 || bereq.http.X-Force-Miss) {
			set beresp.ttl = 0s;
		} else {
			set beresp.ttl = 0.001s;
		}
		set beresp.grace = 10s;
		return (deliver);
	}
} -start

client c1 {
	txreq
	rxresp
	expect resp.status == 200
	expect resp.bodylen == 3

	txreq -hdr "X-Force-Miss: 1"
	rxresp
	expect resp.status == 200
	expect resp.bodylen == 6

	txreq
	rxresp
	expect resp.status == 200
	expect resp.bodylen == 3
} -run

varnish-7.5.0/bin/varnishtest/tests/b00036.vtc

varnishtest "builtin purge from vcl_recv{}"

server s1 {
	rxreq
	txresp -hdr "foo: 1"
	rxreq
	txresp -hdr "foo: 2"
} -start

varnish v1 -vcl+backend {
	sub vcl_recv {
		if (req.method == "PURGE") {
			return (purge);
		}
	}
} -start

client c1 {
	txreq
	rxresp
	expect resp.http.foo == 1
	txreq
	rxresp
	expect resp.http.foo == 1
	txreq -req PURGE
	rxresp
	expect resp.reason == "Purged"
	txreq
	rxresp
	expect resp.http.foo == 2
} -run

varnish v1 -vsl_catchup

varnish v1 -expect MAIN.n_purges == 1
varnish v1 -expect MAIN.n_obj_purged == 1
# NB: a purge used to increase n_expired
varnish v1 -expect MAIN.n_expired == 0

varnish-7.5.0/bin/varnishtest/tests/b00037.vtc

varnishtest "Error on multiple Host headers"

varnish v1 -vcl {backend be none;} -start

varnish v1 -cliok "param.set debug +syncvsl"

client c1 {
	txreq -hdr "Host: foo" -hdr "Host: bar"
	rxresp
	expect resp.status == 400
} -run

varnish v1 -vsl_catchup
varnish v1 -expect client_req_400 == 1

client c1 {
	txreq -method POST -hdr "Content-Length: 12" -hdr "Content-Length: 12" -bodylen 12
	rxresp
	expect resp.status == 400
} -run

varnish v1 -vsl_catchup
varnish v1 -expect client_req_400 == 2

varnish v1 -cliok "param.set feature +http2"

client c2 {
	stream 7 {
		txreq -hdr host foo -hdr host bar
		rxresp
		expect resp.status == 400
	} -run
} -run

varnish v1 -vsl_catchup
varnish v1 -expect client_req_400 == 3

# H2 with multiple content-length runs into thread-scheduling differences,
# and is unnecessary, as we know the check works from H1 and that it
# will be hit, because H2 with multiple Host: triggered.
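# Illustrative sketch, not part of the upstream test suite: b00036 above
# accepts PURGE from any client. Outside of a test the same return(purge)
# is normally gated by an ACL; the ACL name "purgers" and the addresses
# listed in it are assumptions made for this sketch.
varnishtest "Sketch: ACL-guarded purge"

server s1 {
	rxreq
	txresp
} -start

varnish v1 -vcl+backend {
	acl purgers {
		"127.0.0.1";
		"::1";
	}
	sub vcl_recv {
		if (req.method == "PURGE") {
			if (client.ip !~ purgers) {
				return (synth(405));
			}
			return (purge);
		}
	}
} -start

client c1 {
	txreq -req PURGE
	rxresp
	expect resp.reason == "Purged"
} -run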
varnish-7.5.0/bin/varnishtest/tests/b00038.vtc000066400000000000000000000011421457605730600210260ustar00rootroot00000000000000varnishtest "Test abandon from vcl_backend_xxx" server s1 { rxreq expect req.url == "/bar" txresp } -start varnish v1 -arg "-pfirst_byte_timeout=0.1" -vcl+backend { sub vcl_backend_fetch { if (bereq.url == "/foo") { return (abandon); } } sub vcl_backend_response { if (bereq.url == "/bar") { return (abandon); } } sub vcl_backend_error { return (abandon); } } -start client c1 { txreq -url /foo rxresp expect resp.status == 503 txreq -url /bar rxresp expect resp.status == 503 txreq -url /baz rxresp expect resp.status == 503 } -run varnish v1 -expect fetch_failed == 1 varnish-7.5.0/bin/varnishtest/tests/b00039.vtc000066400000000000000000000026171457605730600210370ustar00rootroot00000000000000varnishtest "Test Backend IMS" server s1 { rxreq txresp -hdr "Last-Modified: Wed, 11 Sep 2013 13:36:55 GMT" -body "Geoff Rules" rxreq expect req.http.if-modified-since == "Wed, 11 Sep 2013 13:36:55 GMT" txresp -status 304 rxreq expect req.http.if-modified-since == "Wed, 11 Sep 2013 13:36:55 GMT" txresp -status 304 } -start varnish v1 -cliok "param.set vsl_mask +ExpKill" varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 2s; set beresp.grace = 20s; set beresp.keep = 1m; set beresp.http.was-304 = beresp.was_304; } } -start logexpect l1 -v v1 -g raw -q ExpKill { expect * 0 ExpKill "VBF_Superseded x=1002 n=1005" expect * 0 ExpKill "EXP_Removed x=1002" expect * 0 ExpKill "VBF_Superseded x=1005 n=1010" expect * 0 ExpKill "EXP_Removed x=1005" } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.body == "Geoff Rules" expect resp.http.was-304 == "false" } -run delay 3 client c1 { txreq rxresp expect resp.status == 200 expect resp.body == "Geoff Rules" expect resp.http.was-304 == "false" } -run delay 1 client c1 { txreq rxresp expect resp.status == 200 expect resp.body == "Geoff Rules" expect resp.http.was-304 == "true" } -run delay 1 client c1 { txreq rxresp expect resp.status == 200 expect resp.body == "Geoff Rules" expect resp.http.was-304 == "true" } -run logexpect l1 -wait varnish v1 -expect MAIN.n_superseded == 2 varnish-7.5.0/bin/varnishtest/tests/b00040.vtc000066400000000000000000000042411457605730600210220ustar00rootroot00000000000000varnishtest "test certain malformed requests and validate_headers" server s1 { rxreq expect req.url == /4 txresp rxreq expect req.url == /9 txresp } -start varnish v1 -vcl+backend { import debug; sub vcl_recv { if (req.url == "/9") { set req.http.foo = {" "}; } } sub vcl_deliver { if (req.url == "/9") { set resp.http.valid1 = debug.validhdr({" "}); set resp.http.valid2 = debug.validhdr("a"); } } } -start logexpect l1 -v v1 -g raw { expect * 1001 BogoHeader {1st header has white space:.*} expect * 1003 BogoHeader {1st header has white space:.*} expect * 1005 BogoHeader {Header has ctrl char 0x0d} expect * 1010 BogoHeader {Header has ctrl char 0x01} expect * 1012 BogoHeader {Header has ctrl char 0x0d} expect * 1014 BogoHeader {Header has ctrl char 0x0d} expect * 1016 BogoHeader {Missing header name:.*} expect * 1018 VCL_Error {Bad header foo:} } -start client c1 { send "GET /1 HTTP/1.1\r\n" send " Host: foo\r\n" send "\r\n" rxresp expect resp.status == 400 } -run delay .1 client c1 { send "GET /2 HTTP/1.1\r\n" send " Host: foo\r\n" send "\r\n" rxresp expect resp.status == 400 } -run delay .1 client c1 { send "GET /3 HTTP/1.1\r\n" send "\rHost: foo\r\n" send "\r\n" rxresp expect resp.status == 400 } -run delay .1 client 
c1 { send "GET /4 HTTP/1.1\r\n" send "Host: foo\r\n\r\n" rxresp expect resp.status == 200 } -run delay .1 client c1 { send "GET /5 HTTP/1.1\r\nHost: foo\r\nBogo: Header\001More\r\n\r\n" rxresp expect resp.status == 400 } -run delay .1 client c1 { send "GET /6 HTTP/1.1\r\nHost: foo\r\nBogo: Header\r\r\n\r\n" rxresp expect resp.status == 400 } -run delay .1 client c1 { send "GET /7 HTTP/1.1\r\nHost: foo\r\nBogo: Header\rMore\r\n\r\n" rxresp expect resp.status == 400 } -run delay .1 client c1 { send "GET /8 HTTP/1.1\r\nHost: foo\r\n: Header\r\n\r\n" rxresp expect resp.status == 400 } -run delay .1 client c1 { txreq -url /9 rxresp expect resp.status == 503 } -run logexpect l1 -wait varnish v1 -cliok "param.set feature -validate_headers" client c1 { txreq -url /9 rxresp expect resp.status == 200 expect resp.http.valid1 == false expect resp.http.valid2 == true } -run varnish-7.5.0/bin/varnishtest/tests/b00041.vtc000066400000000000000000000036201457605730600210230ustar00rootroot00000000000000varnishtest "Test varnishadm and the Telnet CLI" varnish v1 -vcl {backend foo { .host = "${localhost}"; } } -start shell -err -expect {Usage: varnishadm} \ "varnishadm -7" shell -err -expect {Could not get hold of varnishd, is it running?} \ "varnishadm -n ${v1_name}/nonexistent" shell -err -expect {Connection failed} \ "varnishadm -t 4 -T ${bad_ip}:1 -S ${v1_name}/_.secret" server s1 { send "FOO\n" } -start shell -err -expect {Rejected 400} \ {varnishadm -T ${s1_sock} -S /etc/group} server s1 { send "107 59 \n" send "qbvnnftpkgubadqpzznkkazoxlyqbcbj\n\n" send "Authentication required.\n" send "\n" } -start shell -err -expect {Authentication required} \ {varnishadm -T ${s1_sock}} server s1 { send "107 59 \n" send "qbvnnftpkgubadqpzznkkazoxlyqbcbj\n\n" send "Authentication required.\n" send "\n" } -start shell -err -expect {Cannot open } \ {varnishadm -T ${s1_sock} -S ${v1_name}/_.nonexistent} server s1 { send "107 59 \n" send "qbvnnftpkgubadqpzznkkazoxlyqbcbj\n\n" send "Authentication required.\n" send "\n" recv 70 send "599 0 \n" send "\n" } -start shell -err -expect {Rejected 599} \ {varnishadm -T ${s1_sock} -S ${v1_name}/_.secret} server s1 { send "107 59 \n" send "qbvnnftpkgubadqpzznkkazoxlyqbcbj\n\n" send "Authentication required.\n" send "\n" recv 70 send "200 0 \n" send "\n" recv 5 } -start shell -err -expect {No pong received from server} \ {varnishadm -T ${s1_sock} -S ${v1_name}/_.secret} server s1 { send "107 59 \n" send "qbvnnftpkgubadqpzznkkazoxlyqbcbj\n\n" send "Authentication required.\n" send "\n" recv 70 send "200 0 \n" send "\n" recv 5 send "200 8 \n" send "PONG 12\n" send "\n" recv 5 send "200 7 \n" send "Tested\n" send "\n" } -start shell -expect {Tested} \ {varnishadm -T ${s1_sock} -S ${v1_name}/_.secret test} shell "varnishadm -n ${v1_name} help > /dev/null" varnish-7.5.0/bin/varnishtest/tests/b00042.vtc000066400000000000000000000042601457605730600210250ustar00rootroot00000000000000varnishtest "param edge cases" varnish v1 -vcl {backend be none;} -start varnish v1 -clierr 106 "param.set default_ttl -1" varnish v1 -clierr 106 "param.set default_ttl 1x" varnish v1 -cliok "param.set default_ttl 1s" varnish v1 -clierr 106 {param.set default_ttl "1 x"} varnish v1 -cliok {param.set default_ttl "1 s"} varnish v1 -clierr 106 {param.set acceptor_sleep_decay "0.42 is not a number"} varnish v1 -clierr 106 "param.set acceptor_sleep_max 20" varnish v1 -cliok "param.set prefer_ipv6 off" varnish v1 -cliok "param.set prefer_ipv6 no" varnish v1 -cliok "param.set prefer_ipv6 disable" varnish v1 -cliok 
"param.set prefer_ipv6 false" varnish v1 -cliok "param.set prefer_ipv6 on" varnish v1 -cliok "param.set prefer_ipv6 yes" varnish v1 -cliok "param.set prefer_ipv6 enable" varnish v1 -cliok "param.set prefer_ipv6 true" varnish v1 -clierr 106 "param.set prefer_ipv6 foobar" varnish v1 -clierr 106 "param.set http_max_hdr 0" varnish v1 -clierr 106 "param.set http_max_hdr 1000000" varnish v1 -clierr 106 "param.set workspace_thread 1b" varnish v1 -clierr 106 "param.set workspace_thread 1m" varnish v1 -clierr 106 "param.set workspace_thread 1x" varnish v1 -clierr 106 "param.set workspace_thread x" varnish v1 -clierr 106 "param.set user ///" varnish v1 -clierr 106 "param.set user ///" varnish v1 -clierr 106 {param.set pool_sess "\""} varnish v1 -cliok {param.set thread_pool_max 110} varnish v1 -clierr 106 {param.set thread_pool_min 111} varnish v1 -cliok {param.set thread_pool_min 51} varnish v1 -clierr 106 {param.set thread_pool_max 50} varnish v1 -cliok {param.set thread_pool_max 51} varnish v1 -cliok {param.set thread_pool_max unlimited} varnish v1 -clierr 106 {param.show fofofofo} varnish v1 -cliok "param.show changed" varnish v1 -cliok "param.show " varnish v1 -cliok "param.show -l" varnish v1 -clijson "param.show -j pool_req" varnish v1 -clijson "param.show -j pool_sess" varnish v1 -clijson "param.show -j changed" varnish v1 -clijson "param.show -j" varnish v1 -clijson "param.set -j default_ttl 0" varnish v1 -clierr 106 "param.show -j -l" varnish v1 -clierr 106 "param.show -j fofofofo" varnish v1 -clierr 105 "param.show debug debug" varnish v1 -clierr 105 "param.show -j debug debug" varnish-7.5.0/bin/varnishtest/tests/b00043.vtc000066400000000000000000000016171457605730600210310ustar00rootroot00000000000000varnishtest "Test stale-while-revalidate" server s1 { rxreq txresp -hdr "Cache-Control: max-age=30, stale-while-revalidate=30" rxreq txresp -hdr "Cache-Control: max-age=0, stale-while-revalidate=30" rxreq txresp -hdr "Cache-Control: max-age=30, stale-while-revalidate=30" -hdr "Age: 40" rxreq txresp -status 500 -hdr "Cache-Control: max-age=30, stale-while-revalidate=30" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.http.grace = beresp.grace; set beresp.http.ttl = beresp.ttl; } } -start client c1 { txreq -url /1 rxresp expect resp.http.grace == 30.000 expect resp.http.ttl == 30.000 txreq -url /2 rxresp expect resp.http.grace == 30.000 expect resp.http.ttl == 0.000 txreq -url /3 rxresp expect resp.http.grace == 30.000 expect resp.http.ttl == -10.000 txreq -url /4 rxresp expect resp.http.grace == 10.000 expect resp.http.ttl == 0.000 } -run varnish-7.5.0/bin/varnishtest/tests/b00044.vtc000066400000000000000000000003541457605730600210270ustar00rootroot00000000000000varnishtest "Test/coverage of varnish master signal handling" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { } -start client c1 { txreq rxresp } -run server s1 -wait shell "kill -15 ${v1_pid}" varnish v1 -cleanup varnish-7.5.0/bin/varnishtest/tests/b00045.vtc000066400000000000000000000007501457605730600210300ustar00rootroot00000000000000varnishtest "Check Pid file locking" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start client c1 { txreq rxresp } -run delay .2 shell -err -expect {Error: Could not open pid-file} { varnishd -P /dev/tty -b None -a :0 -n ${tmpdir} } shell -err -expect {Error: Varnishd is already running} { varnishd -P ${v1_name}/varnishd.pid -b None -a :0 -n ${tmpdir} } shell -err -expect {Error: Varnishd is already running} { varnishd -b None -a:0 -n ${tmpdir}/v1 
-F } varnish-7.5.0/bin/varnishtest/tests/b00046.vtc000066400000000000000000000010311457605730600210220ustar00rootroot00000000000000varnishtest "Check that TCP OOB urgent data doesn't cause ill effects" server s1 { rxreq send_urgent " " txresp send_urgent " " rxreq send_urgent " " txresp send_urgent " " } -start # -cli because accept_filter may not be supported varnish v1 -cli "param.set accept_filter off" varnish v1 -vcl+backend {} -start client c1 { delay 0.5 send_urgent " " expect_close } -run client c1 { send_urgent " " txreq -url /1 send_urgent " " rxresp send_urgent " " txreq -url /2 send_urgent " " rxresp send_urgent " " } -run varnish-7.5.0/bin/varnishtest/tests/b00047.vtc000066400000000000000000000007431457605730600210340ustar00rootroot00000000000000varnishtest "Check that all but HTTP/1.0 and HTTP/1.1 get 400" server s1 { rxreq txresp -body "FOO" } -start varnish v1 -vcl+backend { } -start client c1 { send "GET /\r\n\r\n" rxresp expect resp.status == 400 } -run client c1 { send "GET / HTTP/0.5\r\n\r\n" rxresp expect resp.status == 400 } -run client c1 { send "GET / HTTP/1.2\r\n\r\n" rxresp expect resp.status == 400 } -run client c1 { send "GET / HTTP/2.0\r\n\r\n" rxresp expect resp.status == 400 } -run varnish-7.5.0/bin/varnishtest/tests/b00048.vtc000066400000000000000000000013251457605730600210320ustar00rootroot00000000000000varnishtest "Run a lot of transactions through" server s0 { loop 10 { rxreq txresp -body "foo1" } rxreq txresp -hdr "Connection: close" -body "foo1" expect_close } -dispatch varnish v1 -arg "-Wpoll" -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_fetch { set bereq.backend = s0; } } -start client c1 { loop 20 { txreq -url /c1 rxresp expect resp.bodylen == 4 expect resp.status == 200 } } -start client c2 { loop 20 { txreq -url /c2 rxresp expect resp.bodylen == 4 expect resp.status == 200 } } -start client c3 { loop 20 { txreq -url /c3 rxresp expect resp.bodylen == 4 expect resp.status == 200 } } -start client c1 -wait client c2 -wait client c3 -wait varnish-7.5.0/bin/varnishtest/tests/b00049.vtc000066400000000000000000000016721457605730600210400ustar00rootroot00000000000000varnishtest "RFC 7230 compliance" server s1 { rxreq txresp -gzipbody "FOOOOOOBAR" } -start varnish v1 -vcl+backend { } -start logexpect l1 -v v1 -g raw { expect * 1004 BogoHeader "Illegal char 0x20 in header name" } -start logexpect l2 -v v1 -g raw { expect * 1006 BogoHeader "Illegal char 0x2f in header name" } -start client c1 { send "GET / HTTP/1.1\r\n" send "Host: foo\r\n" send "\r\n" rxresp expect resp.status == 200 expect resp.bodylen == 10 send "GET / HTTP/1.1\r\n" send "Host: foo\r\n" send "Accept-Encoding: gzip\r\n" send "\r\n" rxresp expect resp.status == 200 expect resp.bodylen == 33 send "GET / HTTP/1.1\r\n" send "Host: foo\r\n" send "Accept-Encoding : gzip\r\n" send "\r\n" rxresp expect resp.status == 400 } -run client c1 { send "GET / HTTP/1.1\r\n" send "Host: foo\r\n" send "Accept/Encoding: gzip\r\n" send "\r\n" rxresp expect resp.status == 400 } -run logexpect l1 -wait logexpect l2 -wait varnish-7.5.0/bin/varnishtest/tests/b00050.vtc000066400000000000000000000025401457605730600210230ustar00rootroot00000000000000varnishtest "VXID log filtering" server s1 { rxreq txresp } -start varnish v1 -arg "-p thread_pools=1" -vcl+backend { } -start logexpect l1 -v v1 -q "vxid == 1001" { expect 0 1001 Begin "req 1000 rxreq" } -start client c1 { txreq rxresp } -run logexpect l1 -wait # vxid only supports integer operations shell -err -expect "Expected vxid operator got '~'" { 
varnishlog -n ${v1_name} -d -q 'vxid ~ 1001' } shell -err -expect "Expected vxid operator got '!~'" { varnishlog -n ${v1_name} -d -q 'vxid !~ 1001' } shell -err -expect "Expected vxid operator got 'eq'" { varnishlog -n ${v1_name} -d -q 'vxid eq 1001' } shell -err -expect "Expected vxid operator got 'ne'" { varnishlog -n ${v1_name} -d -q 'vxid ne 1001' } # vxid only supports integer operands shell -err -expect "Expected integer got '1001.5'" { varnishlog -n ${v1_name} -d -q 'vxid != 1001.5' } # vxid doesn't support taglist selection shell -err -expect "Unexpected taglist selection for vxid" { varnishlog -n ${v1_name} -d -q 'vxid[1] >= 1001' } shell -err -expect "Unexpected taglist selection for vxid" { varnishlog -n ${v1_name} -d -q '{1}vxid <= 1001' } shell -err -expect "Unexpected taglist selection for vxid" { varnishlog -n ${v1_name} -d -q 'vxid,Link > 1001' } shell -err -expect "Unexpected taglist selection for vxid" { varnishlog -n ${v1_name} -d -q 'vxid,vxid < 1001' } varnish-7.5.0/bin/varnishtest/tests/b00051.vtc000066400000000000000000000011151457605730600210210ustar00rootroot00000000000000varnishtest "Test req.hash and bereq.hash" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import blob; sub vcl_backend_response { set beresp.http.bereq_hash = blob.encode(HEX, blob=bereq.hash); } sub vcl_deliver { set resp.http.req_hash = blob.encode(HEX, blob=req.hash); set resp.http.req_hash-sf = req.hash; } } -start client c1 { txreq -hdr "Host: localhost" rxresp expect resp.http.req_hash ~ "[[:xdigit:]]{64}" expect resp.http.req_hash == resp.http.bereq_hash expect resp.http.req_hash-sf == ":3k0f0yRKtKt7akzkyNsTGSDOJAZOQowTwKWhu5+kIu0=:" } -run varnish-7.5.0/bin/varnishtest/tests/b00052.vtc000066400000000000000000000017601457605730600210300ustar00rootroot00000000000000varnishtest "The cache_hit_grace counter" server s1 { rxreq expect req.url == "/1" expect req.http.bgfetch == false txresp -hdr "Age: 1" -hdr "Cache-Control: max-age=2" -body "1" rxreq expect req.url == "/1" expect req.http.bgfetch == true txresp -body "2" rxreq expect req.url == "/2" expect req.http.bgfetch == false txresp } -start varnish v1 -vcl+backend { sub vcl_backend_fetch { set bereq.http.bgfetch = bereq.is_bgfetch; } } -start client c1 { txreq -url "/1" rxresp expect resp.body == "1" } -run delay 2 # Get a grace hit, will trigger a background fetch client c2 { txreq -url "/1" rxresp expect resp.body == "1" } -run varnish v1 -expect cache_hit >= cache_hit_grace delay 2 client c3 { txreq -url "/2" rxresp txreq -url "/1" rxresp expect resp.body == "2" } -run varnish v1 -expect cache_hit >= cache_hit_grace # Check that counters are correct: varnish v1 -expect cache_hit == 2 varnish v1 -expect cache_hit_grace == 1 varnish v1 -expect cache_miss == 2 varnish-7.5.0/bin/varnishtest/tests/b00053.vtc000066400000000000000000000042701457605730600210300ustar00rootroot00000000000000varnishtest "Does anything get through Unix domain sockets at all ?" 
server s1 -listen "${tmpdir}/s1.sock" { rxreq txresp -body "012345\n" } -start varnish v1 -arg "-a ${tmpdir}/v1.sock -a ${listen_addr}" -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } sub vcl_deliver { # make s_resp_hdrbytes deterministic unset resp.http.via; } } -start varnish v1 -cliok "param.set debug +workspace" varnish v1 -cliok "param.set debug +witness" varnish v1 -expect n_object == 0 varnish v1 -expect sess_conn == 0 varnish v1 -expect client_req == 0 varnish v1 -expect cache_miss == 0 client c1 -connect "${v1_sock}" { txreq -url "/" rxresp expect resp.status == 200 } -run varnish v1 -expect n_object == 1 varnish v1 -expect sess_conn == 1 varnish v1 -expect client_req == 1 varnish v1 -expect cache_miss == 1 varnish v1 -expect s_sess == 1 varnish v1 -expect s_resp_bodybytes == 7 varnish v1 -expect s_resp_hdrbytes == 158 # varnishtest "vtc v_* macros when the listen address is UDS" (a00019) varnish v2 -arg "-a ${tmpdir}/v1.sock -b '${bad_backend}'" -start varnish v2 -syntax 4.0 -errvcl {Compiled VCL version (4.0) not supported.} { backend default None; } varnish v2 -syntax 4.0 -errvcl \ {Unix socket backends only supported in VCL4.1 and higher.} \ {backend default { .path = "${tmpdir}/v1.sock"; }} varnish v3 -vcl { backend default None; sub vcl_recv { return(synth(200)); } sub vcl_synth { set resp.http.addr = "${v1_addr}"; set resp.http.port = "${v1_port}"; set resp.http.sock = "${v1_sock}"; set resp.http.a0_addr = "${v1_a0_addr}"; set resp.http.a0_port = "${v1_a0_port}"; set resp.http.a0_sock = "${v1_a0_sock}"; set resp.http.a1_addr = "${v1_a1_addr}"; set resp.http.a1_port = "${v1_a1_port}"; set resp.http.a1_sock = "${v1_a1_sock}"; } } -start client c1 -connect ${v3_sock} { txreq rxresp expect resp.http.addr == "0.0.0.0" expect resp.http.port == "0" expect resp.http.sock == "${tmpdir}/v1.sock" expect resp.http.a0_addr == "0.0.0.0" expect resp.http.a0_port == "0" expect resp.http.a0_sock == "${tmpdir}/v1.sock" expect resp.http.a1_addr != "0.0.0.0" expect resp.http.a1_port != "0" expect resp.http.a1_sock ~ "[^ ]+:[^ ]+" } -run varnish-7.5.0/bin/varnishtest/tests/b00054.vtc000066400000000000000000000005601457605730600210270ustar00rootroot00000000000000varnishtest "Check poll acceptor on a UDS listen address" server s1 { rxreq txresp -hdr "Connection: close" -body "012345\n" } -start varnish v1 -arg "-a ${tmpdir}/v1.sock -Wpoll" -vcl+backend {} -start client c1 -connect "${tmpdir}/v1.sock" { txreq -url "/" rxresp expect resp.status == 200 delay .1 txreq -url "/" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/b00055.vtc000066400000000000000000000013531457605730600210310ustar00rootroot00000000000000varnishtest "Check pipelining over a UDS listen address" server s1 { rxreq expect req.url == "/foo" txresp -body "foo" rxreq expect req.url == "/bar" txresp -body "foobar" } -start varnish v1 -arg "-a ${tmpdir}/v1.sock" -vcl+backend {} -start client c1 -connect "${tmpdir}/v1.sock" { send "GET /foo HTTP/1.1\nHost: foo\n\nGET /bar HTTP/1.1\nHost: foo\n\nGET /bar HTTP/1.1\nHost: foo\n\n" rxresp expect resp.status == 200 expect resp.bodylen == 3 expect resp.http.x-varnish == "1001" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.x-varnish == "1003" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.x-varnish == "1005 1004" } -run varnish v1 -expect sess_readahead == 2 
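# Illustrative sketch, not part of the upstream test suite: the UDS tests
# above connect vtc clients directly to the socket path. Outside of
# varnishtest the same listener can be exercised with curl's --unix-socket
# option; the "feature cmd" guard, the use of curl, and the synthetic VCL
# are assumptions made for this sketch.
varnishtest "Sketch: query a UDS listener with curl"

feature cmd {curl --version}

varnish v1 -arg "-a ${tmpdir}/v1.sock" -vcl {
	backend be none;
	sub vcl_recv {
		return (synth(200));
	}
} -start

shell -match "HTTP/1.1 200" {
	curl -si --unix-socket ${tmpdir}/v1.sock http://localhost/
}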
varnish-7.5.0/bin/varnishtest/tests/b00056.vtc000066400000000000000000000016001457605730600210250ustar00rootroot00000000000000varnishtest "Check read-head / partial pipelining over a UDS listen address" server s1 { rxreq expect req.url == "/foo" txresp -body "foo" rxreq expect req.url == "/bar" txresp -body "foobar" } -start varnish v1 -arg "-a ${tmpdir}/v1.sock" -vcl+backend {} # NB: The accept_filter param may not exist. varnish v1 -cli "param.set accept_filter false" varnish v1 -start client c1 -connect "${tmpdir}/v1.sock" { send "GET /foo HTTP/1.1\r\nHost: foo\r\n\r\nGET " rxresp expect resp.status == 200 expect resp.bodylen == 3 expect resp.http.x-varnish == "1001" send "/bar HTTP/1.1\nHost: foo\n\nGET /bar " rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.x-varnish == "1003" send "HTTP/1.1\nHost: foo\n\n" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.x-varnish == "1005 1004" } -run varnish v1 -expect sess_readahead == 2 varnish-7.5.0/bin/varnishtest/tests/b00057.vtc000066400000000000000000000007021457605730600210300ustar00rootroot00000000000000varnishtest "Test orderly connection closure of a UDS listen socket" server s1 -listen "${tmpdir}/s1.sock" { rxreq txresp -nolen -hdr "Transfer-encoding: chunked" delay .2 chunkedlen 30000 delay .2 chunkedlen 100000 delay .2 chunkedlen 0 } -start varnish v1 -arg "-a ${tmpdir}/v1.sock" -vcl+backend { } -start client c1 -connect "${tmpdir}/v1.sock" { txreq -hdr "Connection: close" delay 3 rxresp expect resp.bodylen == 130000 } -run varnish-7.5.0/bin/varnishtest/tests/b00058.vtc000066400000000000000000000006541457605730600210370ustar00rootroot00000000000000varnishtest "Test backend definition documentation examples" feature cmd {getent hosts localhost && getent services http} varnish v1 -arg "-p vcc_feature=-err_unref" -vcl { backend b1 {.host = "127.0.0.1";} backend b2 {.host = "[::1]:8080";} backend b3 {.host = "localhost:8081";} backend b4 {.host = "localhost:http";} backend b5 {.host = "127.0.0.1";.port = "8081";} backend b6 {.host = "127.0.0.1";.port = "http";} } varnish-7.5.0/bin/varnishtest/tests/b00059.vtc000066400000000000000000000015611457605730600210360ustar00rootroot00000000000000varnishtest "Run a lot of transactions through Unix domain sockets" server s0 -listen "${tmpdir}/s1.sock" { loop 10 { rxreq txresp -body "foo1" } rxreq txresp -hdr "Connection: close" -body "foo1" expect_close } -dispatch varnish v1 -arg "-a ${tmpdir}/v1.sock -Wpoll" -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_fetch { set bereq.backend = s0; } } -start client c1 -connect "${tmpdir}/v1.sock" { loop 20 { txreq -url /c1 rxresp expect resp.bodylen == 4 expect resp.status == 200 } } -start client c2 -connect "${tmpdir}/v1.sock" { loop 20 { txreq -url /c2 rxresp expect resp.bodylen == 4 expect resp.status == 200 } } -start client c3 -connect "${tmpdir}/v1.sock" { loop 20 { txreq -url /c3 rxresp expect resp.bodylen == 4 expect resp.status == 200 } } -start client c1 -wait client c2 -wait client c3 -wait varnish-7.5.0/bin/varnishtest/tests/b00060.vtc000066400000000000000000000011261457605730600210230ustar00rootroot00000000000000varnishtest "VSL tags affected by the use of UDS addresses" server s1 -listen "${tmpdir}/s1.sock" { rxreq txresp } -start varnish v1 -arg "-a foo=${tmpdir}/v1.sock" -vcl+backend {} -start client c1 -connect "${tmpdir}/v1.sock" { txreq rxresp } -run logexpect l1 -v v1 -d 1 -g session { expect 0 1000 Begin expect 0 = SessOpen "^0.0.0.0 0 foo 0.0.0.0 0" } -run logexpect l2 
-v v1 -d 1 -g vxid { expect * * Begin ^req expect * = ReqStart "^0.0.0.0 0 foo$" } -run logexpect l3 -v v1 -d 1 -g vxid { expect * * Begin ^bereq expect * = BackendOpen "0.0.0.0 0 0.0.0.0 0 connect$" } -run varnish-7.5.0/bin/varnishtest/tests/b00061.vtc000066400000000000000000000004111457605730600210200ustar00rootroot00000000000000varnishtest "-b arg with a Unix domain socket" server s1 -listen "${tmpdir}/s1.sock" { rxreq txresp -hdr "s1: got it" } -start varnish v1 -arg "-b ${s1_sock}" -start client c1 { txreq rxresp expect resp.status == 200 expect resp.http.s1 == "got it" } -run varnish-7.5.0/bin/varnishtest/tests/b00062.vtc000066400000000000000000000034511457605730600210300ustar00rootroot00000000000000varnishtest "Test that we properly wait for certain 304 cases" server s1 { rxreq txresp -hdr "Last-Modified: Wed, 11 Sep 2013 13:36:55 GMT" \ -hdr "Geoff: Still Rules" \ -bodylen 130560 # 2*64k-512 ^^^ see sml_trimstore() st->space - st->len < 512 # The IMS request we will spend some time to process for the sake of # this test. rxreq expect req.http.if-modified-since == "Wed, 11 Sep 2013 13:36:55 GMT" delay 1 txresp -status 304 # Last request, to a different URL to catch it if varnish asks for "/" too many times rxreq expect req.url == "/2" txresp -body "x" } -start varnish v1 -arg "-p fetch_maxchunksize=64k" -vcl+backend { sub vcl_backend_response { set beresp.ttl = 1s; set beresp.grace = 1s; set beresp.keep = 1m; set beresp.http.was-304 = beresp.was_304; } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.http.Geoff == "Still Rules" expect resp.bodylen == 130560 } -run # let the object's ttl and grace expire delay 2.1 # first client to ask for kept object - this should start the second request client c2 { txreq rxresp # we did not disable grace in the request, so we should get the graced object here expect resp.status == 200 expect resp.http.Geoff == "Still Rules" expect resp.bodylen == 130560 } -start delay .1 # second client to ask for the kept object. Here we want to wait until the backend fetch completes, not do a pass. client c3 { txreq rxresp expect resp.status == 200 expect resp.http.Geoff == "Still Rules" expect resp.bodylen == 130560 } -start client c2 -wait client c3 -wait # Finally the request to "/2". The expect in the server block makes sure that # there were no extra requests to "/" from varnish. 
client c4 { txreq -url "/2" rxresp expect resp.status == 200 expect resp.body == "x" } -run varnish-7.5.0/bin/varnishtest/tests/b00063.vtc000066400000000000000000000036201457605730600210270ustar00rootroot00000000000000varnishtest "Abandon background fetch when backend serves 5xx" barrier b1 cond 2 barrier b2 cond 3 server s1 { # This is what we want to get to all client requests below rxreq expect req.url == "/1" txresp -body "1" # 503s will be abandoned when we have a bgfetch rxreq expect req.url == "/1" txresp -status 503 -body "2" # varnish will disconnect on a 503 accept rxreq expect req.url == "/1" # wait until varnish has delivered 200 before replying # with the 404 barrier b2 sync delay .1 # this response will not be abandoned txresp -status 404 -reason "Not Found" -body "3" # some other resource at the end rxreq expect req.url == "/2" txresp -body "4" } -start varnish v1 -cliok "param.set vsl_mask +ExpKill" varnish v1 -vcl+backend { sub vcl_backend_response { if (beresp.status >= 500 && bereq.is_bgfetch) { return (abandon); } if (beresp.status >= 400) { set beresp.ttl = 1m; } else { set beresp.ttl = 1ms; } set beresp.grace = 1m; } } -start logexpect l1 -v v1 -g raw { expect * * ExpKill EXP_Inspect expect * * ExpKill EXP_When expect * * ExpKill EXP_Inspect } -start client c1 { txreq -url "/1" rxresp expect resp.status == 200 expect resp.body == "1" delay .2 txreq -url "/1" rxresp expect resp.status == 200 expect resp.body == "1" delay .2 barrier b1 sync txreq -url "/1" rxresp expect resp.status == 200 expect resp.body == "1" barrier b2 sync } -start client c2 { barrier b1 sync txreq -url "/1" rxresp expect resp.status == 200 expect resp.body == "1" barrier b2 sync } -start client c1 -wait client c2 -wait # Make sure the expiry has happened logexpect l1 -wait client c3 { delay .1 # We should now get a HIT on the 404: txreq -url "/1" rxresp expect resp.status == 404 expect resp.body == "3" # do a different resource to make sure we got the right number of reqs to /1 txreq -url "/2" rxresp expect resp.body == "4" } -run varnish-7.5.0/bin/varnishtest/tests/b00064.vtc000066400000000000000000000057271457605730600210420ustar00rootroot00000000000000varnishtest "Test that req.grace will hold a client when a miss is anticipated" barrier b1 cond 2 server s1 { rxreq expect req.url == "/" txresp -body "0" rxreq expect req.url == "/" txresp -body "1" # second time we get a request, we use some time to serve it rxreq expect req.url == "/" barrier b1 sync delay .1 txresp -body "2" # Last request, to a different URL to catch it if varnish asks for "/" too many times rxreq expect req.url == "/2" txresp -body "x" } -start varnish v1 -vcl+backend { import std; sub vcl_recv { # When we know in vcl_recv that we will have no grace, it is now # possible to signal this to the lookup function: if (req.http.X-no-grace) { set req.grace = 0s; } } sub vcl_hit { set req.http.X-grace = obj.grace; } sub vcl_backend_response { set beresp.ttl = 1ms; set beresp.grace = 1m; if (bereq.is_bgfetch) { set beresp.http.X-was-bgfetch = "1"; } } sub vcl_deliver { if (req.http.X-grace) { set resp.http.X-grace = req.http.X-grace; set resp.http.X-req-grace = req.grace; } } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.body == "0" expect resp.http.X-grace == # let the object's ttl expire delay .2 # get a genuinely fresh object by disabling grace # we will not get to vcl_hit to see the grace txreq -hdr "X-no-grace: true" rxresp expect resp.status == 200 expect resp.body == "1" expect resp.http.X-grace == 
expect resp.http.X-req-grace == expect resp.http.X-was-bgfetch == } -run # let the latest object's ttl expire. delay .2 varnish v1 -expect n_object == 1 # c2 asks for the object under grace client c2 { txreq rxresp # we did not disable grace in the request, so we should get the graced object here expect resp.status == 200 expect resp.body == "1" expect resp.http.X-grace == "60.000" expect resp.http.X-req-grace < 0. expect resp.http.X-was-bgfetch == } -run # c3 asks for graced object, but now we disable grace. The c2 client # started the background fetch, which will take a long time (until c4 # has gotten its reply). client c3 { txreq -hdr "X-no-grace: true" rxresp expect resp.status == 200 # Here we have disable grace and should get the object from the background fetch, # which will take us into vcl_hit expect resp.body == "2" expect resp.http.X-grace == "60.000" expect resp.http.X-req-grace == "0.000" expect resp.http.X-was-bgfetch == "1" } -start delay .1 # c4 does not disable grace, and should get the grace object even # though c3 is waiting on the background thread to deliver a new # version. client c4 { txreq rxresp barrier b1 sync expect resp.status == 200 # We should get what c1 got in the very beginning expect resp.body == "1" expect resp.http.X-grace == "60.000" expect resp.http.X-req-grace < 0. expect resp.http.X-was-bgfetch == } -start client c3 -wait client c4 -wait client c5 { txreq -url "/2" rxresp expect resp.status == 200 expect resp.body == "x" } -run varnish-7.5.0/bin/varnishtest/tests/b00065.vtc000066400000000000000000000011141457605730600210250ustar00rootroot00000000000000varnishtest "Check that HEAD+pass returns Content-Length if backend provides it" server s1 { rxreq expect req.method == "GET" txresp -bodylen 5 rxreq expect req.method == "HEAD" txresp -nolen -hdr "Content-Length: 6" } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/2") { return (pass); } } sub vcl_backend_response { set beresp.do_stream = false; } } -start client c1 { txreq -req HEAD rxresp -no_obj expect resp.http.content-length == 5 } -run client c1 { txreq -req HEAD -url /2 rxresp -no_obj expect resp.http.content-length == 6 } -run varnish-7.5.0/bin/varnishtest/tests/b00066.vtc000066400000000000000000000040171457605730600210330ustar00rootroot00000000000000varnishtest "Test CC:max-age, CC:s-maxage and Expires handling" server s1 { rxreq txresp -hdr "Cache-Control: max-age=-2" rxreq txresp -hdr "Cache-Control: max-age=a2" rxreq txresp -hdr "Cache-Control: max-age=2a" rxreq txresp -hdr "Cache-Control: s-maxage=-2" rxreq txresp -hdr "Cache-Control: s-maxage=a2" rxreq txresp -hdr "Cache-Control: s-maxage=2a" rxreq txresp -hdr "Expires: THU, 18 Aug 2050 02:01:18 GMT" rxreq txresp -hdr "Expires: Thu, 18 AUG 2050 02:01:18 GMT" rxreq txresp -hdr "Expires: Thu, 18 Aug 2050 02:01:18 gMT" rxreq txresp -hdr "Cache-Control: max-age=5, s-maxage=1" rxreq txresp -hdr "Cache-Control: s-maxage=2, max-age=5" rxreq txresp \ -hdr "Cache-Control: max-age=5" \ -hdr "Cache-Control: s-maxage=3" } -start varnish v1 -arg "-pdefault_ttl=0" -vcl+backend { sub vcl_backend_response { set beresp.http.ttl = beresp.ttl; set beresp.uncacheable = true; } } -start client c1 { # negative max-age txreq rxresp expect resp.http.ttl == 0.000 # invalid max-age - leading alpha txreq rxresp expect resp.http.ttl == 0.000 # invalid max-age - trailing alpha txreq rxresp expect resp.http.ttl == 0.000 # negative s-maxage txreq rxresp expect resp.http.ttl == 0.000 # invalid s-maxage - leading alpha txreq rxresp expect resp.http.ttl 
== 0.000 # invalid s-maxage - trailing alpha txreq rxresp expect resp.http.ttl == 0.000 # Expires using wrong case (weekday) txreq rxresp expect resp.http.ttl == 0.000 # Expires using wrong case (month) txreq rxresp expect resp.http.ttl == 0.000 # Expires using wrong case (tz) txreq rxresp expect resp.http.ttl == 0.000 # s-maxage wins over longer max-age txreq rxresp expect resp.http.ttl == 1.000 # s-maxage wins over longer max-age - reversed txreq rxresp expect resp.http.ttl == 2.000 # s-maxage wins over longer max-age - multiple headers txreq rxresp expect resp.http.ttl == 3.000 } -run varnish v1 -expect *.s1.req == 12 varnish v1 -expect beresp_uncacheable == 12 varnish v1 -expect beresp_shortlived == 0 varnish-7.5.0/bin/varnishtest/tests/b00067.vtc000066400000000000000000000025541457605730600210400ustar00rootroot00000000000000varnishtest "Check timeout_idle" varnish v1 -arg "-p timeout_idle=2" \ -arg "-a ${listen_addr}" \ -arg "-a ${tmpdir}/v1.sock" \ -vcl { backend dummy { .host = "${bad_backend}"; } sub vcl_deliver { if (req.url == "/sess") { set sess.timeout_idle = 4s; } } sub vcl_backend_error { set beresp.status = 200; set beresp.ttl = 1h; } } -start client c1 { txreq rxresp delay 0.2 txreq rxresp expect_close } -start client c2 { txreq -url "/sess" rxresp delay 1.2 txreq rxresp expect_close } -start client c3 { loop 3 { # send a periodic CRLF delay 0.5 sendhex 0d0a } expect_close } -start client c4 { txreq rxresp loop 3 { # send a periodic CRLF delay 0.5 sendhex 0d0a } expect_close } -start client c1 -wait client c2 -wait client c3 -wait client c4 -wait client c1u -connect "${tmpdir}/v1.sock" { txreq rxresp delay 0.2 txreq rxresp expect_close } -start client c2u -connect "${tmpdir}/v1.sock" { txreq -url "/sess" rxresp delay 1.2 txreq rxresp expect_close } -start client c3u -connect "${tmpdir}/v1.sock" { loop 3 { # send a periodic CRLF delay 0.5 sendhex 0d0a } expect_close } -start client c4u -connect "${tmpdir}/v1.sock" { txreq rxresp loop 3 { # send a periodic CRLF delay 0.5 sendhex 0d0a } expect_close } -start client c1u -wait client c2u -wait client c3u -wait client c4u -wait varnish-7.5.0/bin/varnishtest/tests/b00068.vtc000066400000000000000000000045561457605730600210450ustar00rootroot00000000000000varnishtest "Check timeout_linger" feature !workspace_emulator # XXX this test exploits the fact that the struct waited is # left near the free pointer of the session ws when a session # made a tour over the waiter # # Would we want VSL Info about waiter involvement? 
# varnish v1 -arg "-p timeout_linger=1" \ -arg "-a ${listen_addr}" \ -arg "-a ${tmpdir}/v1.sock" \ -vcl { import std; import vtc; import blob; backend dummy None; sub vcl_recv { std.log(blob.encode(encoding=HEX, blob=vtc.workspace_dump(session, f))); if (req.url == "/longer") { set sess.timeout_linger = 2s; } } sub vcl_backend_error { set beresp.status = 200; set beresp.ttl = 1h; } } -start logexpect l1 -v v1 -g session -q "SessOpen ~ a0 and ReqURL ~ \"^/$\"" { expect * * VCL_call {^RECV} expect 0 = VCL_Log "^0{128}$" expect * * VCL_call {^RECV} expect 0 = VCL_Log "^0{128}$" expect * * VCL_call {^RECV} expect 0 = VCL_Log "[1-9a-f]" } -start logexpect l2 -v v1 -g session -q "SessOpen ~ a1 and ReqURL ~ \"^/$\"" { expect * * VCL_call {^RECV} expect 0 = VCL_Log "^0{128}$" expect * * VCL_call {^RECV} expect 0 = VCL_Log "^0{128}$" expect * * VCL_call {^RECV} expect 0 = VCL_Log "[1-9a-f]" } -start logexpect l3 -v v1 -g session -q "SessOpen ~ a0 and ReqURL ~ \"^/longer\"" { expect * * VCL_call {^RECV} expect 0 = VCL_Log "^0{128}$" expect * * VCL_call {^RECV} expect 0 = VCL_Log "^0{128}$" expect * * VCL_call {^RECV} expect 0 = VCL_Log "[1-9a-f]" } -start logexpect l4 -v v1 -g session -q "SessOpen ~ a1 and ReqURL ~ \"^/longer\"" { expect * * VCL_call {^RECV} expect 0 = VCL_Log "^0{128}$" expect * * VCL_call {^RECV} expect 0 = VCL_Log "^0{128}$" expect * * VCL_call {^RECV} expect 0 = VCL_Log "[1-9a-f]" } -start client c1 { txreq rxresp delay 0.2 txreq rxresp delay 2.0 txreq rxresp } -start client c1u -connect "${tmpdir}/v1.sock" { txreq rxresp delay 0.2 txreq rxresp delay 2.0 txreq rxresp } -start client c2 { txreq -url /longer rxresp delay 0.2 txreq -url /longer rxresp delay 3.0 txreq -url /longer rxresp } -start client c2u -connect "${tmpdir}/v1.sock" { txreq -url /longer rxresp delay 0.2 txreq -url /longer rxresp delay 3.0 txreq -url /longer rxresp } -start client c1 -wait client c1u -wait client c2 -wait client c2u -wait logexpect l1 -wait logexpect l2 -wait logexpect l3 -wait logexpect l4 -wait varnish-7.5.0/bin/varnishtest/tests/b00069.vtc000066400000000000000000000013711457605730600210360ustar00rootroot00000000000000varnishtest "HTTP/1 parsing checks" # Some tricky requests that have been known to cause parsing errors in the past. server s1 -repeat 3 { rxreq txresp } -start varnish v1 -vcl+backend "" -start # This test checks a bug that was dependent on the contents of the buffer left behind # by the previous request client c1 { send "POST / HTTP/1.1\r\nHost: asdf.com\r\nFoo: baar\r\n\r\n\r\n\r\n\r\n" rxresp expect resp.status == 200 send "POST / HTTP/1.1\r\nHost: asdf.com\r\nAsdf: b\n \r\n\r\nSj\r" rxresp expect resp.status == 200 } -run # This tests that the line continuation handling doesn't clear out the end of headers # [CR]LF client c1 { send "POST / HTTP/1.1\r\nHost: asdf.com\r\nAsdf: b\n \r\n\r\nSj" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/b00070.vtc000066400000000000000000000007151457605730600210270ustar00rootroot00000000000000varnishtest "client.ip vs. 
string session attrs" varnish v1 -vcl { import debug; import std; backend dummy None; sub vcl_recv { return (synth(200)); } sub vcl_synth { set resp.http.ci = client.ip; set resp.http.s-ci = debug.client_ip(); set resp.http.pi = std.port(client.ip); set resp.http.s-pi = debug.client_port(); } } -start client c1 { txreq rxresp expect resp.http.ci == resp.http.s-ci expect resp.http.pi == resp.http.s-pi } -run varnish-7.5.0/bin/varnishtest/tests/b00071.vtc000066400000000000000000000003721457605730600210270ustar00rootroot00000000000000varnishtest "varnish-cli pid command" varnish v1 -cliexpect "^Master: +[0-9]+\n$" pid varnish v1 -cliok "pid -j" varnish v1 -vcl {backend be none;} -start varnish v1 -cliexpect "^Master: +[0-9]+\nWorker: +[0-9]+\n$" pid varnish v1 -cliok "pid -j" varnish-7.5.0/bin/varnishtest/tests/b00072.vtc000066400000000000000000000003611457605730600210260ustar00rootroot00000000000000varnishtest "failure in vcl_recv" varnish v1 -vcl { import vtc; backend be none; sub vcl_recv { return (fail); } sub vcl_hash { vtc.panic("unreachable"); } } -start client c1 { txreq rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/b00073.vtc000066400000000000000000000025231457605730600210310ustar00rootroot00000000000000varnishtest "backend connection close" server s1 { rxreq expect req.http.connection ~ close expect req.http.beresp-connection !~ close txresp expect_close accept rxreq expect req.http.connection !~ close expect req.http.beresp-connection ~ close txresp expect_close accept rxreq expect req.http.connection !~ close expect req.http.beresp-connection !~ close txresp -hdr "connection: close" expect_close accept rxreq expect req.http.connection ~ close expect req.http.unset-connection == true txresp expect_close } -start varnish v1 -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_fetch { set bereq.http.connection = bereq.http.bereq-connection; } sub vcl_backend_response { if (bereq.http.unset-connection) { unset bereq.http.connection; } # NB: this overrides unconditionally on purpose set beresp.http.connection = bereq.http.beresp-connection; } } -start client c1 { txreq -hdr "bereq-connection: close, x-varnish" rxresp expect resp.status == 200 txreq -hdr "beresp-connection: close, x-varnish" rxresp expect resp.status == 200 txreq rxresp expect resp.status == 200 txreq -hdr "bereq-connection: close" -hdr "unset-connection: true" rxresp expect resp.status == 200 } -run server s1 -wait varnish v1 -expect MAIN.backend_recycle == 0 varnish v1 -expect VBE.vcl1.s1.conn == 0 varnish-7.5.0/bin/varnishtest/tests/b00074.vtc000066400000000000000000000023331457605730600210310ustar00rootroot00000000000000varnishtest "Test logexpect fail command" # NOTE: this is a test of varnishtest itself, so it would fall under # the "a" category, but has been filed under the "b" category because # it needs varnishd varnish v1 -vcl { import std; backend proforma None; sub vcl_init { std.log("i0"); std.log("i1"); std.log("i2"); std.log("i3"); } sub vcl_recv { std.log("r0"); std.log("r1"); std.log("r2"); std.log("r3"); } } -start logexpect l1 -v v1 -g vxid -q "vxid == 1001" { fail add * Error "out of workspace" fail add * VCL_Error "Workspace overflow" expect * 1001 End fail clear } -start logexpect l2 -v v1 -err -g vxid -q "vxid == 1001" { fail add * VCL_Log ^r2 } -start logexpect l3 -v v1 -g vxid -q "vxid == 1001" { fail add * VCL_Log ^r2 expect * 1001 VCL_Log ^r0 expect 0 = VCL_Log ^r1 fail clear expect 1 = VCL_Log ^r3 } -start client c1 { txreq rxresp } -run logexpect l4 -v 
v1 -d 1 -g raw { fail add * VCL_Log ^i2 expect * 0 VCL_Log ^i0 expect 0 = VCL_Log ^i1 fail clear expect 1 = VCL_Log ^i3 fail add * Error "out of workspace" fail add * VCL_Error "Workspace overflow" expect * 1000 End fail clear } -start logexpect l1 -wait logexpect l2 -wait logexpect l3 -wait logexpect l4 -wait varnish-7.5.0/bin/varnishtest/tests/b00075.vtc000066400000000000000000000007511457605730600210340ustar00rootroot00000000000000varnishtest "Test backend preamble" server s1 { rxreq expect req.method == "POST" expect req.url == "/preamble" expect req.proto == "HTTP/7.3" expect req.http.Header == "42" rxreq expect req.method == "GET" expect req.url == "/" expect req.proto == "HTTP/1.1" txresp } -start varnish v1 -vcl { backend s1 { .host = "${s1_sock}"; .preamble = :UE9TVCAvcHJlYW1ibGUgSFRUUC83LjMKSGVhZGVyOiA0MgoKCg==: ; } } -start client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/b00076.vtc000066400000000000000000000021421457605730600210310ustar00rootroot00000000000000varnishtest "Non-symbolic HTTP headers names" varnish v1 -errvcl "Invalid character '\\n' in header name" { backend be none; sub vcl_recv { set req.http.{"line break"} = "invalid"; } } varnish v1 -errvcl "Expected '=' got '.'" { backend be none; sub vcl_recv { set req."http".wrong-quote = "invalid"; } } varnish v1 -syntax 4.0 -errvcl "Quoted headers are available for VCL >= 4.1" { backend be none; sub vcl_recv { set req.http."quoted" = "invalid"; } } varnish v1 -vcl { import std; backend be none; sub vcl_recv { std.collect(req.http."..."); return (synth(200)); } sub vcl_synth { set resp.http."123" = "456"; set resp.http."456" = resp.http."123"; set resp.http.{"!!!"} = "???"; set resp.http."""resp.http.foo""" = "bar"; set resp.http.bar = resp.http."resp.http.foo".upper(); set resp.http."..." = req.http."..."; } } -start client c1 { txreq -hdr "...: a" -hdr "...: b" rxresp expect resp.http.123 == 456 expect resp.http.456 == 456 expect resp.http.!!! == ??? expect resp.http.resp.http.foo == bar expect resp.http.bar == BAR expect resp.http.... 
== "a, b" } -run varnish-7.5.0/bin/varnishtest/tests/b00077.vtc000066400000000000000000000023331457605730600210340ustar00rootroot00000000000000varnishtest "SLT_Hit ongoing fetch" barrier b1 cond 2 -cyclic barrier b2 cond 2 -cyclic server s1 { rxreq txresp -nolen -hdr "Content-Length: 10" send hello barrier b1 sync barrier b2 sync send world rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunked hello barrier b1 sync barrier b2 sync chunked world chunked "" } -start varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set thread_pools 1" varnish v1 -vcl+backend "" -start client c1 { txreq rxresp expect resp.body == helloworld } -start barrier b1 sync logexpect l1 -v v1 -g raw { # vxid TTL grace keep fetch length expect * 1004 Hit "^1002 [0-9.]+ 10.000000 0.000000 [0-5] 10$" } -start client c2 { txreq rxresp expect resp.body == helloworld } -start logexpect l1 -wait barrier b2 sync client c1 -wait client c2 -wait # Recycle almost everything for the chunked variant varnish v1 -cliok "ban obj.status != 0" client c1 -start barrier b1 sync logexpect l2 -v v1 -g raw { # vxid TTL grace keep fetch expect * 1009 Hit "^1007 [0-9.]+ 10.000000 0.000000 [0-5]$" } -start client c2 -start logexpect l2 -wait barrier b2 sync client c1 -wait client c2 -wait varnish-7.5.0/bin/varnishtest/tests/b00078.vtc000066400000000000000000000013231457605730600210330ustar00rootroot00000000000000varnishtest "deprecated parameters" varnish v1 -arg "-b none" -start # permanent alias varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliexpect "[+]syncvsl" "param.show deprecated_dummy" varnish v1 -cliexpect "[+]syncvsl" "param.show -j deprecated_dummy" shell -err { varnishadm -n ${v1_name} "param.show" | grep deprecated_dummy } shell -err { varnishadm -n ${v1_name} "param.show -l" | grep deprecated_dummy } shell -err { varnishadm -n ${v1_name} "param.show -j" | grep deprecated_dummy } # temporary aliases varnish v1 -cliexpect vcc_feature "param.show vcc_allow_inline_c" varnish v1 -cliexpect vcc_feature "param.show vcc_err_unref" varnish v1 -cliexpect vcc_feature "param.show vcc_unsafe_path" varnish-7.5.0/bin/varnishtest/tests/b00079.vtc000066400000000000000000000036551457605730600210460ustar00rootroot00000000000000varnishtest "Test backend IMS with weak and strong LM" server s1 { rxreq txresp -hdr "Last-Modified: Wed, 11 Sep 2013 13:36:55 GMT" -nodate -body "1" # When origin does not send a Date, varnish inserts one, prompting IMS rxreq expect req.http.if-modified-since == "Wed, 11 Sep 2013 13:36:55 GMT" txresp -status 304 -hdr "Last-Modified: Wed, 11 Sep 2013 13:36:55 GMT" \ -hdr "Date: Wed, 11 Sep 2013 13:36:55 GMT" \ # LM was the same as Date rxreq expect req.http.if-modified-since == txresp -hdr "Last-Modified: Wed, 11 Sep 2013 13:36:55 GMT" \ -hdr "Date: Wed, 11 Sep 2013 13:36:56 GMT" \ -body "2" # LM was one second older than Date rxreq expect req.http.if-modified-since == "Wed, 11 Sep 2013 13:36:55 GMT" txresp -status 304 -hdr "Last-Modified: Wed, 11 Sep 2013 13:36:55 GMT" \ -hdr "Date: Wed, 11 Sep 2013 13:36:55 GMT" \ -hdr {ETag: "foo"} # LM was the same as Date, but we had an ETag, prompting INM rxreq expect req.http.if-modified-since == expect req.http.if-none-match == {"foo"} txresp -status 304 -hdr "Last-Modified: Wed, 11 Sep 2013 13:36:55 GMT" \ -hdr "Date: Wed, 11 Sep 2013 13:36:55 GMT" \ -hdr {ETag: "foo"} } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 1ms; set beresp.grace = 0s; set beresp.keep = 1m; set beresp.http.was-304 = beresp.was_304; } } 
-start client c1 { txreq rxresp expect resp.status == 200 expect resp.body == "1" expect resp.http.was-304 == "false" delay 0.1 txreq rxresp expect resp.status == 200 expect resp.body == "1" expect resp.http.was-304 == "true" delay 0.1 txreq rxresp expect resp.status == 200 expect resp.body == "2" expect resp.http.was-304 == "false" delay 0.1 txreq rxresp expect resp.status == 200 expect resp.body == "2" expect resp.http.was-304 == "true" delay 0.1 txreq rxresp expect resp.status == 200 expect resp.body == "2" expect resp.http.was-304 == "true" } -run varnish-7.5.0/bin/varnishtest/tests/b00080.vtc000066400000000000000000000041571457605730600210340ustar00rootroot00000000000000varnishtest "Keep-Alive is a hop-by-hop header" server s1 { rxreq expect req.http.req-keep-alive == "true" expect req.http.bereq-keep-alive == "false" expect req.http.Keep-Alive == txresp -hdr "Keep-Alive: 1" rxreq expect req.http.req-keep-alive == "true" expect req.http.bereq-keep-alive == "false" expect req.http.Keep-Alive == txresp -hdr "Connection: Keep-Alive" -hdr "Keep-Alive: 2" rxreq expect req.http.req-keep-alive == "true" expect req.http.bereq-keep-alive == "false" expect req.http.Keep-Alive == expect req.http.Cookie == "3" txresp -hdr "Keep-Alive: 3" rxreq expect req.http.req-keep-alive == "true" expect req.http.bereq-keep-alive == "false" expect req.http.Keep-Alive == expect req.http.Cookie == "4" txresp -hdr "Connection: Keep-Alive" -hdr "Keep-Alive: 4" } -start varnish v1 -vcl+backend { sub vcl_recv { set req.http.req-keep-alive = !(!req.http.Keep-Alive); } sub vcl_backend_fetch { set bereq.http.bereq-keep-alive = !(!bereq.http.Keep-Alive); } sub vcl_backend_response { set beresp.http.beresp-keep-alive = !(!beresp.http.Keep-Alive); } sub vcl_deliver { set resp.http.resp-keep-alive = !(!resp.http.Keep-Alive); } } -start client c1 { txreq -url "/1" -hdr "Keep-Alive: 1" rxresp expect resp.status == 200 expect resp.http.beresp-keep-alive == "true" expect resp.http.resp-keep-alive == "false" expect resp.http.Keep-Alive == txreq -url "/2" -hdr "Connection: Keep-Alive" -hdr "Keep-Alive: 2" rxresp expect resp.status == 200 expect resp.http.beresp-keep-alive == "true" expect resp.http.resp-keep-alive == "false" expect req.http.Keep-Alive == txreq -url "/3" -hdr "Keep-Alive: 3" -hdr "Cookie: 3" rxresp expect resp.status == 200 expect resp.http.beresp-keep-alive == "true" expect resp.http.resp-keep-alive == "false" expect resp.http.Keep-Alive == txreq -url "/4" -hdr "Connection: Keep-Alive" -hdr "Keep-Alive: 4" -hdr "Cookie: 4" rxresp expect resp.status == 200 expect resp.http.beresp-keep-alive == "true" expect resp.http.resp-keep-alive == "false" expect req.http.Keep-Alive == } -run varnish-7.5.0/bin/varnishtest/tests/b00081.vtc000066400000000000000000000106031457605730600210260ustar00rootroot00000000000000varnishtest "test iovec flush counter" server s0 { rxreq txresp } -dispatch # http1_iovs # Simple request/response varnish v1 -vcl+backend {} -start varnish v1 -cliok "param.set http1_iovs 30" client c1 { txreq rxresp } -run varnish v1 -expect MAIN.http1_iovs_flush == 0 # Decreasing http1_iovs causes premature flushes varnish v1 -cliok "param.set http1_iovs 5" client c1 { txreq rxresp } -run varnish v1 -expect MAIN.http1_iovs_flush > 0 ################################################## # Increase number of headers on fetch side varnish v1 -cliok "param.set http1_iovs 30" varnish v1 -stop varnish v1 -vcl+backend { sub vcl_backend_fetch { set bereq.http.hdr1 = "hdr"; set bereq.http.hdr2 = "hdr"; set bereq.http.hdr3 
= "hdr"; set bereq.http.hdr4 = "hdr"; set bereq.http.hdr5 = "hdr"; set bereq.http.hdr6 = "hdr"; set bereq.http.hdr7 = "hdr"; set bereq.http.hdr8 = "hdr"; set bereq.http.hdr9 = "hdr"; set bereq.http.hdr10 = "hdr"; set bereq.http.hdr11 = "hdr"; set bereq.http.hdr12 = "hdr"; set bereq.http.hdr13 = "hdr"; set bereq.http.hdr14 = "hdr"; set bereq.http.hdr15 = "hdr"; set bereq.http.hdr16 = "hdr"; set bereq.http.hdr17 = "hdr"; set bereq.http.hdr18 = "hdr"; set bereq.http.hdr19 = "hdr"; set bereq.http.hdr20 = "hdr"; } } -start client c1 { txreq rxresp } -run # http1_iovs parameter does not affect fetch varnish v1 -expect MAIN.http1_iovs_flush == 0 ##################################################### # Increase number of headers on deliver side varnish v1 -stop varnish v1 -vcl+backend { sub vcl_deliver { set resp.http.hdr1 = "hdr"; set resp.http.hdr2 = "hdr"; set resp.http.hdr3 = "hdr"; set resp.http.hdr4 = "hdr"; set resp.http.hdr5 = "hdr"; set resp.http.hdr6 = "hdr"; set resp.http.hdr7 = "hdr"; set resp.http.hdr8 = "hdr"; set resp.http.hdr9 = "hdr"; set resp.http.hdr10 = "hdr"; set resp.http.hdr11 = "hdr"; set resp.http.hdr12 = "hdr"; set resp.http.hdr13 = "hdr"; set resp.http.hdr14 = "hdr"; set resp.http.hdr15 = "hdr"; set resp.http.hdr16 = "hdr"; set resp.http.hdr17 = "hdr"; set resp.http.hdr18 = "hdr"; set resp.http.hdr19 = "hdr"; set resp.http.hdr20 = "hdr"; } } -start varnish v1 -cliok "param.set http1_iovs 30" client c1 { txreq rxresp } -run # http1_iovs parameter affects deliver varnish v1 -expect MAIN.http1_iovs_flush > 0 ################################################## # Compare with workspace_thread # 0.5k is enough for simplest request/response varnish v1 -cliok "param.reset http1_iovs" varnish v1 -stop varnish v1 -cliok "param.set workspace_thread 0.5k" varnish v1 -vcl+backend {} -start client c1 { txreq rxresp } -run varnish v1 -expect MAIN.http1_iovs_flush == 0 # Increase number of headers on fetch side varnish v1 -stop varnish v1 -vcl+backend { sub vcl_backend_fetch { set bereq.http.hdr1 = "hdr"; set bereq.http.hdr2 = "hdr"; set bereq.http.hdr3 = "hdr"; set bereq.http.hdr4 = "hdr"; set bereq.http.hdr5 = "hdr"; set bereq.http.hdr6 = "hdr"; set bereq.http.hdr7 = "hdr"; set bereq.http.hdr8 = "hdr"; set bereq.http.hdr9 = "hdr"; set bereq.http.hdr10 = "hdr"; set bereq.http.hdr11 = "hdr"; set bereq.http.hdr12 = "hdr"; set bereq.http.hdr13 = "hdr"; set bereq.http.hdr14 = "hdr"; set bereq.http.hdr15 = "hdr"; set bereq.http.hdr16 = "hdr"; set bereq.http.hdr17 = "hdr"; set bereq.http.hdr18 = "hdr"; set bereq.http.hdr19 = "hdr"; set bereq.http.hdr20 = "hdr"; } } -start client c1 { txreq rxresp } -run # workspace_thread parameter affects fetch varnish v1 -expect MAIN.http1_iovs_flush > 0 # Increase number of headers on deliver side varnish v1 -stop varnish v1 -vcl+backend { sub vcl_deliver { set resp.http.hdr1 = "hdr"; set resp.http.hdr2 = "hdr"; set resp.http.hdr3 = "hdr"; set resp.http.hdr4 = "hdr"; set resp.http.hdr5 = "hdr"; set resp.http.hdr6 = "hdr"; set resp.http.hdr7 = "hdr"; set resp.http.hdr8 = "hdr"; set resp.http.hdr9 = "hdr"; set resp.http.hdr10 = "hdr"; set resp.http.hdr11 = "hdr"; set resp.http.hdr12 = "hdr"; set resp.http.hdr13 = "hdr"; set resp.http.hdr14 = "hdr"; set resp.http.hdr15 = "hdr"; set resp.http.hdr16 = "hdr"; set resp.http.hdr17 = "hdr"; set resp.http.hdr18 = "hdr"; set resp.http.hdr19 = "hdr"; set resp.http.hdr20 = "hdr"; } } -start client c1 { txreq rxresp } -run # workspace_thread parameter affects deliver varnish v1 -expect MAIN.http1_iovs_flush > 0 
varnish-7.5.0/bin/varnishtest/tests/b00082.vtc000066400000000000000000000010311457605730600210220ustar00rootroot00000000000000varnishtest "Backend IMS 304 reponse with Content-Length 0" # this case tests invalid behaviour, which we should handle gracefully anyway server s1 { rxreq txresp -nolen -hdr "Content-Length: 0" -hdr {Etag: "foo"} rxreq txresp -status 304 -nolen -hdr "Content-Length: 0" -hdr {Etag: "foo"} } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 1s; } } -start client c1 { txreq rxresp txreq rxresp } -run delay 1 client c2 { txreq rxresp } -run delay 0.1 client c3 { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/b00083.vtc000066400000000000000000000016361457605730600210360ustar00rootroot00000000000000varnishtest "VCP FIN-WAIT2" server s1 { rxreq txresp # Leave the TCP connection open in the FIN-WAIT2 state delay 1000 } -start server s2 { rxreq txresp } -start varnish v1 -vcl { backend s1 { .host = "${s1_sock}"; } } -start # The waiter depend on the backend_idle_timeout for when to give up and # close the connection, so bump it up a bit. varnish v1 -cliok "param.set backend_idle_timeout 120" # The shutdown is done on the CLI thread, and it blocks until the waiter has # killed the connection. So bump cli_timeout up as well varnish v1 -cliok "param.set cli_timeout 120" client c1 { txreq -url "/" rxresp } -run varnish v1 -vcl { backend s2 { .host = "${s2_sock}"; } } varnish v1 -cliok "vcl.use vcl2" varnish v1 -cliok "vcl.discard vcl1" varnish v1 -expect n_backend == 1 varnish v1 -expect backend_conn == 1 varnish v1 -expect backend_reuse == 0 varnish v1 -expect backend_recycle == 1 varnish-7.5.0/bin/varnishtest/tests/c00000.vtc000066400000000000000000000011171457605730600210160ustar00rootroot00000000000000varnishtest "Built-in split subroutine" server s1 { rxreq txresp -hdr "age: 12" \ -hdr "cache-control: public, max-age=10, stale-while-revalidate=20" } -start varnish v1 -vcl+backend { sub vcl_req_cookie { return; # trust beresp headers } sub vcl_beresp_stale { if (beresp.ttl + beresp.grace > 0s) { return; # cache stale responses } } } -start client c1 { txreq rxresp expect resp.status == 200 txreq -hdr "cookie: unrelated=analytics" rxresp expect resp.status == 200 } -run varnish v1 -expect cache_hit == 1 varnish v1 -expect cache_hit == cache_hit_grace varnish-7.5.0/bin/varnishtest/tests/c00001.vtc000066400000000000000000000026161457605730600210240ustar00rootroot00000000000000varnishtest "Test VCL regsub()" server s1 { rxreq txresp \ -hdr "Foobar: _barf_" \ -hdr "Connection: close" \ -body "012345\n" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.http.Snafu1 = regsub(beresp.http.Foobar, "ar", "\0\0"); set beresp.http.Snafu2 = regsub(beresp.http.Foobar, "(b)(a)(r)(f)", "\4\3\2p"); set beresp.http.Snafu3 = regsub(beresp.http.Foobar, "(b)(a)(r)(f)", "\4\\\3\2p"); set beresp.http.Snafu4 = regsub(beresp.http.Foobar, "(b)(a)(r)(f)", "\4\&\3\2p"); set beresp.http.Snafu5 = regsub(beresp.http.Foobar, "(b)(a)(r)(f)", "\0\4\3\2\\p"); set beresp.http.Snafu6 = regsub(beresp.http.Foobar, "(b)(a)(r)(f)", "\4\&\3\2p\"); set beresp.http.Snafu7 = regsub(beresp.http.Foobar, "ar", bereq.http.nosuchheader); set beresp.http.inval = regsub(beresp.http.Foobar, "(b)(a)(r)f", "\9\8\7\6\5\4\3\2p"); } } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" expect resp.http.foobar == "_barf_" expect resp.http.snafu1 == "_bararf_" expect resp.http.snafu2 == "_frap_" expect resp.http.snafu3 == 
"_f\\rap_" expect resp.http.snafu4 == "_f&rap_" # NB: have to escape the \\ in the next two lines expect resp.http.snafu5 == "_barffra\\p_" expect resp.http.snafu6 == "_f&rap\\_" expect resp.http.snafu7 == "_bf_" expect resp.http.inval == "_rap_" } -run varnish-7.5.0/bin/varnishtest/tests/c00002.vtc000066400000000000000000000006261457605730600210240ustar00rootroot00000000000000varnishtest "Check that all thread pools all get started and get minimum threads" server s1 { rxreq txresp -hdr "Connection: close" -body "012345\n" } -start varnish v1 \ -arg "-p thread_pool_min=10" \ -arg "-p thread_pool_max=10" \ -arg "-p thread_pools=2" varnish v1 -vcl+backend {} -start client c1 { txreq -url "/" rxresp expect resp.status == 200 } -run varnish v1 -expect threads == 20 varnish-7.5.0/bin/varnishtest/tests/c00003.vtc000066400000000000000000000075371457605730600210350ustar00rootroot00000000000000varnishtest "Check that we fail to start with erroneous -a/-b arguments" # Duplicate -a arguments # XXX: this cannot be tested reliably, we tolerate port collision shell -err -match "have same address|already in use|Address in use" { if echo "${localhost}" | grep ":" >/dev/null ; then varnishd -d -a "[${localhost}]":38484 -a "[${localhost}]":38484 -b None else varnishd -d -a ${localhost}:38484 -a ${localhost}:38484 -b None fi } shell -err -match "have same address|already in use|Address in use" { varnishd -d -a ${tmpdir}/vtc.sock -a ${tmpdir}/vtc.sock -b None } # -a bad protocol specs shell -err -expect "Too many protocol sub-args" { varnishd -a ${localhost}:80000,PROXY,FOO -d } shell -err -expect "Too many protocol sub-args" { varnishd -a ${localhost}:80000,HTTP,FOO -d } # -a UDS path too long shell -err -match "Got no socket" { varnishd -a /Although/sizeof/sockaddr_un/sun_path/is/platform/specific/this/path/is/really/definitely/and/most/assuredly/too/long/on/any/platform/--/any/length/that/results/in/sizeof/sockaddr_un/being/greater/than/128/will/probably/be/enough/to/blow/it/up. 
-d } # -a relative path for a UDS address not permitted shell -err -expect "Unix domain socket addresses must be absolute paths" { varnishd -a foo/bar.sock -d } # -a args for UDS permissions not permitted with IP addresses shell -err -expect "Invalid sub-arg user=u" { varnishd -a ${localhost}:80000,user=u -d } shell -err -expect "Invalid sub-arg group=g" { varnishd -a ${localhost}:80000,group=g -d } shell -err -expect "Invalid sub-arg mode=660" { varnishd -a ${localhost}:80000,mode=660 -d } shell -err -expect "Invalid sub-arg mode=660" { varnishd -a @abstract,mode=660 -d } # Illegal mode sub-args shell -err -expect "Too many mode sub-args" { varnishd -a ${tmpdir}/vtc.sock,mode=660,mode=600 -d } shell -err -expect "Empty mode sub-arg" { varnishd -a ${tmpdir}/vtc.sock,mode= -d } shell -err -expect "Invalid mode sub-arg 666devilish" { varnishd -a ${tmpdir}/vtc.sock,mode=666devilish -d } shell -err -expect "Invalid mode sub-arg devilish" { varnishd -a ${tmpdir}/vtc.sock,mode=devilish -d } shell -err -expect "Invalid mode sub-arg 999" { varnishd -a ${tmpdir}/vtc.sock,mode=999 -d } shell -err -expect "Cannot parse mode sub-arg 7" { varnishd -a ${tmpdir}/vtc.sock,mode=77777777777777777777777777777777 -d } shell -err -expect "Cannot parse mode sub-arg -7" { varnishd -a ${tmpdir}/vtc.sock,mode=-77777777777777777777777777777777 -d } shell -err -expect "Mode sub-arg 1666 out of range" { varnishd -a ${tmpdir}/vtc.sock,mode=1666 -d } shell -err -expect "Mode sub-arg 0 out of range" { varnishd -a ${tmpdir}/vtc.sock,mode=0 -d } shell -err -expect "Mode sub-arg -1 out of range" { varnishd -a ${tmpdir}/vtc.sock,mode=-1 -d } ## ## user and group sub-args tested in c00086.vtc, where the user and group ## features are enabled. ## # Invalid sub-arg shell -err -expect "Invalid sub-arg foo=bar" { varnishd -a ${tmpdir}/vtc.sock,foo=bar -d } # A sub-arg without '=' is interpreted as a protocol name. shell -err -expect "Unknown protocol" { varnishd -a ${tmpdir}/vtc.sock,foobar -d } shell -err -expect "Invalid sub-arg userfoo=u" { varnishd -a ${tmpdir}/vtc.sock,userfoo=u -d } shell -err -expect "Invalid sub-arg groupfoo=g" { varnishd -a ${tmpdir}/vtc.sock,groupfoo=g -d } shell -err -expect "Invalid sub-arg modefoo=666" { varnishd -a ${tmpdir}/vtc.sock,modefoo=666 -d } shell -err -expect "Invalid sub-arg =foo" { varnishd -a ${tmpdir}/vtc.sock,=foo -d } # This requires non-local binds to be disabled. If you see this fail # and are on Linux, ensure /proc/net/ipv4/ip_nonlocal_bind is set to 0. 
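# (Typically this is the net.ipv4.ip_nonlocal_bind sysctl, usually exposed at /proc/sys/net/ipv4/ip_nonlocal_bind.)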
# All bad listen addresses shell -err -expect "Error: Could not get socket" { varnishd -F -a "${bad_ip}:0" -b '***' -n ${tmpdir} } # old style address list shell -err -expect "Unknown protocol" { varnishd -F -a "${listen_addr},${bad_ip}:0" -b '***' -n ${tmpdir} } varnish-7.5.0/bin/varnishtest/tests/c00004.vtc000066400000000000000000000040711457605730600210240ustar00rootroot00000000000000varnishtest "Test Vary functionality" barrier b1 cond 3 server s1 { rxreq expect req.http.foobar == "1" txresp -hdr "Vary: Foobar" -hdr "Snafu: 1" -body "1111\n" rxreq expect req.http.foobar == "2" txresp -hdr "Vary: Foobar" -hdr "Snafu: 2" -body "2222\n" rxreq expect req.http.foobar == "3" txresp -hdr "Vary: Foobar" -hdr "Snafu: 3" -body "3333\n" rxreq expect req.http.foobar == txresp -hdr "Vary: Foobar" -hdr "Snafu: 4" -body "4444\n" rxreq expect req.http.foobar == "" txresp -hdr "Vary: Foobar" -hdr "Snafu: 5" -body "5555\n" # #3858 test vary prediction turning out wrong # no Vary, HFM and waitinglist rxreq expect req.http.foobar == "x" barrier b1 sync txresp -hdr "Cache-Control: no-cache" rxreq expect req.http.foobar == "x" txresp -hdr "Cache-Control: no-cache" } -start varnish v1 -vcl+backend {} -start client c1 { txreq -hdr "Foobar: 1" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" expect resp.http.snafu == "1" txreq -hdr "Foobar: 2" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1003" expect resp.http.snafu == "2" txreq -hdr "Foobar: 3" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1005" expect resp.http.snafu == "3" txreq rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1007" expect resp.http.snafu == "4" txreq -hdr "Foobar: 1 " rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1009 1002" expect resp.http.snafu == "1" txreq -hdr "Foobar: 1 " rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1010 1002" expect resp.http.snafu == "1" # empty header != no header txreq -hdr "Foobar: " rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1011" expect resp.http.snafu == "5" } -run client c1 { txreq -hdr "Foobar: x" barrier b1 sync rxresp expect resp.status == 200 expect resp.http.Vary == } -start client c2 { txreq -hdr "Foobar: x" barrier b1 sync rxresp expect resp.status == 200 expect resp.http.Vary == } -start client c1 -wait client c2 -wait varnish-7.5.0/bin/varnishtest/tests/c00005.vtc000066400000000000000000000225161457605730600210310ustar00rootroot00000000000000varnishtest "Test ACLs" server s1 { rxreq expect req.url == "/" txresp -body "1111\n" rxreq expect req.url == "foo" txresp -body "2222\n" } -start varnish v1 -arg "-p feature=+trace" varnish v1 -errvcl {Name acl1 must have type 'acl'.} { sub vcl_recv { if (client.ip ~ acl1) { set req.url = "/"; } } backend acl1 { .host = "${localhost}"; } } varnish v1 -vcl+backend { sub vcl_recv { if (client.ip ~ acl1) { set req.url = "/"; } } acl acl1 { "${localhost}"; } sub vcl_deliver { set resp.http.acl = acl1; } } -start client c1 { txreq -url "foo" rxresp expect resp.status == 200 expect resp.http.acl == acl1 } -run varnish v1 -vcl+backend { acl acl1 { ! 
"${localhost}"; "0.0.0.0" / 0; "::" / 0; } sub vcl_recv { if (client.ip ~ acl1) { set req.url = "/"; } } sub vcl_deliver { set resp.http.acl = acl1; } } client c1 -run varnish v1 -cliok "param.set vsl_mask -VCL_trace" varnish v1 -vcl { import std; backend dummy None; acl acl1 +log -pedantic { # bad notation (confusing) "1.2.3.4"/24; "1.2.3.66"/26; # more specific wins "1.4.4.0"/22; "1.3.4.0"/23; "1.3.5.0"/26; "1.3.6.0"/25; "1.3.6.128"/25; "1.3.0.0"/21; "1.4.7"; "1.4.6.0"/24; # bad notation (confusing) "affe::affe:0304"/120; "affe::affe:0342"/122; # more specific wins "bad:cafe::"/32; "bad:cafe::"/31; } sub vcl_recv { return (synth(200)); } sub t { if (std.ip(req.http.ip) ~ acl1) { } } sub vcl_synth { # variables would be nice, but not in core (yet?) set req.http.ip = "1.2.3.0"; call t; set req.http.ip = "1.2.3.63"; call t; set req.http.ip = "1.2.3.64"; call t; set req.http.ip = "1.3.4.255"; call t; set req.http.ip = "1.3.5.0"; call t; set req.http.ip = "1.3.5.255"; call t; set req.http.ip = "1.3.6.0"; call t; set req.http.ip = "1.3.6.140"; call t; set req.http.ip = "1.3.7.255"; call t; set req.http.ip = "1.4.5.255"; call t; set req.http.ip = "1.4.6.64"; call t; set req.http.ip = "1.4.7.64"; call t; set req.http.ip = "affe::affe:0300"; call t; set req.http.ip = "affe::affe:033f"; call t; set req.http.ip = "affe::affe:0340"; call t; set req.http.ip = "0bad:cafe:1234::1"; call t; } } logexpect l1 -v v1 -g raw { expect * 1007 ReqHeader {^\Qip: 1.2.3.0\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.2.3.4"/24 fixed: 1.2.3.0/24\E$} expect 1 = ReqHeader {^\Qip: 1.2.3.63\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.2.3.4"/24 fixed: 1.2.3.0/24\E$} expect 1 = ReqHeader {^\Qip: 1.2.3.64\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.2.3.66"/26 fixed: 1.2.3.64/26\E$} expect 1 = ReqHeader {^\Qip: 1.3.4.255\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.3.4.0"/23\E$} expect 1 = ReqHeader {^\Qip: 1.3.5.0\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.3.5.0"/26\E$} expect 1 = ReqHeader {^\Qip: 1.3.5.255\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.3.4.0"/23\E$} expect 1 = ReqHeader {^\Qip: 1.3.6.0\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.3.6.0"/25\E$} expect 1 = ReqHeader {^\Qip: 1.3.6.140\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.3.6.128"/25\E$} expect 1 = ReqHeader {^\Qip: 1.3.7.255\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.3.0.0"/21\E$} expect 1 = ReqHeader {^\Qip: 1.4.5.255\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.4.4.0"/22\E$} expect 1 = ReqHeader {^\Qip: 1.4.6.64\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.4.6.0"/24\E$} expect 1 = ReqHeader {^\Qip: 1.4.7.64\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.4.7"\E$} expect 1 = ReqHeader {^\Qip: affe::affe:0300\E$} expect 0 = VCL_acl {^\QMATCH acl1 "affe::affe:0304"/120 fixed: affe::affe:300/120\E$} expect 1 = ReqHeader {^\Qip: affe::affe:033f\E$} expect 0 = VCL_acl {^\QMATCH acl1 "affe::affe:0304"/120 fixed: affe::affe:300/120\E$} expect 1 = ReqHeader {^\Qip: affe::affe:0340\E$} expect 0 = VCL_acl {^\QMATCH acl1 "affe::affe:0342"/122 fixed: affe::affe:340/122\E$} expect 1 = ReqHeader {^\Qip: 0bad:cafe:1234::1\E$} expect 0 = VCL_acl {^\QMATCH acl1 "bad:cafe::"/32\E$} } -start client c1 { txreq rxresp } -run logexpect l1 -wait varnish v1 -errvcl {Non-zero bits in masked part} { import std; backend dummy None; acl acl1 +pedantic { "1.2.3.4"/24; } sub vcl_recv { if (client.ip ~ acl1) {} } } varnish v1 -errvcl {Non-zero bits in masked part} { import std; backend dummy None; acl acl1 +pedantic { "affe::affe:0304"/120; } sub vcl_recv { if (client.ip ~ acl1) {} } } # this is both an OK test for pedantic 
and fold varnish v1 -vcl { import std; backend dummy None; acl acl1 +log +pedantic +fold { # bad notation (confusing) "1.2.3.0"/24; "1.2.3.64"/26; # all contained in 1.3.0.0/21 and 1.4.4.0/22 "1.4.4.0"/22; "1.3.4.0"/23; "1.3.5.0"/26; "1.3.6.0"/25; "1.3.6.128"/25; "1.3.0.0"/21; "1.4.7"; "1.4.6.0"/24; # right,left adjacent "2.3.2.0"/23; "2.3.0.0"/23; # left,right adjacent "2.3.4.0"/23; "2.3.6.0"/23; # 12/14 folded, not 10 "2.10.0.0"/15; "2.12.0.0"/15; "2.14.0.0"/15; # 226/227 folded, not 225 "2.225.0.0"/16; "2.226.0.0"/16; "2.227.0.0"/16; # phks test case "10.0.0.0"/23; "10.0.2.0"/23; "10.1.0.0"/24; "10.1.1.0"/24; "10.2.0.0"/25; "10.2.0.128"/25; # contained "bad:cafe::"/32; "bad:cafe::"/31; } sub vcl_recv { return (synth(200)); } sub t { if (std.ip(req.http.ip) ~ acl1) { } } sub vcl_synth { # variables would be nice, but not in core (yet?) set req.http.ip = "1.2.3.0"; call t; set req.http.ip = "1.2.3.63"; call t; set req.http.ip = "1.2.3.64"; call t; set req.http.ip = "1.3.4.255"; call t; set req.http.ip = "1.3.5.0"; call t; set req.http.ip = "1.3.5.255"; call t; set req.http.ip = "1.3.6.0"; call t; set req.http.ip = "1.3.6.140"; call t; set req.http.ip = "1.3.7.255"; call t; set req.http.ip = "1.4.5.255"; call t; set req.http.ip = "1.4.6.64"; call t; set req.http.ip = "1.4.7.64"; call t; set req.http.ip = "2.3.0.0"; call t; set req.http.ip = "2.3.5.255"; call t; set req.http.ip = "2.2.255.255";call t; set req.http.ip = "2.3.8.0"; call t; set req.http.ip = "2.9.1.1"; call t; set req.http.ip = "2.10.1.1"; call t; set req.http.ip = "2.12.0.0"; call t; set req.http.ip = "2.15.255.255";call t; set req.http.ip = "2.16.1.1"; call t; set req.http.ip = "2.224.1.1"; call t; set req.http.ip = "2.225.1.1"; call t; set req.http.ip = "2.226.1.1"; call t; set req.http.ip = "2.227.1.1"; call t; set req.http.ip = "10.0.3.255"; call t; set req.http.ip = "10.1.1.255"; call t; set req.http.ip = "10.2.0.255"; call t; set req.http.ip = "0bad:cafe:1234::1"; call t; } } logexpect l1 -v v1 -g raw { expect * 1009 ReqHeader {^\Qip: 1.2.3.0\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.2.3.0"/24\E$} expect 1 = ReqHeader {^\Qip: 1.2.3.63\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.2.3.0"/24\E$} expect 1 = ReqHeader {^\Qip: 1.2.3.64\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.2.3.0"/24\E$} expect 1 = ReqHeader {^\Qip: 1.3.4.255\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.3.0.0"/21\E$} expect 1 = ReqHeader {^\Qip: 1.3.5.0\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.3.0.0"/21\E$} expect 1 = ReqHeader {^\Qip: 1.3.5.255\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.3.0.0"/21\E$} expect 1 = ReqHeader {^\Qip: 1.3.6.0\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.3.0.0"/21\E$} expect 1 = ReqHeader {^\Qip: 1.3.6.140\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.3.0.0"/21\E$} expect 1 = ReqHeader {^\Qip: 1.3.7.255\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.3.0.0"/21\E$} expect 1 = ReqHeader {^\Qip: 1.4.5.255\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.4.4.0"/22\E$} expect 1 = ReqHeader {^\Qip: 1.4.6.64\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.4.4.0"/22\E$} expect 1 = ReqHeader {^\Qip: 1.4.7.64\E$} expect 0 = VCL_acl {^\QMATCH acl1 "1.4.4.0"/22\E$} expect 1 = ReqHeader {^\Qip: 2.3.0.0\E$} expect 0 = VCL_acl {^\QMATCH acl1 "2.3.0.0"/21 fixed: folded\E} expect 1 = ReqHeader {^\Qip: 2.3.5.255\E$} expect 0 = VCL_acl {^\QMATCH acl1 "2.3.0.0"/21 fixed: folded\E} expect 1 = ReqHeader {^\Qip: 2.2.255.255\E$$} expect 0 = VCL_acl {^\QNO_MATCH acl1\E$} expect 1 = ReqHeader {^\Qip: 2.3.8.0\E$} expect 0 = VCL_acl {^\QNO_MATCH acl1\E$} expect 1 = ReqHeader {^\Qip: 2.9.1.1\E$} 
expect 0 = VCL_acl {^\QNO_MATCH acl1\E$} expect 1 = ReqHeader {^\Qip: 2.10.1.1\E$} expect 0 = VCL_acl {^\QMATCH acl1 "2.10.0.0"/15\E$} expect 1 = ReqHeader {^\Qip: 2.12.0.0\E$} expect 0 = VCL_acl {^\QMATCH acl1 "2.12.0.0"/14 fixed: folded\E} expect 1 = ReqHeader {^\Qip: 2.15.255.255\E$} expect 0 = VCL_acl {^\QMATCH acl1 "2.12.0.0"/14 fixed: folded\E} expect 1 = ReqHeader {^\Qip: 2.16.1.1\E$} expect 0 = VCL_acl {^\QNO_MATCH acl1\E} expect 1 = ReqHeader {^\Qip: 2.224.1.1\E$} expect 0 = VCL_acl {^\QNO_MATCH acl1\E$} expect 1 = ReqHeader {^\Qip: 2.225.1.1\E$} expect 0 = VCL_acl {^\QMATCH acl1 "2.225.0.0"/16\E$} expect 1 = ReqHeader {^\Qip: 2.226.1.1\E$} expect 0 = VCL_acl {^\QMATCH acl1 "2.226.0.0"/15 fixed: folded\E} expect 1 = ReqHeader {^\Qip: 2.227.1.1\E$} expect 0 = VCL_acl {^\QMATCH acl1 "2.226.0.0"/15 fixed: folded\E} expect 1 = ReqHeader {^\Qip: 10.0.3.255\E$} expect 0 = VCL_acl {^\QMATCH acl1 "10.0.0.0"/22 fixed: folded\E} expect 1 = ReqHeader {^\Qip: 10.1.1.255\E$} expect 0 = VCL_acl {^\QMATCH acl1 "10.1.0.0"/23 fixed: folded\E} expect 1 = ReqHeader {^\Qip: 10.2.0.255\E$} expect 0 = VCL_acl {^\QMATCH acl1 "10.2.0.0"/24 fixed: folded\E} expect 1 = ReqHeader {^\Qip: 0bad:cafe:1234::1\E$} expect 0 = VCL_acl {^\QMATCH acl1 "bad:cafe::"/31} } -start client c1 { txreq rxresp } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/c00006.vtc000066400000000000000000000007171457605730600210310ustar00rootroot00000000000000varnishtest "Test banning a url" server s1 { rxreq expect req.url == "/foo" txresp -body "1111\n" rxreq expect req.url == "/foo" txresp -body "11111\n" } -start varnish v1 -vcl+backend { } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 5 } client c1 -run varnish v1 -cli "ban req.url ~ foo" client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 6 } client c1 -run varnish-7.5.0/bin/varnishtest/tests/c00007.vtc000066400000000000000000000001531457605730600210240ustar00rootroot00000000000000varnishtest "Test banning a hash" varnish v1 -arg "-b None" -start varnish v1 -clierr 101 "ban.hash foo" varnish-7.5.0/bin/varnishtest/tests/c00008.vtc000066400000000000000000000031471457605730600210330ustar00rootroot00000000000000varnishtest "Test Client IMS" server s1 { rxreq expect req.url == "/foo" txresp -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -hdr {ETag: "foo"} \ -body "11111\n" rxreq expect req.url == "/bar" txresp -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -hdr {ETag: "bar"} } -start varnish v1 -vcl+backend { } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.etag == {"foo"} expect resp.http.content-length == "6" expect resp.bodylen == 6 txreq -url "/foo" \ -hdr "If-Modified-Since: Thu, 26 Jun 2008 12:00:00 GMT" rxresp expect resp.status == 200 expect resp.http.content-length == "6" expect resp.http.etag == {"foo"} expect resp.bodylen == 6 txreq -url "/foo" \ -hdr "If-Modified-Since: Thu, 26 Jun 2008 12:00:01 GMT" rxresp -no_obj expect resp.status == 304 expect resp.http.etag == {"foo"} expect resp.http.content-length == "" expect resp.bodylen == "" txreq -url "/foo" \ -hdr "If-Modified-Since: Thu, 26 Jun 2008 12:00:02 GMT" rxresp -no_obj expect resp.status == 304 expect resp.http.etag == {"foo"} expect resp.http.content-length == "" expect resp.bodylen == "" txreq -url "/bar" rxresp expect resp.status == 200 expect resp.http.etag == {"bar"} expect resp.http.content-length == "0" expect resp.bodylen == 0 txreq -url "/bar" \ -hdr 
"If-Modified-Since: Thu, 26 Jun 2008 12:00:01 GMT" rxresp -no_obj expect resp.status == 304 expect resp.http.etag == {"bar"} expect resp.http.content-length == expect resp.bodylen == } client c1 -run client c1 -run varnish-7.5.0/bin/varnishtest/tests/c00009.vtc000066400000000000000000000036571457605730600210420ustar00rootroot00000000000000varnishtest "Test restarts" server s1 { rxreq expect req.url == "/foo" txresp -status 404 } -start server s2 { rxreq expect req.url == "/foo" txresp -body "foobar" } -start varnish v1 -syntax 4.0 -arg "-smysteve=malloc,1m" -vcl+backend { sub vcl_recv { if (req.restarts == 0) { set req.url = "/foo"; set req.method = "POST"; set req.proto = "HTTP/1.2"; set req.http.preserveme = "1"; set req.storage = storage.mysteve; set req.ttl = 42m; set req.esi = false; set req.backend_hint = s2; set req.hash_ignore_busy = true; set req.hash_always_miss = true; } set req.http.restarts = req.restarts; } sub vcl_backend_fetch { if (bereq.http.restarts == "0") { set bereq.backend = s1; } else { set bereq.backend = s2; } } sub vcl_backend_response { if (beresp.status != 200) { set beresp.uncacheable = true; set beresp.ttl = 0s; return (deliver); } } sub vcl_deliver { if (resp.status != 200) { return (restart); } set resp.http.method = req.method; set resp.http.url = req.url; set resp.http.proto = req.proto; set resp.http.preserveme = req.http.preserveme; set resp.http.restarts = req.restarts; set resp.http.storage = req.storage; set resp.http.ttl = req.ttl; set resp.http.esi = req.esi; set resp.http.backend_hint = req.backend_hint; set resp.http.hash-ignore-busy = req.hash_ignore_busy; set resp.http.hash-always-miss = req.hash_always_miss; } } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.method == POST expect resp.http.url == /foo expect resp.http.proto == HTTP/1.2 expect resp.http.preserveme == 1 expect resp.http.restarts == 1 expect resp.http.storage == storage.mysteve expect resp.http.ttl == 2520.000 expect resp.http.esi == false expect resp.http.backend_hint == s2 expect resp.http.hash-ignore-busy == true expect resp.http.hash-always-miss == true } client c1 -run varnish-7.5.0/bin/varnishtest/tests/c00010.vtc000066400000000000000000000015301457605730600210160ustar00rootroot00000000000000varnishtest "Test pass from hit" server s1 { rxreq expect req.url == "/foo" txresp -body foobar rxreq expect req.url == "/foo" txresp -body foobar1 rxreq expect req.url == "/pass" txresp -body foobar12 } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/foo") { return(hash); } else { return(pass); } } sub vcl_hit { return(pass); } } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.x-varnish == "1001" txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 7 expect resp.http.x-varnish == "1003 1004" # a pass following a hit on the same wrk should not be # reported as a hit txreq -url "/pass" rxresp expect resp.status == 200 expect resp.bodylen == 8 expect resp.http.x-varnish == "1005" } client c1 -run varnish-7.5.0/bin/varnishtest/tests/c00011.vtc000066400000000000000000000024721457605730600210250ustar00rootroot00000000000000varnishtest "Test hit for miss (beresp.uncacheable = true)" server s1 { rxreq expect req.url == "/foo" txresp -body foobar rxreq expect req.url == "/foo" txresp -body foobar1 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.uncacheable = true; } sub vcl_deliver { set 
resp.http.o_uncacheable = obj.uncacheable; set resp.http.o_age = obj.age; set resp.http.o_ttl = obj.ttl; set resp.http.o_grace = obj.grace; set resp.http.o_keep = obj.keep; } } -start logexpect l1 -v v1 -g vxid { expect * 1003 HitMiss "^1002 119" } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.x-varnish == "1001" expect resp.http.o_age >= 0 expect resp.http.o_age < 0.5 expect resp.http.o_ttl > 119.5 expect resp.http.o_ttl <= 120 expect resp.http.o_grace == "10.000" expect resp.http.o_keep == "0.000" expect resp.http.o_uncacheable == "true" txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 7 expect resp.http.x-varnish == "1003" expect resp.http.o_age >= 0 expect resp.http.o_age < 0.5 expect resp.http.o_ttl > 119.5 expect resp.http.o_ttl <= 120 expect resp.http.o_grace == "10.000" expect resp.http.o_keep == "0.000" expect resp.http.o_uncacheable == "true" } client c1 -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/c00012.vtc000066400000000000000000000022371457605730600210250ustar00rootroot00000000000000varnishtest "Test pass from miss" server s1 { rxreq expect req.url == "/foo" txresp -body foobar rxreq expect req.url == "/foo" txresp -body foobar1 } -start varnish v1 -vcl+backend { sub vcl_miss { return(pass); } sub vcl_deliver { set resp.http.o_uncacheable = obj.uncacheable; set resp.http.o_age = obj.age; set resp.http.o_ttl = obj.ttl; set resp.http.o_grace = obj.grace; set resp.http.o_keep = obj.keep; } } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.x-varnish == "1001" expect resp.http.o_age >= 0 expect resp.http.o_age < 0.5 expect resp.http.o_ttl <= -0 expect resp.http.o_ttl > -0.5 expect resp.http.o_grace == "0.000" expect resp.http.o_keep == "0.000" expect resp.http.o_uncacheable == "true" txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 7 expect resp.http.x-varnish == "1003" expect resp.http.o_age >= 0 expect resp.http.o_age < 0.5 expect resp.http.o_ttl <= -0 expect resp.http.o_ttl > -0.5 expect resp.http.o_grace == "0.000" expect resp.http.o_keep == "0.000" expect resp.http.o_uncacheable == "true" } client c1 -run varnish-7.5.0/bin/varnishtest/tests/c00013.vtc000066400000000000000000000017351457605730600210300ustar00rootroot00000000000000varnishtest "Test parking second request on backend delay (waitinglist)" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq expect req.url == "/foo" send "HTTP/1.0 200 OK\r\nConnection: close\r\n\r\n" delay .2 barrier b1 sync delay .2 send "line1\n" delay .2 barrier b2 sync send "line2\n" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } } -start varnish v1 -cliok "param.set debug +syncvsl" client c1 { txreq -url "/foo" -hdr "client: c1" rxresp expect resp.status == 200 expect resp.bodylen == 12 expect resp.http.x-varnish == "1001" } -start barrier b1 sync client c2 { txreq -url "/foo" -hdr "client: c2" delay .2 barrier b2 sync rxresp expect resp.status == 200 expect resp.bodylen == 12 expect resp.http.x-varnish == "1004 1002" } -run client c1 -wait varnish v1 -vsl_catchup varnish v1 -expect client_req == 2 varnish v1 -expect busy_sleep >= 1 varnish v1 -expect busy_wakeup >= 1 varnish v1 -stop varnish-7.5.0/bin/varnishtest/tests/c00014.vtc000066400000000000000000000020551457605730600210250ustar00rootroot00000000000000varnishtest "Test parking second request on backend delay, then pass" barrier b1 cond 2 server s1 
{ rxreq expect req.url == "/foo" barrier b1 sync send "HTTP/1.1 200 OK\r\nContent-Length: 12\r\n\r\n" send "line1\n" send "line2\n" rxreq expect req.url == "/foo" txresp -body "foobar" } -start varnish v1 -vcl+backend { sub vcl_miss { set req.http.hitmiss = req.is_hitmiss; } sub vcl_deliver { set resp.http.hitmiss = req.http.hitmiss; } sub vcl_backend_response { set beresp.do_stream = false; set beresp.uncacheable = true; } } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 12 expect resp.http.x-varnish == "1001" expect resp.http.hitmiss == "false" } -start barrier b1 sync delay .2 client c2 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.x-varnish == "1004" expect resp.http.hitmiss == "true" } -run client c1 -wait varnish v1 -expect cache_hitpass == 0 varnish v1 -expect cache_hitmiss == 1 varnish v1 -expect cache_miss == 2 varnish-7.5.0/bin/varnishtest/tests/c00015.vtc000066400000000000000000000022761457605730600210330ustar00rootroot00000000000000varnishtest "Test switching VCLs" server s1 { rxreq expect req.url == "/foo" txresp -body "foobar" rxreq expect req.url == "/foo" txresp -body "foobar1" } -start varnish v1 -vcl+backend { } -start varnish v1 -vcl+backend { sub vcl_recv { return (pass); } } varnish v1 -cli "vcl.list" varnish v1 -cli "vcl.use vcl1" client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.x-varnish == "1001" } -run varnish v1 -cli "vcl.use vcl2" client c2 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 7 expect resp.http.x-varnish == "1004" } -run varnish v1 -cli "vcl.use vcl1" client c3 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.x-varnish == "1007 1002" } -run varnish v1 -cli "vcl.show vcl2" varnish v1 -cli "vcl.show -v vcl2" varnish v1 -cli "vcl.discard vcl2" varnish v1 -cli "vcl.list" varnish v1 -cli "vcl.show" varnish v1 -cli "vcl.show -v" varnish v1 -clierr 106 "vcl.show -x nowhere" varnish v1 -clierr 106 "vcl.show nothere" varnish v1 -clierr 106 "vcl.use nothere" varnish v1 -clierr 106 "vcl.discard nowhere" varnish v1 -clierr 300 "vcl.discard vcl1" varnish-7.5.0/bin/varnishtest/tests/c00016.vtc000066400000000000000000000033011457605730600210220ustar00rootroot00000000000000varnishtest "Test header filtering Table/Connection header" server s1 { rxreq expect req.url == "/foo" expect req.http.Foo == "bar" expect req.http.FromVCL == "123" expect req.http.Proxy-Authenticate == "" expect req.http.pROXY-aUTHENTICATE == "" expect req.http.Proxy-Authenticat == "3" expect req.http.Proxy-Authenticatd == "4" expect req.http.Proxy-Authenticatef == "5" txresp -hdr "Bar: foo" -body "foobar" rxreq expect req.url == "/bar" expect req.http.Foo == expect req.http.FromVCL == "123" txresp -hdr "Bar: fnry,glyf, FOO ,brok" \ -hdr "Connection: bar" \ -body "foobar" } -start varnish v1 -vcl+backend { sub vcl_recv { set req.http.FromVCL = "123"; } } -start client c1 { txreq -url "/foo" -hdr "Foo: bar" \ -hdr "Proxy-Authenticate: 1" \ -hdr "pROXY-aUTHENTICATE: 2" \ -hdr "Proxy-Authenticat: 3" \ -hdr "Proxy-Authenticatd: 4" \ -hdr "Proxy-Authenticatef: 5" rxresp expect resp.http.Bar == "foo" txreq -url "/bar" \ -hdr "Foo: bar2" \ -hdr "Connection: FromVCL, foo, close" rxresp expect req.http.Bar == } -run client c1 { txreq -url "/bar" \ -hdr "foo: 1" \ -hdr "Age: 200" \ -hdr "Connection: Age" rxresp expect resp.status == 400 } -run server s1 { rxreq expect req.http.baa == "" 
expect req.http.baax == txresp -hdr "Foox: 1" -hdr "foox: 2" -hdr "Connection: foox" } -start varnish v1 -vcl+backend { sub vcl_backend_fetch { set bereq.http.baa = bereq.http.baax; } sub vcl_deliver { set resp.http.foo = resp.http.foox; } } client c1 { txreq -hdr "Baax: 1" -hdr "Baax: 2" -hdr "Connection: baax" rxresp expect resp.status == 200 expect resp.http.foo == "" expect resp.http.foox == } -run varnish-7.5.0/bin/varnishtest/tests/c00017.vtc000066400000000000000000000013701457605730600210270ustar00rootroot00000000000000varnishtest "Test Backend Polling" barrier b1 cond 2 server s1 { # Probes loop 8 { rxreq expect req.url == "/" txresp -hdr "Bar: foo" -body "foobar" accept } loop 3 { rxreq expect req.url == "/" txresp -status 404 -hdr "Bar: foo" -body "foobar" accept } loop 2 { rxreq expect req.url == "/" txresp -proto "FROBOZ" -status 200 -hdr "Bar: foo" -body "foobar" accept } loop 2 { rxreq expect req.url == "/" send "HTTP/1.1 200 \r\n" accept } barrier b1 sync } -start varnish v1 -vcl { backend foo { .host = "${s1_addr}"; .port = "${s1_port}"; .probe = { .timeout = 1 s; .interval = 0.1 s; } } } -start barrier b1 sync varnish v1 -cliexpect "^CLI RX| -+H{10}-{5}H{2}-{0,5} Happy" "backend.list -p" varnish-7.5.0/bin/varnishtest/tests/c00018.vtc000066400000000000000000000134011457605730600210260ustar00rootroot00000000000000varnishtest "Check Expect headers / 100 Continue" server s1 { rxreq txresp rxreq txresp rxreq txresp rxreq txresp rxreq txresp } -start varnish v1 -vcl+backend { import std; sub vcl_recv { if (req.url == "/") { return (pass); } std.late_100_continue(true); if (req.url ~ "^/err") { return (synth(405)); } if (req.url ~ "^/synthnocl") { return (synth(200)); } if (req.url == "/latecache") { std.cache_req_body(1KB); } return (pass); } sub vcl_synth { if (req.url == "/synthnocl") { unset resp.http.Connection; } } } -start logexpect l1 -v v1 -g raw { # base case: bad Expect expect * 1001 RespStatus 417 # base case: Immediate 100-continue expect * 1003 ReqUnset {^Expect: 100-continue$} expect 0 1003 ReqStart {^.} expect 0 1003 ReqMethod {^POST$} expect 0 1003 ReqURL {^/$} expect 0 1003 ReqProtocol {^HTTP/1.1$} expect * 1003 ReqHeader {^Content-Length: 20$} expect * 1003 VCL_call {^RECV$} expect 0 1003 VCL_return {^pass$} expect 0 1003 RespProtocol {^HTTP/1.1$} expect 0 1003 RespStatus {^100$} expect 0 1003 RespReason {^Continue$} expect 0 1003 VCL_call {^HASH$} # no 100 if client has already sent body (and it fits) expect * 1005 ReqUnset {^Expect: 100-continue$} expect 0 1005 ReqStart {^.} expect 0 1005 ReqMethod {^POST$} expect 0 1005 ReqURL {^/$} expect 0 1005 ReqProtocol {^HTTP/1.1$} expect * 1005 ReqHeader {^Content-Length: 3$} expect * 1005 VCL_call {^RECV$} expect 0 1005 VCL_return {^pass$} expect 0 1005 VCL_call {^HASH$} # late no cache expect * 1007 ReqUnset {^Expect: 100-continue$} expect 0 1007 ReqStart {^.} expect 0 1007 ReqMethod {^POST$} expect 0 1007 ReqURL {^/late$} expect 0 1007 ReqProtocol {^HTTP/1.1$} expect * 1007 ReqHeader {^Content-Length: 20$} expect * 1007 VCL_call {^RECV$} expect 0 1007 VCL_return {^pass$} expect 0 1007 VCL_call {^HASH$} expect 0 1007 VCL_return {^lookup$} expect 0 1007 VCL_call {^PASS$} expect 0 1007 VCL_return {^fetch$} expect 0 1007 Link {^bereq 1008 pass$} expect 0 1007 Storage {^(malloc|umem) Transient$} expect 0 1007 RespProtocol {^HTTP/1.1$} expect 0 1007 RespStatus {^100$} expect 0 1007 RespReason {^Continue$} expect 0 1007 Timestamp {^ReqBody:} expect 0 1007 Timestamp {^Fetch:} # late cache expect * 1009 ReqUnset {^Expect: 
100-continue$} expect 0 1009 ReqStart {^.} expect 0 1009 ReqMethod {^POST$} expect 0 1009 ReqURL {^/latecache$} expect 0 1009 ReqProtocol {^HTTP/1.1$} expect * 1009 ReqHeader {^Content-Length: 20$} expect * 1009 VCL_call {^RECV$} expect 0 1009 Storage {^(malloc|umem) Transient$} expect 0 1009 RespProtocol {^HTTP/1.1$} expect 0 1009 RespStatus {^100$} expect 0 1009 RespReason {^Continue$} expect 0 1009 Timestamp {^ReqBody:} expect 0 1009 VCL_return {^pass$} expect 0 1009 VCL_call {^HASH$} # err expect * 1011 ReqUnset {^Expect: 100-continue$} expect 0 1011 ReqStart {^.} expect 0 1011 ReqMethod {^POST$} expect 0 1011 ReqURL {^/err$} expect 0 1011 ReqProtocol {^HTTP/1.1$} expect * 1011 ReqHeader {^Content-Length: 20$} expect * 1011 VCL_call {^RECV$} expect 0 1011 VCL_return {^synth$} expect 0 1011 VCL_call {^HASH$} expect 0 1011 VCL_return {^lookup$} expect 0 1011 RespProtocol {^HTTP/1.1$} expect 0 1011 RespStatus {^405$} expect 0 1011 RespReason {^Method Not Allowed$} expect 0 1011 RespHeader {^Date:} expect 0 1011 RespHeader {^Server: Varnish$} expect 0 1011 RespHeader {^X-Varnish: 1011$} expect * 1011 Timestamp {^Process:} } -start client c1 { # base case: bad Expect txreq -url "/" -req POST -hdr "Expect: 101-continue" -body "foo" rxresp expect resp.status == 417 } -run client c1 { # base case: Immediate 100-continue txreq -url "/" -req POST -hdr "Expect: 100-continue " \ -hdr "Content-Length: 20" rxresp expect resp.status == 100 send "01234567890123456789" rxresp expect resp.status == 200 # no 100 if client has already sent body (and it fits) txreq -url "/" -req POST -hdr "Expect: 100-continue " -body "foo" rxresp expect resp.status == 200 # late no cache txreq -url "/late" -req POST -hdr "Expect: 100-continue " \ -hdr "Content-Length: 20" rxresp expect resp.status == 100 send "01234567890123456789" rxresp expect resp.status == 200 # late cache txreq -url "/latecache" -req POST -hdr "Expect: 100-continue " \ -hdr "Content-Length: 20" rxresp expect resp.status == 100 send "01234567890123456789" rxresp expect resp.status == 200 # err txreq -url "/err" -req POST -hdr "Expect: 100-continue " \ -hdr "Content-Length: 20" rxresp expect resp.status == 405 expect_close } -run client c1 { # Immediate 100-continue with Client Connection: close txreq -url "/" -req POST -hdr "Expect: 100-continue " \ -hdr "Content-Length: 20" \ -hdr "Connection: close" rxresp expect resp.status == 100 send "01234567890123456789" rxresp expect resp.status == 200 expect_close } -run client c1 { # vcl vetoing the Connection: close in synth txreq -url "/synthnocl" -req POST -hdr "Expect: 100-continue " \ -hdr "Content-Length: 20" rxresp expect resp.status == 100 send "01234567890123456789" rxresp expect resp.status == 200 # vcl vetoing the Connection: close in synth but client close txreq -url "/synthnocl" -req POST -hdr "Expect: 100-continue " \ -hdr "Content-Length: 20" \ -hdr "Connection: close" rxresp expect resp.status == 200 expect_close } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/c00019.vtc000066400000000000000000000043531457605730600210350ustar00rootroot00000000000000varnishtest "Check ban counters and duplicate ban elimination" server s1 { rxreq txresp -hdr "foo: 0" -body "foo0" rxreq txresp -hdr "foo: 1" -body "foo1" rxreq txresp -hdr "foo: 2" -body "foo2" rxreq txresp -hdr "foo: 3" -body "foo3" } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok "ban req.url ~ FOO" # There is one "magic" ban from boot varnish v1 -expect bans_added == 2 varnish v1 -cliok "ban.list" # Our fetch is not 
affected by the ban # as the FOO-ban was preexisting client c1 { txreq -url /BAR rxresp expect resp.http.foo == 0 txreq -url /FOO rxresp expect resp.http.foo == 1 } -run varnish v1 -cliok "ban.list" varnish v1 -expect bans_tested == 0 varnish v1 -expect bans_tests_tested == 0 # Add another ban varnish v1 -cliok "ban req.url ~ FOO" varnish v1 -expect bans_added == 3 varnish v1 -cliok "ban.list" # The cached object will be band, and a new # fetched from the backend client c1 { txreq -url /FOO rxresp expect resp.http.foo == 2 } -run varnish v1 -expect bans_tested == 1 varnish v1 -expect bans_tests_tested == 1 varnish v1 -cliok "ban.list" # Fetch the cached copy, just for grins client c1 { txreq -url /FOO rxresp expect resp.http.foo == 2 } -run # Now add another two bans, Kilroy should not be hit varnish v1 -cliok "ban req.url ~ KILROY" varnish v1 -cliok "ban req.url ~ FOO" varnish v1 -expect bans_added == 5 # Enable dup removal of bans varnish v1 -cliok "param.set ban_dups on" # This should incapacitate the two previous FOO bans. varnish v1 -cliok "ban req.url ~ FOO" varnish v1 -expect bans_added == 6 varnish v1 -expect bans_dups == 3 varnish v1 -cliok "ban.list" # And we should get a fresh object from backend client c1 { txreq -url /FOO rxresp expect resp.http.foo == 3 } -run # With only two objects having ever been compared varnish v1 -expect bans_tested == 2 varnish v1 -expect bans_tests_tested == 2 varnish v1 -cliok "ban.list" varnish v1 -clijson "ban.list -j" # Test a bogus regexp varnish v1 -clierr 106 "ban req.url ~ [[[" # Ban expression with quoting varnish v1 -cliok {ban req.url ~ "BAR"} shell {varnishadm -n ${tmpdir}/v1 ban 'obj.http.Host ~ \"Foo\"'} varnish v1 -cliexpect {(?s)\d+\.\d+\s+\d+\s+-\s+\Qobj.http.Host ~ "Foo"\E} "ban.list" varnish v1 -clijson "ban.list -j" varnish-7.5.0/bin/varnishtest/tests/c00020.vtc000066400000000000000000000026311457605730600210220ustar00rootroot00000000000000varnishtest "Test -h critbit a bit" server s1 { rxreq expect req.url == "/" txresp -hdr "ID: slash" -hdr "Connection: close" -body "012345\n" } -start varnish v1 -arg "-hcritbit" -vcl+backend { } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" expect resp.http.ID == "slash" } -run delay .1 client c2 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1004 1002" expect resp.http.ID == "slash" } -run delay .1 server s1 { rxreq expect req.url == "/foo" txresp -hdr "ID: foo" -body "012345\n" rxreq expect req.url == "/bar" txresp -hdr "ID: bar" -body "012345\n" } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1006" expect resp.http.ID == "foo" delay .1 txreq -url "/" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1008 1002" expect resp.http.ID == "slash" delay .1 txreq -url "/bar" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1009" expect resp.http.ID == "bar" delay .1 txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1011 1007" expect resp.http.ID == "foo" } -run varnish v1 -expect sess_conn == 3 varnish v1 -expect cache_hit == 3 varnish v1 -expect cache_miss == 3 varnish v1 -expect client_req == 6 varnish v1 -expect s_sess == 3 varnish v1 -expect s_fetch == 3 varnish-7.5.0/bin/varnishtest/tests/c00021.vtc000066400000000000000000000046261457605730600210310ustar00rootroot00000000000000varnishtest "Test banning a url with cli:ban" server s1 { rxreq expect req.url == "/foo" txresp -hdr "foo: bar5" 
-body "1111\n" rxreq expect req.url == "/foo" txresp -hdr "foo: bar6" -body "11111\n" rxreq expect req.url == "/foo" txresp -hdr "foo: bar7" -body "111111\n" rxreq expect req.url == "/foo" txresp -hdr "foo: bar8" -body "1111111\n" } -start varnish v1 -vcl+backend { } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.foo == bar5 expect resp.bodylen == 5 } -run # syntax checks varnish v1 -clierr 104 "ban" varnish v1 -clierr 104 "ban foo" varnish v1 -clierr 104 "ban foo bar" varnish v1 -clierr 106 "ban a b c && a" varnish v1 -clierr 106 "ban a b c && a b" varnish v1 -clierr 106 "ban a b c || a b c" varnish v1 -cliok "ban.list" # exact match, not matching varnish v1 -cliok "ban req.url == foo" varnish v1 -cliok "ban.list" client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.foo == bar5 expect resp.bodylen == 5 } -run # exact match, matching varnish v1 -cliok "ban req.url == /foo" varnish v1 -cliok "ban.list" client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.foo == bar6 expect resp.bodylen == 6 } -run # regexp nonmatch varnish v1 -cliok "ban req.url ~ bar" varnish v1 -cliok "ban.list" client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.foo == bar6 expect resp.bodylen == 6 } -run # regexp match varnish v1 -cliok "ban req.url ~ foo" varnish v1 -cliok "ban.list" client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.foo == bar7 expect resp.bodylen == 7 } -run # header check, nonmatch varnish v1 -cliok "ban obj.http.foo != bar7" varnish v1 -cliok "ban.list" client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.foo == bar7 expect resp.bodylen == 7 } -run # header check, match varnish v1 -cliok "ban req.http.foo == barcheck" varnish v1 -cliok "ban.list" client c1 { txreq -url "/foo" -hdr "foo: barcheck" rxresp expect resp.status == 200 expect resp.http.foo == bar8 expect resp.bodylen == 8 } -run # header check, no header varnish v1 -cliok "ban req.url ~ foo && obj.http.bar == barcheck" varnish v1 -cliok "ban.list" varnish v1 -clijson "ban.list -j" client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.foo == bar8 expect resp.bodylen == 8 } -run varnish-7.5.0/bin/varnishtest/tests/c00022.vtc000066400000000000000000000063631457605730600210320ustar00rootroot00000000000000varnishtest "Test banning a url with VCL ban" server s1 { rxreq expect req.url == "/foo" txresp -hdr "foo: bar5" -body "1111\n" rxreq expect req.url == "/foo" txresp -hdr "foo: bar6" -body "11111\n" rxreq expect req.url == "/foo" txresp -hdr "foo: bar7" -body "111111\n" rxreq expect req.url == "/foo" txresp -hdr "foo: bar8" -body "1111111\n" } -start # code from purging.rst varnish v1 -vcl+backend { import std; sub vcl_recv { if (req.method == "BAN") { if (std.ban("req.http.host == " + req.http.host + " && req.url == " + req.url)) { return(synth(204, "Ban added")); } else { # return ban error in 400 response return(synth(400, std.ban_error())); } } if (req.method == "BANSTR") { if (std.ban(req.http.ban)) { return(synth(204, "Ban added")); } else { # return ban error in 400 response return(synth(400, std.ban_error())); } } } } -start # Fetch into cache client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.foo == bar5 expect resp.bodylen == 5 } -run # Ban: something else client c1 { txreq -req BAN -url /foox rxresp expect resp.status == 204 } -run varnish v1 -cliok "ban.list" # Still in cache client c1 { txreq -url 
"/foo" rxresp expect resp.status == 200 expect resp.http.foo == bar5 expect resp.bodylen == 5 } -run # Ban: it client c1 { txreq -req BAN -url /foo rxresp expect resp.status == 204 } -run varnish v1 -cliok "ban.list" # New obj client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.foo == bar6 expect resp.bodylen == 6 } -run # Ban: everything else client c1 { txreq -req BANSTR -hdr "ban: req.url != /foo" rxresp expect resp.status == 204 } -run varnish v1 -cliok "ban.list" # still there client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.foo == bar6 expect resp.bodylen == 6 } -run # Ban: it client c1 { txreq -req BANSTR -hdr "Ban: obj.http.foo == bar6" rxresp expect resp.status == 204 } -run varnish v1 -cliok "ban.list" # New one client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.foo == bar7 expect resp.bodylen == 7 } -run # Ban: something else client c1 { txreq -req BANSTR -hdr "Ban: obj.http.foo == bar6" rxresp expect resp.status == 204 } -run varnish v1 -cliok "ban.list" # Still there client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.foo == bar7 expect resp.bodylen == 7 } -run # Header match client c1 { txreq -req BANSTR -hdr "Ban: req.http.foo == barcheck" rxresp expect resp.status == 204 } -run varnish v1 -cliok "ban.list" client c1 { txreq -url "/foo" -hdr "foo: barcheck" rxresp expect resp.status == 200 expect resp.http.foo == bar8 expect resp.bodylen == 8 } -run # Header match client c1 { txreq -req BANSTR -hdr "Ban: obj.http.foo == barcheck" rxresp expect resp.status == 204 } -run varnish v1 -cliok "ban.list" varnish v1 -clijson "ban.list -j" client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.foo == bar8 expect resp.bodylen == 8 } -run # Error client c1 { txreq -req BANSTR -hdr "Ban: xobj.http.foo == barcheck" rxresp expect resp.status == 400 expect resp.reason == {Unknown or unsupported field "xobj.http.foo"} } -run varnish-7.5.0/bin/varnishtest/tests/c00023.vtc000066400000000000000000000056501457605730600210310ustar00rootroot00000000000000varnishtest "Test -h critbit for digest edges" server s1 { rxreq expect req.url == "/1" txresp -body "\n" rxreq expect req.url == "/2" txresp -body "x\n" rxreq expect req.url == "/3" txresp -body "xx\n" rxreq expect req.url == "/4" txresp -body "xxx\n" rxreq expect req.url == "/5" txresp -body "xxxx\n" rxreq expect req.url == "/6" txresp -body "xxxxx\n" rxreq expect req.url == "/7" txresp -body "xxxxxx\n" rxreq expect req.url == "/8" txresp -body "xxxxxxx\n" rxreq expect req.url == "/9" txresp -body "xxxxxxxx\n" } -start varnish v1 -arg "-hcritbit" -vcl+backend { } -start varnish v1 -cliok "param.set debug +hashedge" client c1 { txreq -url "/1" rxresp expect resp.status == 200 expect resp.bodylen == 1 expect resp.http.X-Varnish == "1001" txreq -url "/2" rxresp expect resp.bodylen == 2 expect resp.status == 200 expect resp.http.X-Varnish == "1003" txreq -url "/3" rxresp expect resp.bodylen == 3 expect resp.status == 200 expect resp.http.X-Varnish == "1005" txreq -url "/4" rxresp expect resp.bodylen == 4 expect resp.status == 200 expect resp.http.X-Varnish == "1007" txreq -url "/5" rxresp expect resp.bodylen == 5 expect resp.status == 200 expect resp.http.X-Varnish == "1009" txreq -url "/6" rxresp expect resp.bodylen == 6 expect resp.status == 200 expect resp.http.X-Varnish == "1011" txreq -url "/7" rxresp expect resp.bodylen == 7 expect resp.status == 200 expect resp.http.X-Varnish == "1013" txreq -url "/8" 
rxresp expect resp.bodylen == 8 expect resp.status == 200 expect resp.http.X-Varnish == "1015" txreq -url "/9" rxresp expect resp.bodylen == 9 expect resp.status == 200 expect resp.http.X-Varnish == "1017" } -run client c1 { txreq -url "/1" rxresp expect resp.status == 200 expect resp.bodylen == 1 expect resp.http.X-Varnish == "1020 1002" txreq -url "/2" rxresp expect resp.bodylen == 2 expect resp.status == 200 expect resp.http.X-Varnish == "1021 1004" txreq -url "/3" rxresp expect resp.bodylen == 3 expect resp.status == 200 expect resp.http.X-Varnish == "1022 1006" txreq -url "/4" rxresp expect resp.bodylen == 4 expect resp.status == 200 expect resp.http.X-Varnish == "1023 1008" txreq -url "/5" rxresp expect resp.bodylen == 5 expect resp.status == 200 expect resp.http.X-Varnish == "1024 1010" txreq -url "/6" rxresp expect resp.bodylen == 6 expect resp.status == 200 expect resp.http.X-Varnish == "1025 1012" txreq -url "/7" rxresp expect resp.bodylen == 7 expect resp.status == 200 expect resp.http.X-Varnish == "1026 1014" txreq -url "/8" rxresp expect resp.bodylen == 8 expect resp.status == 200 expect resp.http.X-Varnish == "1027 1016" txreq -url "/9" rxresp expect resp.bodylen == 9 expect resp.status == 200 expect resp.http.X-Varnish == "1028 1018" } -run varnish v1 -expect sess_conn == 2 varnish v1 -expect cache_hit == 9 varnish v1 -expect cache_miss == 9 varnish v1 -expect client_req == 18 varnish-7.5.0/bin/varnishtest/tests/c00024.vtc000066400000000000000000000005671457605730600210340ustar00rootroot00000000000000varnishtest "Test restart in vcl_synth" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.restarts == 0) { return (synth(701, "FOO")); } } sub vcl_synth { if (req.restarts < 1) { return (restart); } else { set resp.status = 201; } } } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/c00025.vtc000066400000000000000000000031641457605730600210310ustar00rootroot00000000000000varnishtest "Test If-None-Match" server s1 { rxreq expect req.url == / txresp -hdr {ETag: "123456789"} -bodylen 10 rxreq expect req.url == /other txresp -hdr {ETag: W/"123456789"} -bodylen 10 } -start varnish v1 -vcl+backend { } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 10 expect resp.http.etag == {"123456789"} txreq -hdr {If-None-Match: "12345678"} rxresp expect resp.status == 200 txreq -hdr {If-None-Match: "123456789"} rxresp -no_obj expect resp.status == 304 txreq -hdr "Range: bytes=1-2" -hdr {If-None-Match: "123456789"} rxresp -no_obj expect resp.status == 304 txreq -hdr {If-None-Match: W/"12345678"} rxresp expect resp.status == 200 txreq -hdr {If-None-Match: W/"123456789"} rxresp -no_obj expect resp.status == 304 txreq -hdr "Range: bytes=1-2" -hdr {If-None-Match: W/"123456789"} rxresp expect resp.status == 206 txreq -url /other rxresp expect resp.status == 200 expect resp.bodylen == 10 expect resp.http.etag == {W/"123456789"} txreq -url /other -hdr {If-None-Match: "12345678"} rxresp expect resp.status == 200 txreq -url /other -hdr {If-None-Match: "123456789"} rxresp -no_obj expect resp.status == 304 txreq -url /other -hdr "Range: bytes=1-2" \ -hdr {If-None-Match: "123456789"} rxresp expect resp.status == 206 txreq -url /other -hdr {If-None-Match: W/"12345678"} rxresp expect resp.status == 200 txreq -url /other -hdr {If-None-Match: W/"123456789"} rxresp -no_obj expect resp.status == 304 txreq -url /other -hdr "Range: bytes=1-2" \ -hdr {If-None-Match: W/"123456789"} rxresp 
expect resp.status == 206 } -run varnish-7.5.0/bin/varnishtest/tests/c00026.vtc000066400000000000000000000021201457605730600210210ustar00rootroot00000000000000varnishtest "Test client If-None-Match and If-Modified-Since together" server s1 { rxreq expect req.url == / txresp -hdr {ETag: "123456789"} \ -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -bodylen 10 rxreq expect req.url == /other txresp -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -bodylen 10 } -start varnish v1 -vcl+backend { } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 10 expect resp.http.etag == {"123456789"} txreq -hdr "If-Modified-Since: Thu, 26 Jun 2008 12:00:00 GMT" \ -hdr {If-None-Match: "123456789"} rxresp -no_obj expect resp.status == 304 txreq -hdr "If-Modified-Since: Thu, 26 Jun 2008 12:00:01 GMT" \ -hdr {If-None-Match: "12345678"} rxresp expect resp.status == 200 txreq -hdr "If-Modified-Since: Thu, 26 Jun 2008 12:00:01 GMT" \ -hdr {If-None-Match: "123456789"} rxresp -no_obj expect resp.status == 304 txreq -url /other \ -hdr "If-Modified-Since: Thu, 26 Jun 2008 12:00:01 GMT" \ -hdr {If-None-Match: "123456789"} rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/c00027.vtc000066400000000000000000000014351457605730600210320ustar00rootroot00000000000000varnishtest "Test eviction" server s1 { rxreq expect req.url == "/1" txresp -body "${string,repeat,1024,1}" rxreq expect req.url == "/2" txresp -body "${string,repeat,1024,1}" rxreq expect req.url == "/3" txresp -body "${string,repeat,1024,1}" rxreq expect req.url == "/4" txresp -body "${string,repeat,1024,1}" rxreq expect req.url == "/5" txresp -body "${string,repeat,1024,1}" } -start varnish v1 -arg "-s default,1M" -vcl+backend { sub vcl_backend_response { set beresp.ttl = 10m; } } -start client c1 { txreq -url "/1" rxresp expect resp.status == 200 txreq -url "/2" rxresp expect resp.status == 200 txreq -url "/3" rxresp expect resp.status == 200 txreq -url "/4" rxresp expect resp.status == 200 txreq -url "/5" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/c00028.vtc000066400000000000000000000006611457605730600210330ustar00rootroot00000000000000varnishtest "Test that we can't recurse restarts forever" varnish v1 -vcl { backend bad { .host = "${bad_backend}"; } sub vcl_backend_fetch { set bereq.backend = bad; } sub vcl_backend_error { return (abandon); } sub vcl_synth { set resp.http.restarts = req.restarts; return (restart); } } -start client c1 { txreq -url "/" rxresp expect resp.status == 503 expect resp.http.restarts == 5 } -run varnish-7.5.0/bin/varnishtest/tests/c00031.vtc000066400000000000000000000003411457605730600210200ustar00rootroot00000000000000varnishtest "Worker thread stack size setting" server s1 { rxreq txresp } -start varnish v1 -arg "-p thread_pool_stack=262144" -vcl+backend {} -start client c1 { txreq -url "/" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/c00034.vtc000066400000000000000000000146221457605730600210320ustar00rootroot00000000000000varnishtest "Range headers" server s1 { rxreq txresp -bodylen 100 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } } -start varnish v1 -cliok "param.set http_range_support off" client c1 { txreq -hdr "Range: bytes=0-9" rxresp expect resp.http.accept-ranges == expect resp.status == 200 expect resp.bodylen == 100 } -run varnish v1 -expect s_resp_bodybytes == 100 varnish v1 -vsl_catchup varnish v1 -cliok "param.set http_range_support on" 
client c2 { # Invalid range requests txreq -hdr "Range: bytes =0-9" rxresp expect resp.status == 416 expect resp.bodylen == 0 expect resp.http.content-length == "0" expect resp.http.transfer-encoding == "" expect resp.http.content-range == "bytes */100" txreq -hdr "Range: bytes=0- 9" rxresp expect resp.status == 416 expect resp.bodylen == 0 txreq -hdr "Range: bytes =-9" rxresp expect resp.status == 416 expect resp.bodylen == 0 txreq -hdr "Range: bytes =0-a" rxresp expect resp.status == 416 expect resp.bodylen == 0 txreq -hdr "Range: bytes=-" rxresp expect resp.status == 416 expect resp.bodylen == 0 txreq -hdr "Range: bytes=5-2" rxresp expect resp.status == 416 expect resp.bodylen == 0 txreq -hdr "Range: bytes=-0" rxresp expect resp.status == 416 expect resp.bodylen == 0 txreq -hdr "Range: bytes=100-" rxresp expect resp.status == 416 expect resp.bodylen == 0 } -run varnish v1 -expect s_resp_bodybytes == 100 varnish v1 -vsl_catchup client c3 { # Valid range requests txreq -hdr "Range: bytes=0-49" rxresp expect resp.status == 206 expect resp.bodylen == 50 expect resp.http.content-range == "bytes 0-49/100" expect resp.http.accept-ranges == "bytes" txreq -hdr "Range: bytes=50-99" rxresp expect resp.status == 206 expect resp.bodylen == 50 expect resp.http.content-range == "bytes 50-99/100" txreq -hdr "Range: bytes=-50" rxresp expect resp.status == 206 expect resp.bodylen == 50 expect resp.http.content-range == "bytes 50-99/100" txreq -hdr "Range: bytes=50-" rxresp expect resp.status == 206 expect resp.bodylen == 50 expect resp.http.content-range == "bytes 50-99/100" txreq -hdr "Range: bytes=0-0" rxresp expect resp.status == 206 expect resp.bodylen == 1 expect resp.http.content-range == "bytes 0-0/100" txreq -hdr "Range: bytes=-2000" rxresp expect resp.status == 206 expect resp.bodylen == 100 expect resp.http.content-range == "bytes 0-99/100" txreq -hdr "Range: bytes=0-" rxresp expect resp.status == 206 expect resp.bodylen == 100 expect resp.http.content-range == "bytes 0-99/100" } -run varnish v1 -expect s_resp_bodybytes == 501 varnish v1 -vsl_catchup # Test Range streaming with streaming objects with C-L server s1 { loop 2 { rxreq txresp -nolen -hdr "Content-Length: 100" send "0123456789" send "0123456789" send "0123456789" send "0123456789" send "0123456789" send "0123456789" send "0123456789" send "0123456789" send "0123456789" delay 2 send "0123456789" } } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (bereq.url == "/2") { set beresp.do_gzip = true; } } } client c4 { # Open ended range to force C-L usage txreq -url /1 \ -hdr "Range: bytes=20-" \ -hdr "Connection: close" rxresp expect resp.status == 206 expect resp.http.Content-Range == "bytes 20-99/100" expect resp.http.Content-Length == 80 expect resp.bodylen == 80 expect_close } -run varnish v1 -vsl_catchup client c5 { # Closed C-L because we cannot use C-L txreq -url /2 \ -hdr "Range: bytes=2-5" \ -hdr "Accept-encoding: gzip" rxresp expect resp.status == 206 expect resp.http.Content-Range == "bytes 2-5/*" expect resp.http.Content-Length == expect resp.bodylen == 4 } -run # Test partial range with http2 server s1 { rxreq expect req.url == "/3" txresp -hdr "Content-length: 3" -body "BLA" } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -vcl+backend "" client c6 { stream 1 { txreq -url /3 -hdr "range" "bytes=0-1" rxresp expect resp.status == 206 expect resp.http.content-length == 2 expect resp.body == BL } -run } -run varnish v1 -vsl_catchup server s1 { rxreq txresp rxreq txresp -hdr "Accept-Ranges: foobar" } 
-start varnish v1 -vcl+backend { sub vcl_recv { return (pass); } sub vcl_deliver { set resp.http.foobar = resp.http.accept-ranges; } } client c7 { txreq rxresp expect resp.http.foobar == "" expect resp.http.accept-ranges == txreq rxresp expect resp.http.foobar == foobar expect resp.http.accept-ranges == foobar } -run server s1 { rxreq expect req.http.range == "bytes=10-19" txresp -status 206 -hdr "content-range: bytes 0-49/100" -bodylen 40 rxreq expect req.http.range == "bytes=10-19" txresp -status 206 -hdr "content-range: bytes 10-19/100" -bodylen 40 rxreq expect req.http.range == "bytes=90-119" txresp -status 416 -hdr "content-range: bytes */100" -bodylen 100 rxreq expect req.url == "/?unexpected=content-range" expect req.http.range == txresp -hdr "content-range: bytes 0-49/100" -bodylen 40 rxreq expect req.url == "/?unexpected=unsatisfied-range" expect req.http.range == txresp -hdr "content-range: bytes */100" -bodylen 100 rxreq expect req.url == "/?invalid=content-range" expect req.http.range == txresp -hdr "content-range: bytes=0-99/100" -bodylen 100 rxreq expect req.http.range == "bytes=0-0" txresp -hdr "content-range: bytes */*" -bodylen 100 } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.http.return == "pass") { return (pass); } } } client c8 { # range vs content-range vs content-length txreq -hdr "range: bytes=10-19" -hdr "return: pass" rxresp expect resp.status == 503 txreq -hdr "range: bytes=10-19" -hdr "return: pass" rxresp expect resp.status == 503 txreq -hdr "range: bytes=90-119" -hdr "return: pass" rxresp expect resp.status == 416 txreq -url "/?unexpected=content-range" rxresp expect resp.status == 503 txreq -url "/?unexpected=unsatisfied-range" rxresp expect resp.status == 503 txreq -url "/?invalid=content-range" rxresp expect resp.status == 503 txreq -hdr "range: bytes=0-0" -hdr "return: pass" rxresp expect resp.status == 503 } -run # range filter without a range header server s1 { rxreq txresp -bodylen 256 } -start varnish v1 -vcl+backend { sub vcl_deliver { set resp.filters = "range"; } } client c9 { txreq -url /5 rxresp expect resp.status == 200 expect resp.bodylen == 256 } -run varnish-7.5.0/bin/varnishtest/tests/c00035.vtc000066400000000000000000000011571457605730600210320ustar00rootroot00000000000000varnishtest "Dropping polling of a backend" server s1 -repeat 40 { non_fatal rxreq txresp } -start varnish v1 -vcl { probe default { .window = 8; .initial = 7; .threshold = 8; .interval = 0.1s; } backend s1 { .host = "${s1_addr}"; .port = "${s1_port}"; } } -start delay 1 varnish v1 -vcl+backend { } -cliok "vcl.use vcl2" -cliok "vcl.discard vcl1" delay 1 varnish v1 -cliok "vcl.list" varnish v1 -cliok "backend.list -p" server s1 -break { rxreq expect req.url == /foo txresp -bodylen 4 } -start client c1 { txreq -url /foo rxresp expect resp.status == 200 expect resp.bodylen == 4 } -run varnish-7.5.0/bin/varnishtest/tests/c00036.vtc000066400000000000000000000023721457605730600210330ustar00rootroot00000000000000varnishtest "Backend close retry" server s1 -repeat 1 { rxreq txresp -noserver -nodate -bodylen 5 rxreq accept rxreq txresp -noserver -nodate -bodylen 6 } -start varnish v1 -vcl+backend { sub vcl_recv { return(pass); } } -start logexpect l1 -v v1 -q "vxid == 1004" { expect * 1004 VCL_return {^fetch} expect 0 1004 Timestamp {^Fetch:} expect 0 1004 Timestamp {^Connected:} expect 0 1004 BackendOpen {^\d+ s1} expect 0 1004 Timestamp {^Bereq:} # purpose of this vtc: test the internal retry when the # backend goes away on a keepalive TCP connection: expect 0 1004 
FetchError {^HTC eof .Unexpected end of input.} expect 0 1004 BackendClose {^\d+ s1} expect 0 1004 Timestamp {^Connected:} expect 0 1004 BackendOpen {^\d+ s1} expect 0 1004 Timestamp {^Bereq:} expect 0 1004 BerespProtocol {^HTTP/1.1} expect 0 1004 BerespStatus {^200} expect 0 1004 BerespReason {^OK} expect 0 1004 BerespHeader {^Content-Length: 6} expect 0 1004 Timestamp {^Beresp:} } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 5 txreq rxresp expect resp.status == 200 expect resp.bodylen == 6 } -run varnish v1 -expect backend_retry == 1 logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/c00037.vtc000066400000000000000000000011551457605730600210320ustar00rootroot00000000000000varnishtest "Test req.hash_always_miss in vcl_recv" server s1 { rxreq txresp -hdr "Inc: 1" rxreq txresp -hdr "Inc: 2" } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.http.x-missit == "1") { set req.hash_always_miss = true; } } } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.Inc == "1" txreq -url "/" rxresp expect resp.status == 200 expect resp.http.Inc == "1" txreq -url "/" -hdr "x-missit: 1" rxresp expect resp.status == 200 expect resp.http.Inc == "2" txreq -url "/" rxresp expect resp.status == 200 expect resp.http.Inc == "2" } -run varnish-7.5.0/bin/varnishtest/tests/c00038.vtc000066400000000000000000000015301457605730600210300ustar00rootroot00000000000000varnishtest "Test req.hash_ignore_busy in vcl_recv" barrier b1 cond 2 server s1 { rxreq barrier b1 sync delay 1 txresp -hdr "Server: 1" } -start server s2 { rxreq txresp -hdr "Server: 2" } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.http.x-ignorebusy == "1") { set req.hash_ignore_busy = true; } } sub vcl_backend_fetch { if (bereq.http.x-client == "1") { set bereq.backend = s1; } if (bereq.http.x-client == "2") { set bereq.backend = s2; } } } -start client c1 { txreq -url "/" -hdr "x-client: 1" rxresp expect resp.status == 200 expect resp.http.Server == "1" } -start client c2 { barrier b1 sync txreq -url "/" -hdr "x-client: 2" -hdr "x-ignorebusy: 1" txreq -url "/" -hdr "x-client: 2" rxresp expect resp.status == 200 expect resp.http.Server == "2" } -start client c1 -wait client c2 -wait varnish-7.5.0/bin/varnishtest/tests/c00039.vtc000066400000000000000000000033361457605730600210370ustar00rootroot00000000000000varnishtest "request req and hdr length limits" server s1 { rxreq expect req.url == "/1" txresp -bodylen 5 rxreq expect req.url == "/2" txresp -bodylen 5 rxreq txresp -bodylen 5 } -start varnish v1 \ -vcl+backend { } -start varnish v1 -cliok "param.set http_req_size 256" varnish v1 -cliok "param.set http_req_hdr_len 40" client c1 { txreq -url "/1" -hdr "host: 127.0.0.1" -hdr "1...5: ..0....5....0....5....0....5....0" rxresp expect resp.status == 200 txreq -url "/1" -hdr "host: 127.0.0.1" -hdr "1...5....0....5....0....5....0....5....0." rxresp expect resp.status == 400 } -run client c1 { txreq -url "/2" -hdr "host: 127.0.0.1" -hdr "1...5: ..0....5\n ..0....5....0....5....0" rxresp expect resp.status == 200 txreq -url "/2" -hdr "host: 127.0.0.1" -hdr "1...5....0....5\n ..0....5....0....5....0." rxresp expect resp.status == 400 } -run client c1 { # Each line is 32 bytes. Total: 32 * 8 == 256 send "GET /..... 
HTTP/1.1\r\nHost: foo\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5...\r\n\r\n" rxresp expect resp.status == 200 # Each line is 32 except last, which is 33. Total: 32 * 7 + 33 == 257 send "GET /..... HTTP/1.1\r\nHost: foo\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....\r\n\r\n" expect_close } -run varnish-7.5.0/bin/varnishtest/tests/c00040.vtc000066400000000000000000000036361457605730600210320ustar00rootroot00000000000000varnishtest "request resp and hdr length limits" server s1 { rxreq expect req.url == "/1" txresp \ -hdr "1...5: ..0....5....0....5....0....5....0" \ -bodylen 1 rxreq expect req.url == "/2" txresp \ -hdr "1...5: ..0....5....0....5....0....5....0." \ -bodylen 2 accept rxreq expect req.url == "/3" txresp \ -hdr "1...5: ..0....5....0\n ..5....0....5....0" \ -bodylen 3 rxreq expect req.url == "/4" txresp \ -hdr "1...5: ..0....5....0\n ..5....0....5....0." \ -bodylen 4 accept rxreq expect req.url == "/5" # Each line is 32 bytes. Total: 32 * 8 == 256 send "HTTP/1.1 200 OK....0....5....0\r\n" send "Content-Length: 4\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5...\r\n\r\n" non_fatal send "asdf" rxreq expect req.url == "/6" # Each line is 32 except last, which is 33. 
Total: 32 * 7 + 33 == 257 send "HTTP/1.1 200 OK....0....5....0\r\n" send "Content-Length: 4\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....0\r\n" send "1...5: ..0....5....0....5....\r\n\r\n" non_fatal send "asdf" } -start varnish v1 \ -vcl+backend { } -start varnish v1 -cliok "param.set http_resp_size 256" varnish v1 -cliok "param.set http_resp_hdr_len 40" client c1 { txreq -url "/1" rxresp expect resp.status == 200 txreq -url "/2" rxresp expect resp.status == 503 } -run client c1 { txreq -url "/3" rxresp expect resp.status == 200 txreq -url "/4" rxresp expect resp.status == 503 } -run client c1 { txreq -url "/5" rxresp expect resp.status == 200 txreq -url "/6" rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/c00041.vtc000066400000000000000000000042721457605730600210300ustar00rootroot00000000000000varnishtest "test purging from vcl" server s1 { rxreq expect req.url == "/1" expect req.http.foo == "foo1" txresp -hdr "Vary: foo" -bodylen 1 rxreq expect req.url == "/1" expect req.http.foo == "foo2" txresp -hdr "Vary: foo" -bodylen 2 rxreq expect req.url == "/1" expect req.http.foo == "foo2" txresp -hdr "Vary: foo" -bodylen 12 rxreq expect req.url == "/1" expect req.http.foo == "foo1" txresp -hdr "Vary: foo" -bodylen 11 rxreq expect req.url == "/1" expect req.http.foo == "foo3" txresp -hdr "Vary: foo" -bodylen 23 rxreq expect req.url == "/1" expect req.http.foo == "foo1" txresp -hdr "Vary: foo" -bodylen 21 rxreq expect req.url == "/1" expect req.http.foo == "foo2" txresp -hdr "Vary: foo" -bodylen 22 } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.http.purge == "yes") { return(purge); } } sub vcl_purge { if (req.http.restart == "yes") { unset req.http.purge; unset req.http.restart; return(restart); } } } -start client c1 { txreq -url "/1" -hdr "foo: foo1" rxresp expect resp.status == 200 expect resp.http.x-varnish == 1001 expect resp.bodylen == 1 delay .1 txreq -url "/1" -hdr "Foo: foo2" rxresp expect resp.status == 200 expect resp.http.x-varnish == 1003 expect resp.bodylen == 2 delay .1 txreq -url "/1" -hdr "foo: foo1" rxresp expect resp.status == 200 expect resp.http.x-varnish == "1005 1002" expect resp.bodylen == 1 delay .1 txreq -url "/1" -hdr "Foo: foo2" rxresp expect resp.status == 200 expect resp.http.x-varnish == "1006 1004" expect resp.bodylen == 2 delay .1 # Purge on hit txreq -url "/1" -hdr "Foo: foo2" -hdr "purge: yes" -hdr "restart: yes" rxresp expect resp.status == 200 expect resp.bodylen == 12 delay .1 txreq -url "/1" -hdr "foo: foo1" rxresp expect resp.status == 200 expect resp.bodylen == 11 delay .1 # Purge on miss txreq -url "/1" -hdr "Foo: foo3" -hdr "purge: yes" -hdr "restart: yes" rxresp expect resp.status == 200 expect resp.bodylen == 23 delay .1 txreq -url "/1" -hdr "foo: foo1" rxresp expect resp.status == 200 expect resp.bodylen == 21 delay .1 txreq -url "/1" -hdr "Foo: foo2" rxresp expect resp.status == 200 expect resp.bodylen == 22 delay .1 } -run varnish-7.5.0/bin/varnishtest/tests/c00042.vtc000066400000000000000000000066521457605730600210350ustar00rootroot00000000000000varnishtest "Test vcl defined via backends" server s1 { loop 5 { rxreq txresp -hdr "Server: s1" } } -start server s2 { rxreq txresp -hdr "Server: s2" } -start # the use case for via-proxy is to have a(n ha)proxy make a (TLS) # connection on our behalf. 
For the purpose of testing, we use another # varnish in place - but we are behaving realistically in that we do # not use any prior information for the actual backend connection - # just the information from the proxy protocol varnish v2 -proto PROXY -vcl+backend { import std; import proxy; sub vcl_recv { if (server.ip == "${s1_addr}" && std.port(server.ip) == ${s1_port}) { set req.backend_hint = s1; } else if (server.ip == "${s2_addr}" && std.port(server.ip) == ${s2_port}) { set req.backend_hint = s2; } else { return (synth(404, "unknown backend")); } std.log("PROXY " + req.url + " -> " + req.backend_hint); return (pass); } sub vcl_deliver { set resp.http.Authority = proxy.authority(); } } -start varnish v2 -cliok "param.set debug +syncvsl" varnish v1 -vcl { backend v2 { .host = "${v2_addr}"; .port = "${v2_port}"; } backend s1 { .via = v2; .host = "${s1_addr}"; .port = "${s1_port}"; } backend s2 { .via = v2; .host = "${s2_addr}"; .port = "${s2_port}"; } sub vcl_recv { if (req.url ~ "^/s1/") { set req.backend_hint = s1; } else if (req.url ~ "^/s2/") { set req.backend_hint = s2; } else { return (synth(400)); } } } -start client c1 { txreq -url /s1/1 rxresp expect resp.status == 200 expect resp.http.Authority == "${s1_addr}" expect resp.http.Server == "s1" txreq -url /s2/1 rxresp expect resp.status == 200 expect resp.http.Authority == "${s2_addr}" expect resp.http.Server == "s2" txreq -url /s1/2 rxresp expect resp.status == 200 expect resp.http.Authority == "${s1_addr}" expect resp.http.Server == "s1" } -run varnish v1 -vcl { backend v2 { .host = "${v2_addr}"; .port = "${v2_port}"; } backend s1 { .via = v2; .host = "${s1_addr}"; .port = "${s1_port}"; .authority = "authority.com"; } sub vcl_recv { set req.backend_hint = s1; } } client c1 { txreq -url /s1/3 rxresp expect resp.status == 200 expect resp.http.Authority == "authority.com" } -run varnish v1 -vcl { backend v2 { .host = "${v2_addr}"; .port = "${v2_port}"; } backend s1 { .via = v2; .host = "${s1_addr}"; .port = "${s1_port}"; .host_header = "host.com"; } sub vcl_recv { set req.backend_hint = s1; } } client c1 { txreq -url /s1/4 rxresp expect resp.status == 200 expect resp.http.Authority == "host.com" } -run # Setting .authority = "" disables sending the TLV. varnish v1 -vcl { backend v2 { .host = "${v2_addr}"; .port = "${v2_port}"; } backend s1 { .via = v2; .host = "${s1_addr}"; .port = "${s1_port}"; .authority = ""; } sub vcl_recv { set req.backend_hint = s1; } } client c1 { txreq -url /s1/5 rxresp expect resp.status == 200 # vmod_proxy returns the empty string if the TLV is absent. 
expect resp.http.Authority == "" } -run varnish v1 -errvcl "Cannot set both .via and .path" { backend v2 { .host = "${v2_addr}"; .port = "${v2_port}"; } backend s1 { .via = v2; .path = "/path/to/uds"; } } varnish v1 -errvcl "Can not stack .via backends" { backend a { .host = "${v2_addr}"; .port = "${v2_port}"; } backend b { .via = a; .host = "127.0.0.1"; } backend c { .via = b; .host = "127.0.0.2"; } sub vcl_backend_fetch { set bereq.backend = c; } } varnish-7.5.0/bin/varnishtest/tests/c00043.vtc000066400000000000000000000012711457605730600210260ustar00rootroot00000000000000varnishtest "predictive vary" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq txresp -hdr "Vary: foo" -bodylen 1 rxreq barrier b2 sync barrier b1 sync txresp -hdr "Vary: foo" -bodylen 2 } -start server s2 { rxreq txresp -hdr "Vary: foo" -bodylen 3 } -start varnish v1 -vcl+backend { sub vcl_backend_fetch { if (bereq.http.bar) { set bereq.backend = s2; } } } -start client c1 { txreq -hdr "Foo: vary1" rxresp expect resp.bodylen == 1 txreq -hdr "Foo: vary2" rxresp expect resp.bodylen == 2 } -start client c2 { barrier b2 sync txreq -hdr "Foo: vary3" -hdr "bar: yes" rxresp barrier b1 sync expect resp.bodylen == 3 } -start client c1 -wait client c2 -wait varnish-7.5.0/bin/varnishtest/tests/c00044.vtc000066400000000000000000000037221457605730600210320ustar00rootroot00000000000000varnishtest "Object/LRU/Stevedores" server s1 { rxreq txresp -bodylen 1048290 rxreq txresp -bodylen 1048291 rxreq txresp -bodylen 1048292 rxreq txresp -bodylen 1047293 rxreq txresp -bodylen 1047294 } -start varnish v1 \ -arg "-ss1=default,1m" \ -arg "-ss2=default,1m" \ -arg "-ss0=default,1m" \ -arg "-sTransient=default" \ -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; # Unset Date header to not change the object sizes unset beresp.http.Date; } } -start client c1 { txreq -url /foo rxresp expect resp.status == 200 expect resp.bodylen == 1048290 } -run varnish v1 -expect SM?.Transient.g_bytes == 0 varnish v1 -expect SM?.s0.g_bytes == 0 varnish v1 -expect SM?.s0.g_space > 1000000 varnish v1 -expect SM?.s1.g_bytes > 1000000 varnish v1 -expect SM?.s1.g_space < 200 varnish v1 -expect SM?.s2.g_bytes == 0 varnish v1 -expect SM?.s2.g_space > 1000000 client c1 { txreq -url /bar rxresp expect resp.status == 200 expect resp.bodylen == 1048291 } -run varnish v1 -expect SM?.Transient.g_bytes == 0 varnish v1 -expect SM?.s0.g_bytes == 0 varnish v1 -expect SM?.s0.g_space > 1000000 varnish v1 -expect SM?.s1.g_bytes > 1000000 varnish v1 -expect SM?.s1.g_space < 200 varnish v1 -expect SM?.s2.g_bytes > 1000000 varnish v1 -expect SM?.s2.g_space < 200 client c1 { txreq -url /burp rxresp expect resp.status == 200 expect resp.bodylen == 1048292 } -run varnish v1 -expect SM?.Transient.g_bytes == 0 varnish v1 -expect SM?.s0.g_bytes > 1000000 varnish v1 -expect SM?.s0.g_space < 200 varnish v1 -expect SM?.s1.g_bytes > 1000000 varnish v1 -expect SM?.s1.g_space < 200 varnish v1 -expect SM?.s2.g_bytes > 1000000 varnish v1 -expect SM?.s2.g_space < 200 client c1 { txreq -url /foo1 rxresp expect resp.status == 200 expect resp.bodylen == 1047293 } -run varnish v1 -expect n_lru_nuked == 1 client c1 { txreq -url /foo rxresp expect resp.status == 200 expect resp.bodylen == 1047294 } -run varnish v1 -expect n_lru_nuked == 2 varnish-7.5.0/bin/varnishtest/tests/c00045.vtc000066400000000000000000000033631457605730600210340ustar00rootroot00000000000000varnishtest "Object/LRU/Stevedores with storage set" server s1 { rxreq txresp -bodylen 1048288 rxreq txresp -bodylen 
1047289 rxreq txresp -bodylen 1047290 } -start varnish v1 \ -arg "-sdefault,1m" \ -arg "-sdefault,1m" \ -arg "-sdefault,1m" \ -arg "-sTransient=default" \ -syntax 4.0 \ -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; set beresp.storage = storage.s0; # Unset Date header to not change the object sizes unset beresp.http.Date; } } -start client c1 { txreq -url /foo rxresp expect resp.status == 200 expect resp.bodylen == 1048288 } -run varnish v1 -expect SM?.Transient.g_bytes == 0 varnish v1 -expect SM?.s0.g_bytes > 1000000 varnish v1 -expect SM?.s0.g_space < 193 varnish v1 -expect SM?.s1.g_bytes == 0 varnish v1 -expect SM?.s1.g_space > 1000000 varnish v1 -expect SM?.s2.g_bytes == 0 varnish v1 -expect SM?.s2.g_space > 1000000 client c1 { txreq -url /bar rxresp expect resp.status == 200 expect resp.bodylen == 1047289 } -run varnish v1 -expect n_lru_nuked == 1 varnish v1 -expect SM?.Transient.g_bytes == 0 varnish v1 -expect SM?.s0.g_bytes > 1000000 varnish v1 -expect SM?.s0.g_space < 1192 varnish v1 -expect SM?.s1.g_bytes == 0 varnish v1 -expect SM?.s1.g_space > 1000000 varnish v1 -expect SM?.s2.g_bytes == 0 varnish v1 -expect SM?.s2.g_space > 1000000 client c1 { txreq -url /foo rxresp expect resp.status == 200 expect resp.bodylen == 1047290 } -run varnish v1 -expect n_lru_nuked == 2 varnish v1 -expect SM?.Transient.g_bytes == 0 varnish v1 -expect SM?.s0.g_bytes > 1000000 varnish v1 -expect SM?.s0.g_space < 1194 varnish v1 -expect SM?.s1.g_bytes == 0 varnish v1 -expect SM?.s1.g_space > 1000000 varnish v1 -expect SM?.s2.g_bytes == 0 varnish v1 -expect SM?.s2.g_space > 1000000 varnish-7.5.0/bin/varnishtest/tests/c00046.vtc000066400000000000000000000034761457605730600210420ustar00rootroot00000000000000varnishtest "Object/LRU/Stevedores with storage set and body alloc failures" server s1 { rxreq txresp -hdr "Connection: close" -bodylen 1000000 } -start varnish v1 \ -arg "-sdefault,1m" \ -arg "-sdefault,1m" \ -arg "-sdefault,1m" \ -arg "-sTransient=default" \ -syntax 4.0 \ -vcl+backend { sub vcl_backend_response { set beresp.storage = storage.s0; } } -start client c1 { txreq -url /foo rxresp expect resp.status == 200 expect resp.bodylen == 1000000 } -run varnish v1 -expect SM?.Transient.g_bytes == 0 varnish v1 -expect SM?.s0.g_bytes > 1000000 varnish v1 -expect SM?.s0.g_space < 100000 varnish v1 -expect SM?.s1.g_bytes == 0 varnish v1 -expect SM?.s1.g_space > 1000000 varnish v1 -expect SM?.s2.g_bytes == 0 varnish v1 -expect SM?.s2.g_space > 1000000 server s1 -wait { rxreq non_fatal txresp -hdr "Connection: close" -bodylen 1000001 } -start client c1 { txreq -url /bar rxresp expect resp.status == 200 expect resp.bodylen == 1000001 } -run varnish v1 -expect n_lru_nuked == 1 varnish v1 -expect SM?.Transient.g_bytes == 0 varnish v1 -expect SM?.s0.g_bytes > 1000000 varnish v1 -expect SM?.s0.g_space < 100000 varnish v1 -expect SM?.s1.g_bytes == 0 varnish v1 -expect SM?.s1.g_space > 1000000 varnish v1 -expect SM?.s2.g_bytes == 0 varnish v1 -expect SM?.s2.g_space > 1000000 server s1 -wait { rxreq # non_fatal txresp -hdr "Connection: close" -bodylen 1000002 } -start client c1 { txreq -url /foo rxresp expect resp.status == 200 expect resp.bodylen == 1000002 } -run varnish v1 -expect n_lru_nuked == 2 varnish v1 -expect SM?.Transient.g_bytes == 0 varnish v1 -expect SM?.s0.g_bytes > 1000000 varnish v1 -expect SM?.s0.g_space < 100000 varnish v1 -expect SM?.s1.g_bytes == 0 varnish v1 -expect SM?.s1.g_space > 1000000 varnish v1 -expect SM?.s2.g_bytes == 0 varnish v1 -expect SM?.s2.g_space > 
1000000 varnish-7.5.0/bin/varnishtest/tests/c00047.vtc000066400000000000000000000017061457605730600210350ustar00rootroot00000000000000varnishtest "Test VCL regsuball()" server s1 { rxreq txresp \ -hdr "foo: barbar" \ -hdr "bar: bbbar" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.http.baz1 = regsuball(beresp.http.foo, "barb", "zz"); set beresp.http.baz2 = regsuball(beresp.http.foo, "ar", "zz"); set beresp.http.baz3 = regsuball(beresp.http.foo, "^", "baz"); set beresp.http.baz4 = regsuball(beresp.http.foo, "^[;]*", "baz"); set beresp.http.baz5 = regsuball(beresp.http.bar, "^b*", "b"); set beresp.http.baz6 = regsuball(beresp.http.foo, "^b*", "z"); set beresp.http.baz7 = regsuball(beresp.http.foo, "ping", "pong"); } } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.baz1 == "zzar" expect resp.http.baz2 == "bzzbzz" expect resp.http.baz3 == "bazbarbar" expect resp.http.baz4 == "bazbarbar" expect resp.http.baz5 == "bar" expect resp.http.baz6 == "zarbar" expect resp.http.baz7 == "barbar" } -run varnish-7.5.0/bin/varnishtest/tests/c00048.vtc000066400000000000000000000031231457605730600210310ustar00rootroot00000000000000varnishtest "Forcing health of backends" barrier b1 cond 2 server s1 { # probe rxreq txresp # req accept rxreq txresp rxreq txresp -hdr "Connection: close" # probe sick accept rxreq txresp -status 500 barrier b1 sync accept # req rxreq txresp } -start varnish v1 -vcl { backend s1 { .host = "${s1_addr}"; .port = "${s1_port}"; .probe = { .window = 8; .initial = 7; .threshold = 8; .interval = 5s; } } sub vcl_recv { return(pass); } } -start varnish v1 -vsl_catchup varnish v1 -cliok "vcl.list" varnish v1 -cliok "backend.list -p" varnish v1 -cliok "backend.set_health s1 auto" varnish v1 -cliok "backend.list -p" client c1 { txreq rxresp expect resp.status == 200 } -run varnish v1 -vsl_catchup varnish v1 -cliok "backend.list" varnish v1 -cliok "backend.set_health s1 sick" varnish v1 -cliok "backend.list" client c1 { txreq rxresp expect resp.status == 503 } -run varnish v1 -vsl_catchup varnish v1 -cliok "backend.list" varnish v1 -cliok "backend.set_health s1 healthy" varnish v1 -cliok "backend.list" client c1 { txreq rxresp expect resp.status == 200 } -run # wait for sick probe barrier b1 sync # healthy overrides probe varnish v1 -cliok "backend.list" client c1 { txreq rxresp expect resp.status == 200 } -run varnish v1 -vsl_catchup varnish v1 -clierr 106 "backend.set_health s1 foo" varnish v1 -clierr 106 "backend.set_health s2 foo" varnish v1 -clierr 106 "backend.set_health s2 auto" varnish v1 -cliok "vcl.list" varnish v1 -cliok "backend.list *" varnish v1 -cliok "backend.list *.foo" varnish v1 -cliok "backend.list vcl1.*" varnish-7.5.0/bin/varnishtest/tests/c00049.vtc000066400000000000000000000131351457605730600210360ustar00rootroot00000000000000varnishtest "New ban-lurker test" server s1 { rxreq expect req.url == /1 txresp -hdr "Foo: bar1" rxreq expect req.url == /2 txresp -hdr "Foo: bar2" rxreq expect req.url == /3 txresp -hdr "Foo: bar3" rxreq expect req.url == /4 txresp -hdr "Foo: bar4" rxreq expect req.url == /5 txresp -hdr "Foo: bar5" rxreq expect req.url == /6 txresp -hdr "Foo: bar6" rxreq expect req.url == /7 txresp -hdr "Foo: bar7" rxreq expect req.url == /4 txresp -hdr "Foo: bar4.1" rxreq expect req.url == /r1 txresp rxreq expect req.url == /r2 txresp rxreq expect req.url == /r3 txresp } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok "param.set ban_lurker_age 0" varnish v1 -cliok "param.set ban_lurker_sleep 
0" varnish v1 -cliok "param.set debug +lurker" varnish v1 -cliok "param.set debug +syncvsl" client c1 { txreq -url /1 rxresp expect resp.http.foo == bar1 txreq -url /2 rxresp expect resp.http.foo == bar2 } -run varnish v1 -cliok "ban obj.http.foo == bar1" client c1 { txreq -url /3 rxresp expect resp.http.foo == bar3 } -run varnish v1 -cliok "ban obj.http.foo == bar2 && obj.http.foo != foof" client c1 { txreq -url /4 rxresp expect resp.http.foo == bar4 } -run varnish v1 -cliok "ban req.http.kill == yes" client c1 { txreq -url /5 rxresp expect resp.http.foo == bar5 } -run varnish v1 -cliok "ban obj.http.foo == bar5" client c1 { txreq -url /6 rxresp expect resp.http.foo == bar6 } -run varnish v1 -cliok "ban obj.http.foo == bar6" client c1 { txreq -url /7 rxresp expect resp.http.foo == bar7 } -run # Get the VSL out of the way delay 1 varnish v1 -cliok "ban.list" varnish v1 -expect bans == 6 varnish v1 -expect bans_completed == 1 varnish v1 -expect bans_req == 1 varnish v1 -expect bans_obj == 4 varnish v1 -expect bans_added == 6 varnish v1 -expect bans_deleted == 0 varnish v1 -expect bans_tested == 0 varnish v1 -expect bans_tests_tested == 0 varnish v1 -expect bans_obj_killed == 0 varnish v1 -expect bans_lurker_tested == 0 varnish v1 -expect bans_lurker_tests_tested == 0 varnish v1 -expect bans_lurker_obj_killed == 0 varnish v1 -expect bans_dups == 0 varnish v1 -cliok "param.set ban_lurker_sleep .01" delay 1 varnish v1 -cliok "ban.list" delay 1 varnish v1 -cliok "ban.list" varnish v1 -expect bans == 4 varnish v1 -expect bans_completed == 3 varnish v1 -expect bans_req == 1 varnish v1 -expect bans_obj == 3 varnish v1 -expect bans_added == 6 varnish v1 -expect bans_deleted == 2 varnish v1 -expect bans_tested == 0 varnish v1 -expect bans_tests_tested == 0 varnish v1 -expect bans_obj_killed == 0 varnish v1 -expect bans_lurker_tested == 10 varnish v1 -expect bans_lurker_tests_tested == 11 varnish v1 -expect bans_lurker_obj_killed == 4 varnish v1 -expect bans_dups == 0 client c1 { txreq -url /3 rxresp expect resp.http.foo == bar3 } -run # Give lurker time to trim tail delay 1 varnish v1 -cliok "ban.list" varnish v1 -expect bans == 4 varnish v1 -expect bans_completed == 3 varnish v1 -expect bans_req == 1 varnish v1 -expect bans_obj == 3 varnish v1 -expect bans_added == 6 varnish v1 -expect bans_deleted == 2 varnish v1 -expect bans_tested == 1 varnish v1 -expect bans_tests_tested == 1 varnish v1 -expect bans_obj_killed == 0 varnish v1 -expect bans_lurker_tested == 10 varnish v1 -expect bans_lurker_tests_tested == 11 varnish v1 -expect bans_lurker_obj_killed == 4 varnish v1 -expect bans_dups == 0 client c1 { txreq -url /4 -hdr "kill: yes" rxresp expect resp.http.foo == bar4.1 } -run # Give lurker time to trim tail delay 1 varnish v1 -cliok "ban.list" varnish v1 -expect bans == 1 varnish v1 -expect bans_completed == 1 varnish v1 -expect bans_req == 0 varnish v1 -expect bans_obj == 1 varnish v1 -expect bans_added == 6 varnish v1 -expect bans_deleted == 5 varnish v1 -expect bans_tested == 2 varnish v1 -expect bans_tests_tested == 2 varnish v1 -expect bans_obj_killed == 1 varnish v1 -expect bans_lurker_tested == 10 varnish v1 -expect bans_lurker_tests_tested == 11 varnish v1 -expect bans_lurker_obj_killed == 4 varnish v1 -expect bans_dups == 0 varnish v1 -expect n_object == 3 # adding more bans than the cutoff purges all untested objects # (here: all objects) varnish v1 -cliok "param.set ban_lurker_age 2" varnish v1 -cliok "param.set ban_cutoff 4" varnish v1 -cliok "ban.list" varnish v1 -cliok "ban 
obj.http.nomatch == 1" varnish v1 -cliok "ban obj.http.nomatch == 2" varnish v1 -cliok "ban obj.http.nomatch == 3" varnish v1 -cliok "ban obj.http.nomatch == 4" varnish v1 -cliok "ban obj.http.nomatch == 5" varnish v1 -cliok "ban.list" varnish v1 -cliok "param.set ban_lurker_age .1" delay 3 varnish v1 -clijson "ban.list -j" varnish v1 -cliok "ban.list" varnish v1 -expect bans == 1 varnish v1 -expect bans_completed == 1 varnish v1 -expect bans_req == 0 varnish v1 -expect bans_obj == 1 varnish v1 -expect bans_added == 11 varnish v1 -expect bans_deleted == 10 varnish v1 -expect bans_tested == 2 varnish v1 -expect bans_tests_tested == 2 varnish v1 -expect bans_obj_killed == 1 varnish v1 -expect bans_lurker_tested == 10 varnish v1 -expect bans_lurker_tests_tested == 11 varnish v1 -expect bans_lurker_obj_killed == 4 varnish v1 -expect bans_lurker_obj_killed_cutoff == 3 varnish v1 -expect bans_dups == 0 varnish v1 -expect n_object == 0 client c1 { txreq -url /r1 rxresp } -run varnish v1 -cliok "ban req.http.nevermatch == 1" client c1 { txreq -url /r2 rxresp } -run varnish v1 -cliok "ban req.http.nevermatch == 2" client c1 { txreq -url /r3 rxresp } -run varnish v1 -cliok "ban req.http.nevermatch == 3" varnish v1 -cliok "ban.list" varnish v1 -cliok "ban obj.http.status != 0" delay 1 varnish v1 -cliok "ban.list" varnish v1 -expect n_object == 0 varnish-7.5.0/bin/varnishtest/tests/c00050.vtc000066400000000000000000000010431457605730600210210ustar00rootroot00000000000000varnishtest "Memory pool gymnastics" server s1 { } -start varnish v1 -vcl+backend {} -start delay 2 varnish v1 -expect MEMPOOL.req0.pool == 10 varnish v1 -cliok "param.set pool_req 90,100,100" delay 2 varnish v1 -expect MEMPOOL.req0.pool == 90 varnish v1 -cliok "param.set pool_req 50,80,100" delay 2 varnish v1 -expect MEMPOOL.req0.pool == 80 varnish v1 -expect MEMPOOL.req0.surplus == 10 varnish v1 -cliok "param.set pool_req 10,80,1" delay 2 varnish v1 -expect MEMPOOL.req0.pool == 10 varnish v1 -expect MEMPOOL.req0.timeout == 70 varnish-7.5.0/bin/varnishtest/tests/c00051.vtc000066400000000000000000000002501457605730600210210ustar00rootroot00000000000000varnishtest "test parameter protection" varnish v1 -arg "-r cli_timeout" varnish v1 -cliok "param.show cli_timeout" varnish v1 -clierr 107 "param.set cli_timeout 1m" varnish-7.5.0/bin/varnishtest/tests/c00052.vtc000066400000000000000000000013071457605730600210260ustar00rootroot00000000000000varnishtest "Test disabling inline C code" server s1 { rxreq txresp } -start varnish v1 varnish v1 -cliok "param.set vcc_feature +allow_inline_c" varnish v1 -vcl+backend { C{ /*...*/ }C } varnish v1 -cliok "param.set vcc_feature -allow_inline_c" varnish v1 -errvcl {Inline-C not allowed} { backend default { .host = "${s1_sock}"; } C{ /*...*/ }C } varnish v1 -errvcl {Inline-C not allowed} { backend default { .host = "${s1_sock}"; } sub vcl_recv { C{ /*...*/ }C } } varnish v1 -cliok "param.set vcc_feature +allow_inline_c" varnish v1 -vcl+backend { sub vcl_recv { C{ /*...*/ }C } } varnish v1 -vcl+backend { C{ /*...*/ }C } varnish v1 -start client c1 { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/c00053.vtc000066400000000000000000000026151457605730600210320ustar00rootroot00000000000000varnishtest "Test include vs. 
unsafe_path and include glob-ing" server s1 { rxreq txresp } -start shell "echo > ${tmpdir}/_.c00053" varnish v1 -vcl+backend { include "${tmpdir}/_.c00053"; } varnish v1 -cliok "param.set vcc_feature -unsafe_path" varnish v1 -errvcl {' is unsafe} { backend default { .host = "${s1_sock}"; } include "${tmpdir}/_.c00053"; } varnish v1 -cliok "param.set vcl_path ${tmpdir}" varnish v1 -vcl+backend { include "_.c00053"; } shell "rm -f ${tmpdir}/_.c00053" # Testing of include +glob varnish v1 -cliok "param.set vcc_feature +unsafe_path" varnish v1 -errvcl "glob pattern matched no files." { vcl 4.0; include +glob "${tmpdir}/Q*.vcl"; } shell { echo 'sub vcl_deliver { set resp.http.foo = "foo"; }' > ${tmpdir}/sub_foo.vcl echo 'sub vcl_deliver { set resp.http.bar = "bar"; }' > ${tmpdir}/sub_bar.vcl echo 'vcl 4.0; backend default { .host = "0:0"; } include +glob "./sub_*.vcl";' > ${tmpdir}/top.vcl } varnish v1 -vcl+backend { include +glob "${tmpdir}/sub_*.vcl"; } -start client c1 { txreq rxresp expect resp.http.foo == foo expect resp.http.bar == bar } -run varnish v1 -errvcl {needs absolute filename of including file.} { include +glob "./sub_*.vcl"; backend default none; } varnish v1 -cliok "vcl.load foo ${tmpdir}/top.vcl" varnish v1 -cliok "vcl.use foo" client c1 { txreq rxresp expect resp.http.foo == foo expect resp.http.bar == bar } -run varnish-7.5.0/bin/varnishtest/tests/c00054.vtc000066400000000000000000000016531457605730600210340ustar00rootroot00000000000000varnishtest "bitmap params masking" varnish v1 -cliok "param.show vsl_mask" varnish v1 -cliok "param.set vsl_mask -VCL_trace" varnish v1 -cliok "param.show vsl_mask" varnish v1 -cliok "param.set vsl_mask -WorkThread,-TTL" varnish v1 -cliok "param.show vsl_mask" varnish v1 -cliok "param.set vsl_mask +WorkThread,+TTL,+Hash" varnish v1 -cliok "param.show vsl_mask" varnish v1 -cliexpect {"value": "none"} "param.set -j feature none" varnish v1 -cliexpect {"value": "all"} "param.set -j vsl_mask all" varnish v1 -cliexpect {"value": "all(,-\w+)+"} "param.reset -j vsl_mask" varnish v1 -clierr 106 "param.set vsl_mask FooBar" varnish v1 -clierr 106 "param.set vsl_mask -FooBar" varnish v1 -clierr 106 {param.set vsl_mask \"} varnish v1 -cliok "param.set debug +workspace" varnish v1 -cliok "param.show debug" varnish v1 -cliok "param.show feature" varnish v1 -cliok "param.set feature +short_panic" varnish v1 -cliok "param.show feature" varnish-7.5.0/bin/varnishtest/tests/c00055.vtc000066400000000000000000000023001457605730600210230ustar00rootroot00000000000000varnishtest "test caching of req.body" server s1 { rxreq expect req.bodylen == 3 txresp -hdr "Connection: close" -hdr "Foo: BAR" -body "1234" expect_close accept rxreq expect req.bodylen == 3 txresp -hdr "Foo: Foo" -body "56" } -start varnish v1 -vcl+backend { import std; sub vcl_recv { set req.http.stored = std.cache_req_body(1KB); return (pass); } sub vcl_deliver { if (resp.http.foo == "BAR") { return (restart); } set resp.http.stored = req.http.stored; } } -start varnish v1 -cliok "param.set debug +syncvsl" client c1 { txreq -req "POST" -body "FOO" rxresp expect resp.http.Foo == "Foo" expect resp.bodylen == 2 expect resp.http.stored == true } -run # check log for the aborted POST logexpect l1 -v v1 { expect * 1006 Begin expect * = FetchError "^straight insufficient bytes" } -start client c2 { txreq -req POST -hdr "Content-Length: 52" } -run logexpect l1 -wait delay .1 varnish v1 -expect MGT.child_died == 0 # no req body server s1 { rxreq txresp } -start client c4 { txreq rxresp expect resp.status 
== 200 expect resp.http.stored == true } -run # req body overflow client c5 { txreq -req POST -hdr "Content-Length: 1025" expect_close } -run varnish-7.5.0/bin/varnishtest/tests/c00056.vtc000066400000000000000000000014141457605730600210310ustar00rootroot00000000000000varnishtest "vcl_backend_response{} retry" server s1 { rxreq txresp -hdr "foo: 1" accept rxreq txresp -hdr "foo: 2" } -start varnish v1 -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_response { set beresp.http.bar = bereq.retries; if (beresp.http.foo != bereq.http.stop) { return (retry); } } } -start varnish v1 -cliok "param.set debug +syncvsl" client c1 { txreq -hdr "stop: 2" rxresp expect resp.http.foo == 2 } -run delay .1 server s1 { rxreq txresp -hdr "foo: 1" accept rxreq txresp -hdr "foo: 2" accept rxreq txresp -hdr "foo: 3" } -start varnish v1 -cliok "param.set max_retries 2" client c1 { txreq -hdr "stop: 3" rxresp expect resp.http.foo == 3 } -run # XXX: Add a test which exceeds max_retries and gets 503 back varnish-7.5.0/bin/varnishtest/tests/c00057.vtc000066400000000000000000000032241457605730600210330ustar00rootroot00000000000000varnishtest "test sigsegv handler" # Under ASAN, the stack layout is different and STACK OVERFLOW is # never printed. feature !asan server s1 { rxreq txresp } -start varnish v1 \ -arg "-p feature=+no_coredump" \ -arg "-p vcc_feature=+allow_inline_c" \ -arg "-p thread_pool_stack=128k" \ -vcl+backend { C{ #include #include #include static void _accessor(volatile char *p) { p[0] = 'V'; p[1] = '\0'; fprintf(stderr, "%p %s\n", p, p); } static void (*accessor)(volatile char *p) = _accessor; }C sub vcl_recv { C{ const int stkkb = 128; int i; volatile char overflow[stkkb * 1024]; /* for downwards stack, take care to hit a single guard page */ for (i = (stkkb - 1) * 1024; i >= 0; i -= 1024) accessor(overflow + i); /* NOTREACHED */ sleep(2); }C } } -start client c1 { txreq expect_close } -run delay 3 varnish v1 -cliexpect "STACK OVERFLOW" "panic.show" varnish v1 -clijson "panic.show -j" varnish v1 -cliok "panic.clear" # Also check without the handler varnish v1 -cliok "param.set sigsegv_handler off" varnish v1 -vcl+backend {} -start client c1 { txreq rxresp } -run varnish v1 -expectexit 0x40 #################### varnish v2 \ -arg "-p feature=+no_coredump" \ -arg "-p vcc_feature=+allow_inline_c" \ -vcl+backend { C{ #include }C sub vcl_recv { C{ int *i = (void *)VRT_GetHdr; *i = 42; }C } } -start client c2 -connect ${v2_sock} { txreq expect_close } -run delay 3 varnish v2 -cliexpect "[bB]us error|Segmentation [fF]ault" "panic.show" varnish v2 -clijson "panic.show -j" varnish v2 -cliok "panic.clear" varnish v2 -expectexit 0x40 varnish-7.5.0/bin/varnishtest/tests/c00058.vtc000066400000000000000000000013041457605730600210310ustar00rootroot00000000000000varnishtest "Test v4 grace" barrier b1 cond 2 server s1 { rxreq txresp -hdr "Last-Modified: Mon, 09 Feb 2015 09:32:47 GMT" -bodylen 3 rxreq txresp -bodylen 6 barrier b1 sync } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 0.5s; set beresp.grace = 10s; } } -start client c1 { txreq rxresp expect resp.bodylen == 3 delay 1 } -run varnish v1 -expect n_object == 1 client c1 { # We still get old object txreq rxresp expect resp.bodylen == 3 # But bg fetch was kicked off barrier b1 sync delay .5 # And now we get the new object txreq rxresp expect resp.bodylen == 6 } -run # and the old one has got superseded varnish v1 -expect n_object == 1 
varnish-7.5.0/bin/varnishtest/tests/c00059.vtc000066400000000000000000000064171457605730600210440ustar00rootroot00000000000000varnishtest "test ban obj.* except obj.http.*" # see c00021.vtc for obj.http.* tests server s1 { rxreq txresp -bodylen 1 rxreq txresp -bodylen 2 rxreq txresp -bodylen 3 rxreq txresp -bodylen 4 rxreq txresp -bodylen 5 rxreq txresp -bodylen 6 } -start varnish v1 -vcl+backend {} -start client c1 { txreq rxresp expect resp.bodylen == 1 } -run varnish v1 -cliok "ban obj.status == 201" client c1 { txreq rxresp expect resp.bodylen == 1 } -run varnish v1 -cliok "ban obj.status == 200" varnish v1 -cliok "ban.list" varnish v1 -clijson "ban.list -j" client c1 { txreq rxresp expect resp.bodylen == 2 } -run varnish v1 -cliok "ban obj.keep == 0s && obj.ttl > 1d" varnish v1 -cliok "ban obj.keep == 0s && obj.ttl > 1d" # BANS_FLAG_NODEDUP varnish v1 -cliexpect {(?s) obj\.ttl > 1d\b.*obj\.ttl > 1d\b} "ban.list" client c1 { txreq rxresp expect resp.bodylen == 2 } -run varnish v1 -cliok "ban obj.ttl <= 2m" client c1 { txreq rxresp expect resp.bodylen == 3 } -run varnish v1 -cliok "ban obj.age > 1d" varnish v1 -cliok "ban obj.age > 1d" # BANS_FLAG_NODEDUP varnish v1 -cliexpect {(?s) obj\.age > 1d\b.*obj\.age > 1d\b} "ban.list" client c1 { txreq rxresp expect resp.bodylen == 3 } -run varnish v1 -cliok "ban obj.age < 1m" client c1 { txreq rxresp expect resp.bodylen == 4 } -run varnish v1 -cliok "ban obj.grace != 10s" varnish v1 -cliok "ban obj.grace != 10s" # ! BANS_FLAG_NODEDUP varnish v1 -cliexpect {(?s) obj\.grace != 10s\b.* \d C\b} "ban.list" client c1 { txreq rxresp expect resp.bodylen == 4 } -run varnish v1 -cliok "ban obj.grace == 10s" client c1 { txreq rxresp expect resp.bodylen == 5 } -run varnish v1 -cliok "ban obj.keep != 0s" varnish v1 -cliok "ban obj.keep != 0s" # ! 
BANS_FLAG_NODEDUP varnish v1 -cliexpect {(?s) obj\.keep != 0s\b.* \d C\b} "ban.list" client c1 { txreq rxresp expect resp.bodylen == 5 } -run varnish v1 -cliok "ban obj.keep == 0s" client c1 { txreq rxresp expect resp.bodylen == 6 } -run # duration formatting - 0s is being tested above varnish v1 -cliok "ban obj.keep == 123ms" varnish v1 -cliexpect {(?s) obj\.keep == 123ms\b} "ban.list" varnish v1 -cliok "ban obj.keep == 0.456s" varnish v1 -cliexpect {(?s) obj\.keep == 456ms\b} "ban.list" varnish v1 -cliok "ban obj.keep == 6.789s" varnish v1 -cliexpect {(?s) obj\.keep == 6.789s\b} "ban.list" varnish v1 -cliok "ban obj.keep == 42y" varnish v1 -cliexpect {(?s) obj\.keep == 42y\b} "ban.list" varnish v1 -cliok "ban obj.keep == 365d" varnish v1 -cliexpect {(?s) obj\.keep == 1y\b} "ban.list" varnish v1 -cliok "ban obj.keep == 9w" varnish v1 -cliexpect {(?s) obj\.keep == 9w\b} "ban.list" varnish v1 -cliok "ban obj.keep == 7d" varnish v1 -cliexpect {(?s) obj\.keep == 1w\b} "ban.list" varnish v1 -cliok "ban obj.keep == 3d" varnish v1 -cliexpect {(?s) obj\.keep == 3d\b} "ban.list" varnish v1 -cliok "ban obj.keep == 24h" varnish v1 -cliexpect {(?s) obj\.keep == 1d\b} "ban.list" varnish v1 -cliok "ban obj.keep == 18h" varnish v1 -cliexpect {(?s) obj\.keep == 18h\b} "ban.list" varnish v1 -cliok "ban obj.keep == 1.5h" varnish v1 -cliexpect {(?s) obj\.keep == 90m\b} "ban.list" varnish v1 -cliok "ban obj.keep == 10m" varnish v1 -cliexpect {(?s) obj\.keep == 10m\b} "ban.list" varnish v1 -cliok "ban obj.keep == 0.5m" varnish v1 -cliexpect {(?s) obj\.keep == 30s\b} "ban.list" varnish-7.5.0/bin/varnishtest/tests/c00060.vtc000066400000000000000000000016721457605730600210320ustar00rootroot00000000000000varnishtest "Backend IMS header merging" server s1 { rxreq txresp -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -hdr "Foobar: foo" \ -hdr "Snafu: 1" \ -bodylen 13 rxreq expect req.http.if-modified-since == "Thu, 26 Jun 2008 12:00:01 GMT" txresp -status "304" \ -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -hdr "Snafu: 2" \ -hdr "Grifle: 3" \ -nolen } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.grace = 0s; set beresp.keep = 60s; set beresp.ttl = 1s; if (beresp.http.foobar == "foo") { set beresp.http.foobar = "foo0"; } if (beresp.http.snafu == "2") { set beresp.http.snafu = "2a"; } } } -start client c1 { txreq rxresp expect resp.bodylen == 13 expect resp.http.foobar == foo0 expect resp.http.snafu == 1 delay 3 txreq rxresp expect resp.bodylen == 13 expect resp.http.foobar == foo0 expect resp.http.grifle == 3 expect resp.http.snafu == 2a } -run varnish-7.5.0/bin/varnishtest/tests/c00061.vtc000066400000000000000000000007401457605730600210260ustar00rootroot00000000000000varnishtest "retry in vcl_backend_error" varnish v1 -vcl { backend b1 { .host = "${bad_backend}"; } sub vcl_backend_error { return (retry); } sub vcl_synth { set resp.status = 504; } } -start varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set connect_timeout 1" varnish v1 -cliok "param.set max_retries 2" client c1 { txreq rxresp expect resp.status == 504 } -run varnish v1 -expect backend_fail == 3 varnish v1 -expect fetch_failed == 3 varnish-7.5.0/bin/varnishtest/tests/c00062.vtc000066400000000000000000000007261457605730600210330ustar00rootroot00000000000000varnishtest "Check that aborted backend body aborts client in streaming mode" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq txresp -nolen -hdr "Transfer-encoding: chunked" chunked {} barrier b1 sync chunked {} barrier b2 sync } 
-start varnish v1 -cliok "param.set debug +syncvsl" -vcl+backend { } -start client c1 { txreq rxresphdrs expect resp.status == 200 rxchunk barrier b1 sync rxchunk barrier b2 sync expect_close } -run varnish-7.5.0/bin/varnishtest/tests/c00063.vtc000066400000000000000000000012151457605730600210260ustar00rootroot00000000000000varnishtest "cache backend synth object" varnish v1 -vcl { backend b { .host = "${bad_backend}"; } sub vcl_backend_error { set beresp.ttl = 1s; set beresp.grace = 3s; set beresp.http.foobar = "BLA" + bereq.xid; set beresp.body = beresp.http.foobar; return (deliver); } } -start varnish v1 -cliok "param.set connect_timeout 1.0" client c1 { txreq rxresp expect resp.http.foobar == "BLA1002" delay 2 txreq rxresp expect resp.http.foobar == "BLA1002" expect resp.body == "BLA1002" delay 3 txreq rxresp expect resp.http.foobar == "BLA1004" expect resp.body == "BLA1004" } -run delay 4 varnish v1 -expect n_objectcore == 0 varnish-7.5.0/bin/varnishtest/tests/c00064.vtc000066400000000000000000000010311457605730600210230ustar00rootroot00000000000000varnishtest "Connection: close in vcl_synth{}" server s1 { rxreq txresp -status 400 } -start varnish v1 -vcl+backend { sub vcl_miss { if (req.url == "/333") { return (synth(333, "FOO")); } else { return (synth(334, "BAR")); } } sub vcl_synth { if (resp.status == 333) { set resp.http.connection = "close"; } } } -start client c1 { txreq -url /334 rxresp expect resp.status == 334 txreq -url /334 rxresp expect resp.status == 334 txreq -url /333 rxresp expect resp.status == 333 expect_close } -run varnish-7.5.0/bin/varnishtest/tests/c00065.vtc000066400000000000000000000004311457605730600210270ustar00rootroot00000000000000varnishtest "Connection: close in vcl_deliver{}" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_deliver { set resp.http.Connection = "close"; } } -start client c1 { txreq rxresp expect_close } -run client c1 { txreq rxresp expect_close } -run varnish-7.5.0/bin/varnishtest/tests/c00066.vtc000066400000000000000000000021321457605730600210300ustar00rootroot00000000000000varnishtest "Check that we always deliver Date headers" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { backend bad { .host = "${bad_backend}"; } sub vcl_recv { if (req.url == "/synth") { return (synth(200, "Synth test")); } if (req.url == "/error") { set req.backend_hint = bad; } } sub vcl_backend_response { set beresp.do_stream = false; } } -start varnish v1 -cliok "param.set connect_timeout 1" logexpect l1 -v v1 -g request { expect 0 1001 Begin "^req .* rxreq" expect * = ReqURL "/" expect * = RespHeader "^Date: " expect * = End expect * 1003 Begin "^req .* rxreq" expect * = ReqURL "/synth" expect * = RespHeader "^Date: " expect * = End expect * 1004 Begin "^req .* rxreq" expect * = ReqURL "/error" expect * = RespHeader "^Date: " expect * = End } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.reason == "OK" delay .1 txreq -url "/synth" rxresp expect resp.status == 200 expect resp.reason == "Synth test" delay .1 txreq -url "/error" rxresp expect resp.status == 503 } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/c00067.vtc000066400000000000000000000036221457605730600210360ustar00rootroot00000000000000varnishtest "chunked req.body" server s1 { rxreq expect req.bodylen == 106 txresp -body "ABCD" rxreq expect req.url == "/2" expect req.bodylen == 109 txresp -body "ABCDE" rxreq expect req.url == "/3" expect req.bodylen == 110 txresp -body "ABCDEF" # the last request fails on the client side and 
# does not reach the backend } -start varnish v1 -vcl+backend { } -start varnish v1 -cliok "param.set debug +syncvsl" client c1 { txreq -req POST -nolen -hdr "Transfer-encoding: chunked" chunked {BLA} delay .2 chunkedlen 100 delay .2 chunked {FOO} delay .2 chunkedlen 0 rxresp expect resp.status == 200 expect resp.bodylen == 4 } -run delay .2 varnish v1 -vcl+backend { import std; sub vcl_recv { if (std.cache_req_body(110B)) { } } } logexpect l2 -v v1 -g vxid -q {ReqURL ~ "^/2"} { expect * * ReqUnset {^Transfer-encoding: chunked} expect 0 = ReqHeader {^Content-Length: 109} expect * = ReqAcct {^\d+ 109 } } -start logexpect l3 -v v1 -g vxid -q {ReqURL ~ "^/3"} { expect * * ReqUnset {^Transfer-encoding: chunked} expect 0 = ReqHeader {^Content-Length: 110} expect * = ReqAcct {^\d+ 110 } } -start logexpect l4 -v v1 -g vxid -q {ReqURL ~ "^/4"} { expect * * FetchError {^Request body too big to cache} expect * = ReqAcct {^\d+ 111 } } -start client c1 { txreq -url "/2" -req POST -nolen -hdr "Transfer-encoding: chunked" chunkedlen 50 delay .2 chunkedlen 59 delay .2 chunkedlen 0 rxresp expect resp.status == 200 expect resp.bodylen == 5 txreq -url "/3" -req POST -nolen -hdr "Transfer-encoding: chunked" chunkedlen 50 delay .2 chunkedlen 60 delay .2 chunkedlen 0 rxresp expect resp.status == 200 expect resp.bodylen == 6 txreq -url "/4" -req POST -nolen -hdr "Transfer-encoding: chunked" chunked {BLAST} delay .2 chunkedlen 106 expect_close } -run logexpect l2 -wait logexpect l3 -wait logexpect l4 -waitvarnish-7.5.0/bin/varnishtest/tests/c00068.vtc000066400000000000000000000024021457605730600210320ustar00rootroot00000000000000varnishtest "synth in deliver" server s1 { rxreq txresp -status 200 rxreq txresp -status 200 rxreq txresp -status 200 } -start varnish v1 -vcl+backend { import std; sub vcl_backend_response { if (bereq.url == "/333") { set beresp.status = 333; set beresp.reason = "FOO"; } } sub vcl_deliver { if (req.url == "/332") { return (synth(332, "F" + "OO" + std.tolower("FOO"))); } else if (req.url == "/333") { return (synth(resp.status + 1000, resp.reason)); } else { return (synth(334, "BAR")); } } sub vcl_synth { # internal response status >1000 will be taken modulo # 1000 when sent if (resp.status == 1333) { set resp.http.connection = "close"; } else if (resp.status == 332) { if (req.restarts == 0) { return (restart); } else { set resp.http.restarts = req.restarts; set resp.body = req.restarts; } } return (deliver); } } -start client c1 { txreq -url /334 rxresp expect resp.status == 334 # cache hit txreq -url /334 rxresp expect resp.status == 334 txreq -url /333 rxresp expect resp.status == 333 expect_close } -run client c2 { txreq -url /332 rxresp expect resp.status == 332 expect resp.reason == "FOOfoo" expect resp.http.restarts == 1 expect resp.bodylen == 1 } -run varnish-7.5.0/bin/varnishtest/tests/c00069.vtc000066400000000000000000000016221457605730600210360ustar00rootroot00000000000000varnishtest "Test resp.is_streaming" barrier b1 sock 2 barrier b2 sock 2 server s1 { rxreq txresp -nolen -hdr "Content-Length: 10" barrier b1 sync barrier b2 sync send "1234567890" } -start varnish v1 -vcl+backend { import vtc; sub vcl_recv { if (req.url == "/synth") { return(synth(200, "OK")); } } sub vcl_backend_response { vtc.barrier_sync("${b1_sock}"); return (deliver); } sub vcl_synth { set resp.http.streaming = resp.is_streaming; } sub vcl_deliver { set resp.http.streaming = resp.is_streaming; if (obj.hits == 0) { vtc.barrier_sync("${b2_sock}"); } } } -start logexpect l1 -v v1 -q "Begin ~ bereq" -i End { 
expect 0 1002 End } -start client c1 { txreq rxresp expect resp.http.streaming == true } -run logexpect l1 -wait client c2 { txreq rxresp expect resp.http.streaming == false txreq -url /synth rxresp expect resp.http.streaming == false } -run varnish-7.5.0/bin/varnishtest/tests/c00070.vtc000066400000000000000000000025001457605730600210220ustar00rootroot00000000000000varnishtest "Test workspace functions in vmod_vtc" server s1 { rxreq txresp rxreq txresp } -start varnish v1 -vcl+backend { import vtc; sub vcl_recv { if (req.url == "/overflow") { vtc.workspace_alloc(client, 100000000); } } sub vcl_backend_response { set beresp.http.free_backend = vtc.workspace_free(backend); } sub vcl_deliver { set resp.http.free_session = vtc.workspace_free(session); set resp.http.free_thread = vtc.workspace_free(thread); set resp.http.overflowed = vtc.workspace_overflowed(client); vtc.workspace_alloc(client, 2048); if (req.url == "/bar") { vtc.workspace_overflow(client); } } } -start logexpect l1 -v v1 -d 1 -g vxid -q "Error ~ 'overflow'" { expect 0 * Begin expect * = Error "workspace_client overflow" expect * = End } -start client c1 { txreq -url /foo rxresp expect resp.http.overflowed == "false" expect resp.http.free_backend > 9000 expect resp.http.free_session >= 200 expect resp.http.free_thread > 2000 } -run client c1 { txreq -url /bar rxresp expect resp.status == 500 } -run logexpect l1 -wait logexpect l2 -v v1 -d 1 -g vxid { expect * 1007 VCL_Error } -start client c1 { txreq -url /overflow rxresp expect resp.status == 503 } -run logexpect l2 -wait varnish v1 -expect client_resp_500 == 1 varnish v1 -expect ws_client_overflow == 2 varnish-7.5.0/bin/varnishtest/tests/c00071.vtc000066400000000000000000000027651457605730600210400ustar00rootroot00000000000000varnishtest "Test actual client workspace overflow" server s1 { rxreq txresp rxreq txresp rxreq txresp } -start varnish v1 -arg "-p debug=+workspace" -vcl+backend { import vtc; import std; sub vcl_deliver { vtc.workspace_alloc(client, -192); if (req.url ~ "/bar") { set resp.http.x-foo = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"; vtc.workspace_alloc(client, -10); } else if (req.url ~ "/baz") { set resp.http.x-foo = regsub(req.url, "baz", "b${string,repeat,91,a}z"); std.log("dummy"); std.syslog(8 + 7, "vtc: " + 1 + 2); } set resp.http.x-of = vtc.workspace_overflowed(client); } } -start client c1 { txreq -url /foo rxresp expect resp.status == 200 expect resp.http.x-of == "false" txreq -url /bar rxresp expect resp.status == 500 expect resp.http.x-of == } -run varnish v1 -vsl_catchup logexpect l1 -v v1 -g vxid -q "vxid == 1006" { expect * 1006 VCL_call {^DELIVER$} expect * = LostHeader {^x-foo:$} # std.log does not need workspace expect 0 = VCL_Log {^dummy$} expect * = Debug {^WS_Snapshot.* = overflowed} expect * = Debug {^WS_Reset.*, overflowed} # the workspace is overflowed, but still has space expect * = RespHeader {^x-of: true$} expect * = Error {^workspace_client overflow} } -start client c2 { txreq -url /baz rxresp expect resp.status == 500 expect resp.http.x-of == } -run logexpect l1 -wait varnish v1 -expect client_resp_500 == 2 varnish v1 -expect ws_client_overflow == 2 varnish-7.5.0/bin/varnishtest/tests/c00072.vtc000066400000000000000000000021141457605730600210250ustar00rootroot00000000000000varnishtest "purge stale on refresh with and without IMS" server s1 { rxreq expect req.url == /no-ims txresp -hdr "foo: bar" -body "1" rxreq expect req.url == /ims txresp -hdr "foo: bar" -hdr {ETag: "asdf"} -body "12" rxreq 
txresp -hdr "foo: baz" -body "123" rxreq expect req.http.if-none-match == {"asdf"} txresp -status 304 -nolen -hdr "foo: bazf" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 0.1s; set beresp.grace = 0s; set beresp.keep = 60s; } } -start client c1 { txreq -url /no-ims rxresp expect resp.bodylen == 1 expect resp.http.foo == "bar" txreq -url /ims rxresp expect resp.bodylen == 2 expect resp.http.foo == "bar" } -run # Wait for ttl to expire on both objects delay 0.2 varnish v1 -expect n_object == 2 client c1 { txreq -url /no-ims rxresp expect resp.bodylen == 3 expect resp.http.foo == "baz" txreq -url /ims rxresp expect resp.bodylen == 2 expect resp.http.foo == "bazf" } -run # Make sure expiry is done delay 1 # Only one of each of /no-ims and /ims should be left varnish v1 -expect n_object == 2 varnish-7.5.0/bin/varnishtest/tests/c00073.vtc000066400000000000000000000007101457605730600210260ustar00rootroot00000000000000varnishtest "Test object trimming" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq txresp -nolen -hdr "Transfer-encoding: chunked" delay .2 chunkedlen 4096 barrier b1 sync barrier b2 sync chunkedlen 0 } -start varnish v1 \ -arg "-s default,1m" -vcl+backend { } -start client c1 { txreq rxresp } -start barrier b1 sync varnish v1 -expect SM?.s0.g_bytes > 10000 barrier b2 sync client c1 -wait varnish v1 -expect SM?.s0.g_bytes < 6000 varnish-7.5.0/bin/varnishtest/tests/c00074.vtc000066400000000000000000000015511457605730600210330ustar00rootroot00000000000000varnishtest "Test WS_Reset off-by-one when workspace is full" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import std; import vtc; sub vcl_recv { set req.http.ws-free1 = vtc.workspace_free(session); vtc.workspace_alloc(session, std.integer(req.http.ws-free1, 0)); set req.http.ws-free2 = vtc.workspace_free(session); vtc.workspace_snapshot(session); vtc.workspace_reset(session); set req.http.ws-free3 = vtc.workspace_free(session); # Cover an obscure case in VRT_String() while here set req.http.bar = vtc.typesize( req.http.h1 + req.http.h2 + req.http.h3); } } -start logexpect l1 -v v1 -g request { expect * * ReqHeader {^ws-free1:} expect 0 = ReqHeader {^ws-free2: 0$} expect 0 = ReqHeader {^ws-free3: 0$} } -start client c1 { txreq -url /foo -hdr "h3: dd" rxresp expect resp.status == 200 } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/c00075.vtc000066400000000000000000000066521457605730600210430ustar00rootroot00000000000000varnishtest "Test large pass deleted during streaming" barrier ba1 cond 3 barrier ba2 cond 2 barrier bb1 cond 3 barrier bb2 cond 2 barrier bc1 cond 3 barrier bc2 cond 2 barrier bd1 cond 3 barrier bd2 cond 2 server s1 { rxreq expect req.url == "/hfm" txresp -nolen -hdr "Transfer-Encoding: chunked" \ -hdr "Cache-Control: no-cache" chunkedlen 32768 barrier ba1 sync chunkedlen 32768 chunkedlen 0 rxreq expect req.url == "/hfm" txresp -nolen -hdr "Transfer-Encoding: chunked" \ -hdr "Cache-Control: no-cache" chunkedlen 32768 barrier bb1 sync chunkedlen 32768 chunkedlen 0 rxreq expect req.url == "/hfp" txresp -nolen -hdr "Transfer-Encoding: chunked" \ -hdr "Cache-Control: no-cache" chunkedlen 32768 barrier bc1 sync chunkedlen 32768 chunkedlen 0 rxreq expect req.url == "/hfp" txresp -nolen -hdr "Transfer-Encoding: chunked" \ -hdr "Cache-Control: no-cache" chunkedlen 32768 barrier bd1 sync chunkedlen 32768 chunkedlen 0 } -start varnish v1 \ -arg "-s default,1m" -vcl+backend { sub vcl_backend_response { set beresp.http.be-hitmiss = bereq.is_hitmiss; set 
beresp.http.be-hitpass = bereq.is_hitpass; if (bereq.url == "/hfp") { return (pass(10m)); } } sub vcl_deliver { set resp.http.is-hitmiss = req.is_hitmiss; set resp.http.is-hitpass = req.is_hitpass; } } -start varnish v1 -cliok "debug.fragfetch 8192" client c1 { txreq -url "/hfm" rxresphdrs rxrespbody -max 8192 barrier ba1 sync barrier ba2 sync rxrespbody expect resp.bodylen == 65536 expect resp.http.is-hitmiss == false expect resp.http.is-hitpass == false expect resp.http.be-hitmiss == resp.http.is-hitmiss expect resp.http.be-hitpass == resp.http.is-hitpass expect_pattern } -start barrier ba1 sync varnish v1 -expect SM?.Transient.g_bytes < 24576 barrier ba2 sync client c1 -wait # HFM object varnish v1 -expect SM?.Transient.g_bytes < 500 # pass on the HFM client c1 { txreq -url "/hfm" rxresphdrs rxrespbody -max 8192 barrier bb1 sync barrier bb2 sync rxrespbody expect resp.bodylen == 65536 expect resp.http.is-hitmiss == true expect resp.http.is-hitpass == false expect resp.http.be-hitmiss == resp.http.is-hitmiss expect resp.http.be-hitpass == resp.http.is-hitpass expect_pattern } -start barrier bb1 sync varnish v1 -expect SM?.Transient.g_bytes < 24576 barrier bb2 sync client c1 -wait # HFM object varnish v1 -expect SM?.Transient.g_bytes < 500 ##### hfp client c1 { txreq -url "/hfp" rxresphdrs rxrespbody -max 8192 barrier bc1 sync barrier bc2 sync rxrespbody expect resp.bodylen == 65536 expect resp.http.is-hitmiss == false expect resp.http.is-hitpass == false expect resp.http.be-hitmiss == resp.http.is-hitmiss expect resp.http.be-hitpass == resp.http.is-hitpass expect_pattern } -start barrier bc1 sync varnish v1 -expect SM?.Transient.g_bytes < 24576 barrier bc2 sync client c1 -wait # HFM + HFP object varnish v1 -expect SM?.Transient.g_bytes < 1000 # pass on the HFP client c1 { txreq -url "/hfp" rxresphdrs rxrespbody -max 8192 barrier bd1 sync barrier bd2 sync rxrespbody expect resp.bodylen == 65536 expect resp.http.is-hitmiss == false expect resp.http.is-hitpass == true expect resp.http.be-hitmiss == resp.http.is-hitmiss expect resp.http.be-hitpass == resp.http.is-hitpass expect_pattern } -start barrier bd1 sync varnish v1 -expect SM?.Transient.g_bytes < 24576 barrier bd2 sync client c1 -wait # HFM + HFP object varnish v1 -expect SM?.Transient.g_bytes < 1000 varnish-7.5.0/bin/varnishtest/tests/c00076.vtc000066400000000000000000000032751457605730600210420ustar00rootroot00000000000000varnishtest "Complex son-of-hit-for-miss test" server s1 { rxreq txresp -hdr "Connection: close" -hdr "Set-Cookie: c1" -bodylen 1 } -start varnish v1 -vcl+backend {} -start client c1 { txreq rxresp expect resp.bodylen == 1 delay .1 } -run server s1 { rxreq expect req.http.foo == xyz txresp -hdr "Set-Cookie: c2" -hdr "Vary: foo" -bodylen 2 rxreq expect req.http.foo == abc txresp -hdr "Connection: close" -hdr "Set-Cookie: c2" -hdr "Vary: foo" -bodylen 3 } -start client c1 { txreq -hdr "foo: xyz" rxresp expect resp.bodylen == 2 delay .1 } -run client c1 { txreq -hdr "foo: abc" rxresp expect resp.bodylen == 3 delay .1 } -run server s1 { rxreq expect req.http.foo == 123 txresp -hdr "Connection: close" -hdr "Vary: foo" -bodylen 4 } -start client c1 { txreq -hdr "foo: 123" rxresp expect resp.bodylen == 4 delay .1 } -run client c1 { # Cache hit txreq -hdr "foo: 123" rxresp expect resp.bodylen == 4 delay .1 } -run server s1 { rxreq expect req.http.foo == 987 txresp -hdr "Connection: close" -hdr "Vary: foo" -bodylen 5 } -start client c1 { txreq -hdr "foo: 987" rxresp expect resp.bodylen == 5 delay .1 } -run client c1 { # 
Cache hit txreq -hdr "foo: 123" rxresp expect resp.bodylen == 4 delay .1 } -run server s1 { rxreq expect req.http.foo == 000 txresp -hdr "Connection: close" -bodylen 6 } -start client c1 { txreq -hdr "foo: 000" rxresp expect resp.bodylen == 6 delay .1 txreq -hdr "foo: 123" rxresp expect resp.bodylen == 6 delay .1 txreq -hdr "foo: abc" rxresp expect resp.bodylen == 6 delay .1 txreq -hdr "foo: 987" rxresp expect resp.bodylen == 6 delay .1 txreq -hdr "foo: xyz" rxresp expect resp.bodylen == 6 delay .1 } -run varnish-7.5.0/bin/varnishtest/tests/c00077.vtc000066400000000000000000000026421457605730600210400ustar00rootroot00000000000000varnishtest "Switching VCL from VCL" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_deliver { set resp.http.vcl = "vclA"; } } -start varnish v1 -clierr 106 "vcl.label vcl.A vcl1" varnish v1 -cliok "vcl.label vclA vcl1" # labeling twice #2834 varnish v1 -cliok "vcl.label vclA vcl1" varnish v1 -vcl+backend { sub vcl_recv { if (req.http.vcl == "vcl1") { return (vcl(vclA)); } } sub vcl_deliver { set resp.http.vcl = "vcl2"; } } varnish v1 -cliok "vcl.label vclB vcl2" varnish v1 -cliok "vcl.list" client c1 { txreq rxresp expect resp.http.vcl == vcl2 txreq -hdr "vcl: vcl1" rxresp expect resp.http.vcl == vclA } -run varnish v1 -clierr 300 "vcl.discard vcl1" varnish v1 -vcl+backend { sub vcl_recv { return (vcl(vclB)); } } client c1 { txreq -hdr "vcl: vcl1" rxresp expect resp.status == 503 } -run delay .2 varnish v1 -vcl+backend { import std; sub vcl_recv { return (vcl(vclA)); } } varnish v1 -vcl+backend { import debug; sub vcl_recv { return (vcl(vclA)); } } varnish v1 -vcl+backend { sub vcl_recv { return (vcl(vclA)); } } varnish v1 -vcl+backend { sub vcl_recv { return (vcl(vclA)); } } varnish v1 -clierr 300 "vcl.discard vclA" varnish v1 -vcl+backend { } varnish v1 -clierr 106 "vcl.label vclA vcl3" varnish v1 -cliok "vcl.symtab" varnish v1 -clierr 300 "vcl.discard vcl*" varnish v1 -clierr 300 "vcl.discard vcl[1-7]" varnish v1 -cliok "vcl.discard vcl[1-7] vcl[A-B]" varnish-7.5.0/bin/varnishtest/tests/c00078.vtc000066400000000000000000000031151457605730600210350ustar00rootroot00000000000000varnishtest "Stevedores RR, beresp.storage" server s1 -repeat 7 { rxreq txresp } -start varnish v1 \ -arg "-ss1=default,1m" \ -arg "-ss2=default,1m" \ -arg "-ss0=default,1m" \ -syntax 4.0 \ -vcl+backend { import vtc; sub vcl_backend_response { if (bereq.url == "/2") { set beresp.storage = storage.s1; } else if (bereq.url == "/6") { set beresp.storage = vtc.no_stevedore(); } else if (bereq.url == "/deprecated") { set beresp.storage_hint = "s1"; } set beresp.http.storage = beresp.storage; set beresp.http.storage-hint = beresp.storage_hint; } } -start client c1 { txreq -url /1 rxresp expect resp.http.storage == "storage.s1" expect resp.http.storage == resp.http.storage-hint txreq -url /2 rxresp expect resp.http.storage == "storage.s1" expect resp.http.storage == resp.http.storage-hint txreq -url /3 rxresp expect resp.http.storage == "storage.s0" expect resp.http.storage == resp.http.storage-hint txreq -url /4 rxresp expect resp.http.storage == "storage.s1" expect resp.http.storage == resp.http.storage-hint txreq -url /5 rxresp expect resp.http.storage == "storage.s2" expect resp.http.storage == resp.http.storage-hint txreq -url /6 rxresp expect resp.http.storage == expect resp.http.storage == resp.http.storage-hint txreq -url /deprecated rxresp expect resp.http.storage == "storage.s1" expect resp.http.storage == resp.http.storage-hint } -run varnish v1 \ -syntax 4.1 \ 
-errvcl "Only available when VCL syntax <= 4.0" { import vtc; sub vcl_backend_response { set beresp.storage_hint = "foo"; } } varnish-7.5.0/bin/varnishtest/tests/c00079.vtc000066400000000000000000000013321457605730600210350ustar00rootroot00000000000000varnishtest "thread_pool_reserve max adjustment" server s1 { } -start varnish v1 \ -arg "-p thread_pool_min=10" \ -arg "-p thread_pool_max=100" \ -arg "-p thread_pools=1" \ -arg "-p thread_pool_timeout=10" \ -vcl+backend {} varnish v1 -start varnish v1 -cliok "param.set thread_pool_reserve 0" varnish v1 -cliok "param.set thread_pool_reserve 1" varnish v1 -cliok "param.set thread_pool_reserve 9" varnish v1 -clierr 106 "param.set thread_pool_reserve 10" varnish v1 -cliok "param.set thread_pool_min 100" varnish v1 -cliok "param.set thread_pool_reserve 0" varnish v1 -cliok "param.set thread_pool_reserve 1" varnish v1 -cliok "param.set thread_pool_reserve 95" varnish v1 -clierr 106 "param.set thread_pool_reserve 96" varnish-7.5.0/bin/varnishtest/tests/c00080.vtc000066400000000000000000000016661457605730600210370ustar00rootroot00000000000000varnishtest "Deconfigure thread pool" # First with default waiter server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok "param.set experimental +drop_pools" varnish v1 -cliok "param.set debug +slow_acceptor" varnish v1 -cliok "param.set thread_pools 1" delay 2 client c1 -repeat 2 { txreq rxresp } -run delay 2 varnish v1 -vsc *thr* varnish v1 -vsc *poo* varnish v1 -expect MAIN.pools == 1 client c1 { txreq rxresp } -run # Then with poll waiter server s1 { rxreq txresp } -start varnish v2 -arg "-Wpoll" -vcl+backend {} -start varnish v2 -cliok "param.set experimental +drop_pools" varnish v2 -cliok "param.set debug +slow_acceptor" varnish v2 -cliok "param.set thread_pools 1" delay 2 client c2 -connect ${v2_sock} -repeat 2 { txreq rxresp } -run delay 2 varnish v2 -vsc *thr* varnish v2 -vsc *poo* varnish v2 -expect MAIN.pools == 1 client c2 -connect ${v2_sock} { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/c00081.vtc000066400000000000000000000015071457605730600210320ustar00rootroot00000000000000varnishtest "Hit-for-pass (mk II)" server s1 { rxreq txresp -hdr "foo: 1" rxreq txresp -hdr "foo: 2" rxreq txresp -hdr "foo: 3" } -start varnish v1 -vcl+backend { sub vcl_miss { set req.http.miss = true; } sub vcl_pass { set req.http.hitpass = req.is_hitpass; } sub vcl_backend_response { return (pass(2s)); } sub vcl_deliver { set resp.http.miss = req.http.miss; set resp.http.hitpass = req.http.hitpass; } } -start logexpect l1 -v v1 -g vxid { expect * 1003 HitPass "^1002 1" } -start client c1 { txreq rxresp expect resp.http.miss == true txreq rxresp expect resp.http.hitpass == true delay 3 txreq rxresp expect resp.http.miss == true } -run logexpect l1 -wait varnish v1 -expect MAIN.cache_hitpass == 1 varnish v1 -expect MAIN.cache_miss == 2 varnish v1 -expect MAIN.cache_hitmiss == 0 varnish-7.5.0/bin/varnishtest/tests/c00082.vtc000066400000000000000000000023221457605730600210270ustar00rootroot00000000000000varnishtest "hash_always_miss overrides hit-for-pass" server s1 { rxreq expect req.http.Uncacheable == false txresp -hdr "Hit-Pass: forever" rxreq expect req.http.Uncacheable == true txresp rxreq expect req.http.Uncacheable == false txresp } -start varnish v1 -vcl+backend { sub vcl_recv { set req.hash_always_miss = (req.http.Hash == "always-miss"); } sub vcl_backend_fetch { set bereq.http.Uncacheable = bereq.uncacheable; } sub vcl_backend_response { if (beresp.http.Hit-Pass == "forever") 
{ return (pass(1y)); } } sub vcl_deliver { set resp.http.Obj-Hits = obj.hits; } } -start logexpect l1 -v v1 -g request { expect * 1001 VCL_return lookup expect * 1001 VCL_call MISS expect * 1002 TTL "^HFP 31536000 " expect * 1003 VCL_return lookup expect * 1003 VCL_call PASS expect * 1005 VCL_return lookup expect * 1005 VCL_call MISS expect * 1007 VCL_return lookup expect * 1007 VCL_call HIT } -start client c1 { txreq rxresp expect resp.http.Obj-Hits == 0 delay .1 txreq rxresp expect resp.http.Obj-Hits == 0 delay .1 txreq -hdr "Hash: always-miss" rxresp expect resp.http.Obj-Hits == 0 delay .1 txreq rxresp expect resp.http.Obj-Hits == 1 } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/c00083.vtc000066400000000000000000000067051457605730600210410ustar00rootroot00000000000000varnishtest "Test VSM _.index rewrite when too many deletes" varnish v1 -vcl { backend default { .host = "${bad_ip}"; } } -start delay 1 process p1 { nlines=`wc -l < ${tmpdir}/v1/_.vsm_child/_.index` nminus=`grep -c '^-' ${tmpdir}/v1/_.vsm_child/_.index` echo CHILD NLINES $nlines NMINUS $nminus } -dump -run # Useful for debugging #process p2 {tail -F ${tmpdir}/v1/_.vsm_child/_.index} -dump -start #process p3 {tail -F ${tmpdir}/v1/_.vsm_mgt/_.index} -dump -start varnish v1 -vcl { backend b00 { .host = "${bad_ip}"; } backend b01 { .host = "${bad_ip}"; } backend b02 { .host = "${bad_ip}"; } backend b03 { .host = "${bad_ip}"; } sub vcl_recv { set req.backend_hint = b00; set req.backend_hint = b01; set req.backend_hint = b02; set req.backend_hint = b03; } } varnish v1 -cliok vcl.list process p1 -run varnish v1 -cliok "vcl.use vcl1" varnish v1 -cliok "vcl.discard vcl2" delay 1 process p1 -run # The child process starts out with approx 37 VSM segments # and spent 10 lines on vcl2, so it takes ~ 15 backends to # cause a _.index rewrite. # Make it 20 to be on the safe side. 
varnish v1 -vcl { backend b00 { .host = "${bad_ip}"; } backend b01 { .host = "${bad_ip}"; } backend b02 { .host = "${bad_ip}"; } backend b03 { .host = "${bad_ip}"; } backend b04 { .host = "${bad_ip}"; } backend b05 { .host = "${bad_ip}"; } backend b06 { .host = "${bad_ip}"; } backend b07 { .host = "${bad_ip}"; } backend b08 { .host = "${bad_ip}"; } backend b09 { .host = "${bad_ip}"; } backend b10 { .host = "${bad_ip}"; } backend b11 { .host = "${bad_ip}"; } backend b12 { .host = "${bad_ip}"; } backend b13 { .host = "${bad_ip}"; } backend b14 { .host = "${bad_ip}"; } backend b15 { .host = "${bad_ip}"; } backend b16 { .host = "${bad_ip}"; } backend b17 { .host = "${bad_ip}"; } backend b18 { .host = "${bad_ip}"; } backend b19 { .host = "${bad_ip}"; } sub vcl_recv { set req.backend_hint = b00; set req.backend_hint = b01; set req.backend_hint = b02; set req.backend_hint = b03; set req.backend_hint = b04; set req.backend_hint = b05; set req.backend_hint = b06; set req.backend_hint = b07; set req.backend_hint = b08; set req.backend_hint = b09; set req.backend_hint = b10; set req.backend_hint = b11; set req.backend_hint = b12; set req.backend_hint = b13; set req.backend_hint = b14; set req.backend_hint = b15; set req.backend_hint = b16; set req.backend_hint = b17; set req.backend_hint = b18; set req.backend_hint = b19; } } varnish v1 -cliok vcl.list varnish v1 -cliok backend.list delay 1 process p1 -run varnish v1 -cliok "vcl.use vcl1" varnish v1 -cliok "vcl.discard vcl3" delay 1 # Check that the _.index rewrite did happen process p1 { nlines=`wc -l < ${tmpdir}/v1/_.vsm_child/_.index` nminus=`grep -c '^-' ${tmpdir}/v1/_.vsm_child/_.index` echo CHILD NLINES $nlines NMINUS $nminus # cat ${tmpdir}/v1/_.vsm_child/_.index test $nminus -lt 20 } -run # Now check the management process VSM process p1 { nlines=`wc -l < ${tmpdir}/v1/_.vsm_mgt/_.index` nminus=`grep -c '^-' ${tmpdir}/v1/_.vsm_mgt/_.index` echo MGT NLINES $nlines NMINUS $nminus # cat ${tmpdir}/v1/_.vsm_mgt/_.index test $nminus -eq 0 } -run varnish v1 -cliok "stop" delay 1 process p1 { nlines=`wc -l < ${tmpdir}/v1/_.vsm_mgt/_.index` nminus=`grep -c '^-' ${tmpdir}/v1/_.vsm_mgt/_.index` echo MGT NLINES $nlines NMINUS $nminus # cat ${tmpdir}/v1/_.vsm_mgt/_.index test $nminus -eq 2 } -run varnish v1 -cliok "start" delay 1 process p1 -run varnish-7.5.0/bin/varnishtest/tests/c00084.vtc000066400000000000000000000012151457605730600210310ustar00rootroot00000000000000varnishtest "legal symbol names" varnish v1 \ -syntax 4.0 \ -arg "-s my-store=default" \ -vcl { import directors; acl my-acl { "${localhost}"; } probe my-pb { } backend my-be { .host = "${bad_backend}"; .probe = my-pb; } sub vcl_init { new my-dir = directors.round_robin(); my-dir.add_backend(my-be); } sub vcl_recv { call my-sub; } sub my-sub { if (client.ip ~ my-acl) { } set req.storage = storage.my-store; set req.backend_hint = my-dir.backend(); } } -start varnish v1 -cli "vcl.label my-label vcl1" varnish v1 -vcl { backend dummy { .host = "${bad_backend}"; } sub vcl_recv { return (vcl(my-label)); } } varnish-7.5.0/bin/varnishtest/tests/c00085.vtc000066400000000000000000000011051457605730600210300ustar00rootroot00000000000000varnishtest "relaxed date parsing" server s1 { rxreq txresp \ -hdr "Date: Wed, 31 Jan 2018 14:26:02 GMT" \ -hdr "Last-Modified: Thu, 4 Jan 2018 06:26:29 GMT" \ -hdr "Expires: Fri, 2 Mar 2018 14:26:02 GMT" } -start varnish v1 -arg "-p feature=+http_date_postel" -vcl+backend { import std; sub vcl_backend_response { set beresp.http.ttl = beresp.ttl; set beresp.http.lm = 
std.time(beresp.http.last-modified, now); } } -start client c1 { txreq rxresp expect resp.http.ttl == "2592000.000" expect resp.http.lm == "Thu, 04 Jan 2018 06:26:29 GMT" } -run varnish-7.5.0/bin/varnishtest/tests/c00086.vtc000066400000000000000000000037431457605730600210430ustar00rootroot00000000000000varnishtest "-a sub-args user, group and mode; and warn if stat(EACCES) fails for UDS" feature user_vcache feature group_varnish feature root shell -err -expect "Too many user sub-args" { varnishd -a ${tmpdir}/vtc.sock,user=vcache,user=vcache -d } shell -err -expect "Too many group sub-args" { varnishd -a ${tmpdir}/vtc.sock,group=varnish,group=varnish -d } # Assuming that empty user and group names always fail getpwnam and getgrnam shell -err -expect "Unknown user " { varnishd -a ${tmpdir}/vtc.sock,user= -d } shell -err -expect "Unknown group " { varnishd -a ${tmpdir}/vtc.sock,group= -d } server s1 {} -start varnish v1 -arg "-a ${tmpdir}/v1.sock,user=vcache,group=varnish,mode=660" \ -vcl+backend {} shell -match "rw-rw----.+vcache.+varnish" { ls -l ${tmpdir}/v1.sock } varnish v2 -arg "-a ${tmpdir}/v2.sock,user=vcache,mode=600" -vcl+backend {} shell -match "rw-------.+vcache" { ls -l ${tmpdir}/v2.sock } varnish v3 -arg "-a ${tmpdir}/v3.sock,group=varnish,mode=660" -vcl+backend {} shell -match "rw---- .+root.+varnish" { ls -l ${tmpdir}/v3.sock } varnish v4 -arg "-a ${tmpdir}/v4.sock,mode=666" -vcl+backend {} shell -match "rw-rw-rw-.+root" { ls -l ${tmpdir}/v4.sock } varnish v5 -arg "-a ${tmpdir}/v5.sock,user=vcache,group=varnish" -vcl+backend {} shell -match "vcache.+varnish" { ls -l ${tmpdir}/v5.sock } varnish v6 -arg "-a ${tmpdir}/v6.sock,user=vcache" -vcl+backend {} shell -match "vcache" { ls -l ${tmpdir}/v6.sock } varnish v7 -arg "-a ${tmpdir}/v7.sock,group=varnish" -vcl+backend {} shell -match "root.+varnish" { ls -l ${tmpdir}/v7.sock } # VCC warns, but does not fail, if stat(UDS) fails with EACCES. 
shell { mkdir ${tmpdir}/dir } server s2 -listen ${tmpdir}/dir/s1.sock {} -start shell { chmod go-rx ${tmpdir}/dir } varnish v8 \ -jail "-junix,user=varnish,ccgroup=varnish,workuser=vcache" \ -vcl { backend b None; } varnish v8 -cliexpect "(?s)Cannot stat:.+That was just a warning" \ {vcl.inline test "vcl 4.1; backend b {.path=\"${tmpdir}/dir/s1.sock\";}"} varnish-7.5.0/bin/varnishtest/tests/c00087.vtc000066400000000000000000000060751457605730600210450ustar00rootroot00000000000000varnishtest "VCL *.ip vars as bogo-IPs when -a is UDS, and test ACL matches" server s1 -listen "${tmpdir}/s1.sock" { rxreq txresp } -start varnish v1 -syntax 4.1 -arg "-a foo=${tmpdir}/v1.sock" -vcl+backend { acl acl1 +log { "${localhost}"; } sub vcl_backend_response { set beresp.http.b-client = client.ip; set beresp.http.b-server = server.ip; set beresp.http.b-local = local.ip; set beresp.http.b-remote = remote.ip; set beresp.http.b-compare = local.ip == remote.ip; set beresp.http.b-endpoint = local.endpoint; set beresp.http.b-socket = local.socket; } sub vcl_deliver { set resp.http.c-client = client.ip; set resp.http.c-server = server.ip; set resp.http.c-local = local.ip; set resp.http.c-remote = remote.ip; set resp.http.c-compare = local.ip == remote.ip; set resp.http.client_acl = client.ip ~ acl1; set resp.http.server_acl = server.ip ~ acl1; set resp.http.local_acl = local.ip ~ acl1; set resp.http.remote_acl = remote.ip ~ acl1; set resp.http.c-endpoint = local.endpoint; set resp.http.c-socket = local.socket; } } -start client c1 -connect "${tmpdir}/v1.sock" { txreq rxresp expect resp.status == 200 expect resp.http.c-client == "0.0.0.0" expect resp.http.c-server == "0.0.0.0" expect resp.http.c-local == "0.0.0.0" expect resp.http.c-remote == "0.0.0.0" expect resp.http.c-compare == "true" expect resp.http.c-endpoint == "${tmpdir}/v1.sock" expect resp.http.c-socket == "foo" expect resp.http.b-client == "0.0.0.0" expect resp.http.b-server == "0.0.0.0" expect resp.http.b-local == "0.0.0.0" expect resp.http.b-remote == "0.0.0.0" expect resp.http.b-compare == "true" expect resp.http.b-endpoint == "${tmpdir}/v1.sock" expect resp.http.b-socket == "foo" expect resp.http.client_acl == "false" expect resp.http.server_acl == "false" expect resp.http.local_acl == "false" expect resp.http.remote_acl == "false" } -run logexpect l1 -v v1 -d 1 -g vxid -q "VCL_acl" { expect 0 * Begin req expect * = VCL_acl "^NO_MATCH acl1$" expect * = VCL_acl "^NO_MATCH acl1$" expect * = VCL_acl "^NO_MATCH acl1$" expect * = VCL_acl "^NO_MATCH acl1$" expect * = End } -run varnish v1 -vcl { backend b None; acl acl1 { "0.0.0.0"; } sub vcl_recv { return(synth(200)); } sub vcl_synth { set resp.http.client = client.ip ~ acl1; set resp.http.server = server.ip ~ acl1; set resp.http.local = local.ip ~ acl1; set resp.http.remote = remote.ip ~ acl1; } } client c1 -connect "${tmpdir}/v1.sock" { txreq -url "foo" rxresp expect resp.http.client == "true" expect resp.http.server == "true" expect resp.http.local == "true" expect resp.http.remote == "true" } -run varnish v1 -errvcl {.../mask is not numeric.} { backend b None; acl acl1 { "${tmpdir}/v1.sock"; } } # client.ip == 0.0.0.0 appears in X-Forwarded-For server s1 { rxreq expect req.http.X-Forwarded-For == "0.0.0.0" txresp rxreq expect req.http.X-Forwarded-For == "1.2.3.4, 0.0.0.0" txresp } -start varnish v1 -vcl+backend {} client c1 -connect "${tmpdir}/v1.sock" { txreq -url /1 rxresp txreq -url /2 -hdr "X-forwarded-for: 1.2.3.4" rxresp } -run 
varnish-7.5.0/bin/varnishtest/tests/c00088.vtc000066400000000000000000000020341457605730600210350ustar00rootroot00000000000000varnishtest "Change UDS backend: change path, drop poll" server s1 -listen "${tmpdir}/s1.sock" { non_fatal timeout 3 loop 40 { rxreq txresp accept } } -start server s2 -listen "${tmpdir}/s2.sock" { non_fatal timeout 3 loop 40 { rxreq txresp accept } } -start varnish v1 -vcl { probe default { .window = 8; .initial = 7; .threshold = 8; .interval = 0.1s; } backend s1 { .path = "${s2_sock}"; } } -start delay 1 varnish v1 -vcl { probe default { .window = 8; .initial = 7; .threshold = 8; .interval = 0.1s; } backend s1 { .path = "${s1_sock}"; } } -cliok "vcl.use vcl2" -cliok "vcl.discard vcl1" delay 1 varnish v1 -vcl { backend s1 { .path = "${s1_sock}"; } } -cliok "vcl.use vcl3" -cliok "vcl.discard vcl2" delay 1 varnish v1 -cliok "vcl.list" varnish v1 -cliok "backend.list -p" server s1 -break { rxreq expect req.url == /foo txresp -bodylen 4 } -start delay 1 client c1 { txreq -url /foo rxresp txreq -url /foo rxresp expect resp.status == 200 expect resp.bodylen == 4 } -run varnish-7.5.0/bin/varnishtest/tests/c00089.vtc000066400000000000000000000006721457605730600210440ustar00rootroot00000000000000varnishtest "Backend close retry with a UDS" server s1 -listen "${tmpdir}/s1.sock" -repeat 1 { rxreq txresp -bodylen 5 rxreq accept rxreq txresp -bodylen 6 } -start varnish v1 -vcl+backend { sub vcl_recv { return(pass); } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 5 txreq rxresp expect resp.status == 200 expect resp.bodylen == 6 } -run varnish v1 -expect backend_retry == 1 varnish-7.5.0/bin/varnishtest/tests/c00090.vtc000066400000000000000000000022621457605730600210310ustar00rootroot00000000000000varnishtest "Forcing health of backends listening at UDS" server s1 -listen "${tmpdir}/s1.sock" -repeat 3 { rxreq txresp } -start varnish v1 -vcl { backend s1 { .path = "${s1_sock}"; .probe = { .window = 8; .initial = 7; .threshold = 8; .interval = 10s; } } sub vcl_recv { return(pass); } } -start delay 1 varnish v1 -cliok "vcl.list" varnish v1 -cliok "backend.list -p" varnish v1 -cliok "backend.set_health s1 auto" varnish v1 -cliok "backend.list -p" client c1 { txreq rxresp expect resp.status == 200 } -run varnish v1 -cliok "backend.list" varnish v1 -cliok "backend.set_health s1 sick" varnish v1 -cliok "backend.list" client c1 { txreq rxresp expect resp.status == 503 } -run varnish v1 -cliok "backend.list" varnish v1 -cliok "backend.set_health s1 healthy" varnish v1 -cliok "backend.list" client c1 { txreq rxresp expect resp.status == 200 } -run varnish v1 -clierr 106 "backend.set_health s1 foo" varnish v1 -clierr 106 "backend.set_health s2 foo" varnish v1 -clierr 106 "backend.set_health s2 auto" varnish v1 -cliok "vcl.list" varnish v1 -cliok "backend.list *" varnish v1 -cliok "backend.list *.foo" varnish v1 -cliok "backend.list vcl1.*" varnish-7.5.0/bin/varnishtest/tests/c00091.vtc000066400000000000000000000014161457605730600210320ustar00rootroot00000000000000varnishtest "vcl_backend_response{} retry with a UDS backend" server s0 -listen "${tmpdir}/s1.sock" { rxreq txresp -hdr "connection: close" } -dispatch varnish v1 -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_response { set beresp.http.retries = bereq.retries; if (bereq.http.stop != beresp.http.retries) { return (retry); } } } -start client c1 { txreq -hdr "stop: 2" rxresp expect resp.status == 200 expect resp.http.retries == 2 } -run varnish v1 -cliok "param.set max_retries 2" 
client c2 { txreq -hdr "stop: 2" rxresp expect resp.status == 200 expect resp.http.retries == 2 } -run client c3 { txreq -hdr "stop: 3" rxresp expect resp.status == 503 expect resp.http.retries == } -run varnish v1 -expect backend_conn == 9 varnish-7.5.0/bin/varnishtest/tests/c00092.vtc000066400000000000000000000006321457605730600210320ustar00rootroot00000000000000varnishtest "Check aborted backend body with a backend listening at UDS" barrier b1 cond 2 server s1 -listen "${tmpdir}/s1.sock" { rxreq txresp -nolen -hdr "Transfer-encoding: chunked" chunked {} barrier b1 sync } -start varnish v1 -cliok "param.set debug +syncvsl" -vcl+backend { } -start client c1 { txreq rxresphdrs expect resp.status == 200 rxchunk barrier b1 sync expect_close } -run varnish-7.5.0/bin/varnishtest/tests/c00093.vtc000066400000000000000000000014371457605730600210370ustar00rootroot00000000000000varnishtest "Test resp.is_streaming with a UDS backend" # Same as c00069 without the synth case barrier b1 sock 2 barrier b2 sock 2 server s1 -listen "${tmpdir}/s1.sock" { rxreq txresp -nolen -hdr "Content-Length: 10" barrier b1 sync barrier b2 sync send "1234567890" } -start varnish v1 -vcl+backend { import vtc; sub vcl_backend_response { vtc.barrier_sync("${b1_sock}"); return (deliver); } sub vcl_deliver { set resp.http.streaming = resp.is_streaming; if (obj.hits == 0) { vtc.barrier_sync("${b2_sock}"); } } } -start logexpect l1 -v v1 -q "Begin ~ bereq" -i End { expect 0 1002 End } -start client c1 { txreq rxresp expect resp.http.streaming == true } -run logexpect l1 -wait delay .5 client c2 { txreq rxresp expect resp.http.streaming == false } -run varnish-7.5.0/bin/varnishtest/tests/c00094.vtc000066400000000000000000000016511457605730600210360ustar00rootroot00000000000000varnishtest "Test Backend Polling with a backend listening at a UDS" barrier b1 cond 2 server s1 -listen "${tmpdir}/s1.sock" { timeout 8 fatal # Probes loop 8 { rxreq expect req.url == "/" txresp -hdr "Bar: foo" -body "foobar" accept } loop 3 { rxreq expect req.url == "/" txresp -status 404 -hdr "Bar: foo" -body "foobar" accept } loop 2 { rxreq expect req.url == "/" txresp -proto "FROBOZ" -status 200 -hdr "Bar: foo" -body "foobar" accept } loop 2 { rxreq expect req.url == "/" send "HTTP/1.1 200 \r\n" accept } barrier b1 sync } -start varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -vcl { backend foo { .path = "${s1_sock}"; .probe = { .timeout = 7 s; .interval = 0.5 s; } } } -start barrier b1 sync varnish v1 -cliexpect "^CLI RX| -+U+-{0,5} Good UNIX" "backend.list -p" varnish v1 -cliexpect "^CLI RX| -+H{10}-{5}H{2}-{0,5} Happy" "backend.list -p" varnish-7.5.0/bin/varnishtest/tests/c00095.vtc000066400000000000000000000026731457605730600210440ustar00rootroot00000000000000varnishtest "vcl_keep and vmod_so_keep debug bits" feature topbuild server s1 { } -start varnish v1 -vcl+backend { } -start # Test valid and invalid VCL with vcl_keep unset varnish v1 -cliok "param.set debug -vcl_keep" varnish v1 -vcl+backend { } shell -err "test -f ./v1/vcl_vcl2.*/vgc.c" varnish v1 -errvcl {No backends or directors found} { } shell -err "test -f ./v1/vcl_vcl3.*/vgc.c" # Same but with vcl_keep set varnish v1 -cliok "param.set debug +vcl_keep" varnish v1 -vcl+backend { } shell { test -f ./v1/vcl_vcl4.*/vgc.c && test -f ./v1/vcl_vcl4.*/vgc.so } varnish v1 -errvcl {No backends or directors found} { } shell { test -f ./v1/vcl_vcl5.*/vgc.c && test -f ./v1/vcl_vcl5.*/vgc.so } # Test vmod with vmod_so_keep set varnish v1 -cliok "param.set debug +vmod_so_keep" varnish v1 
-vcl+backend { import std; } shell "test -f ./v1/vmod_cache/_vmod_std.*" varnish v1 -stop varnish v1 -cleanup # Ensure these are not deleted on exit shell { test -f ./v1/vcl_vcl4.*/vgc.c && test -f ./v1/vcl_vcl4.*/vgc.so && test -f ./v1/vcl_vcl5.*/vgc.c && test -f ./v1/vcl_vcl5.*/vgc.so && test -f ./v1/vmod_cache/_vmod_std.* } varnish v2 -vcl+backend { } -start # And test vmod with vmod_so_keep unset varnish v2 -cliok "param.set debug -vmod_so_keep" varnish v2 -vcl+backend { import std; } shell "test -f ./v2/vmod_cache/_vmod_std.*" varnish v2 -stop varnish v2 -cleanup shell -err "test -f ./v2/vmod_cache/_vmod_std.*" varnish-7.5.0/bin/varnishtest/tests/c00096.vtc000066400000000000000000000043041457605730600210360ustar00rootroot00000000000000varnishtest "Test thread creation on acceptor thread queuing" # This tests that we are able to spawn new threads in the event that the # cache acceptor has been queued. It does this by starting 6 long lasting # fetches, which will consume 12 threads. That exceeds the initial # allotment of 10 threads, giving some probability that the acceptor # thread is queued. Then a single quick fetch is done, which should be # served since we are well below the maximum number of threads allowed. # Barrier b1 blocks the slow servers from finishing until the quick fetch # is done. barrier b1 cond 7 # Barrier b2 blocks the start of the quick fetch until all slow fetches # are known to hold captive two threads each. barrier b2 cond 7 server s0 { rxreq txresp -nolen -hdr "Content-Length: 10" -hdr "Connection: close" send "123" barrier b1 sync send "4567890" expect_close } -dispatch server stest { rxreq txresp -body "All good" } -start varnish v1 -arg "-p debug=+syncvsl -p debug=+flush_head" varnish v1 -arg "-p thread_pools=1 -p thread_pool_min=10" varnish v1 -arg "-p thread_pool_add_delay=0.01" varnish v1 -vcl+backend { sub vcl_backend_fetch { if (bereq.url == "/test") { set bereq.backend = stest; } else { set bereq.backend = s0; } } } -start # XXX: to minimize the amount of synchronization in the pool_herder # logic, it may over-breed threads, resulting in more than 10 # threads. This is currently mitigated by increasing the # thread_pool_add_delay parameter. varnish v1 -expect MAIN.threads == 10 client c1 { txreq -url /1 rxresphdrs barrier b2 sync rxrespbody } -start client c2 { txreq -url /2 rxresphdrs barrier b2 sync rxrespbody } -start client c3 { txreq -url /3 rxresphdrs barrier b2 sync rxrespbody } -start client c4 { txreq -url /4 rxresphdrs barrier b2 sync rxrespbody } -start client c5 { txreq -url /5 rxresphdrs barrier b2 sync rxrespbody } -start client c6 { txreq -url /6 rxresphdrs barrier b2 sync rxrespbody } -start client ctest { barrier b2 sync txreq -url "/test" rxresp expect resp.status == 200 expect resp.body == "All good" } -run barrier b1 sync client c1 -wait client c2 -wait client c3 -wait client c4 -wait client c5 -wait client c6 -wait varnish-7.5.0/bin/varnishtest/tests/c00097.vtc000066400000000000000000000023141457605730600210360ustar00rootroot00000000000000varnishtest "Streaming delivery and waitinglist rushing" # Barrier to make sure that c1 connects to s1 barrier b1 cond 2 # Barrier to make sure that all requests are on waitinglist before # HSH_Unbusy is called barrier b2 cond 2 # Barrier to control that all requests start streaming before the object # finishes. This tests that waitinglists are rushed before # HSH_DerefObjCore(). 
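# b3 is a socket barrier (not cond) so that it can also be synced from VCL
# via vtc.barrier_sync("${b3_sock}") in vcl_hit; the three cache hits plus
# server s1 account for its four parties.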
barrier b3 sock 4 server s1 { rxreq barrier b1 sync barrier b2 sync txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 barrier b3 sync chunkedlen 10 chunkedlen 0 } -start varnish v1 -arg "-p thread_pools=1" -arg "-p thread_pool_min=20" -arg "-p rush_exponent=2" -arg "-p debug=+syncvsl" -arg "-p debug=+waitinglist" -vcl+backend { import vtc; sub vcl_hit { vtc.barrier_sync("${b3_sock}"); } } -start client c1 { txreq rxresp } -start barrier b1 sync client c2 { txreq rxresp } -start client c3 { txreq rxresp } -start client c4 { txreq rxresp } -start # Wait until c2-c4 are on the waitinglist varnish v1 -vsl_catchup varnish v1 -expect busy_sleep == 3 # Open up the response headers from s1, and as a result HSH_Unbusy barrier b2 sync client c1 -wait client c2 -wait client c3 -wait client c4 -wait varnish-7.5.0/bin/varnishtest/tests/c00098.vtc000066400000000000000000000047201457605730600210420ustar00rootroot00000000000000varnishtest "Hit-for-pass and waitinglist rushing" # Barrier to make sure that s1 is run first barrier b1 cond 2 # Barrier to make sure that all requests are on waitinglist before # HSH_Unbusy is called barrier b2 cond 2 # Barrier to control that all backends are reached before any request # finishes. This tests that waitinglists are rushed before # HSH_DerefObjCore(). barrier b3 cond 6 server s1 { rxreq barrier b1 sync barrier b2 sync txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 barrier b3 sync chunkedlen 10 chunkedlen 0 } -start server s2 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 barrier b3 sync chunkedlen 10 chunkedlen 0 } -start server s3 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 barrier b3 sync chunkedlen 10 chunkedlen 0 } -start server s4 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 barrier b3 sync chunkedlen 10 chunkedlen 0 } -start server s5 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 barrier b3 sync chunkedlen 10 chunkedlen 0 } -start server s6 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 barrier b3 sync chunkedlen 10 chunkedlen 0 } -start varnish v1 -arg "-p thread_pools=1" -arg "-p thread_pool_min=30" -arg "-p rush_exponent=2" -arg "-p debug=+syncvsl" -arg "-p debug=+waitinglist" -vcl+backend { sub vcl_backend_fetch { if (bereq.http.user-agent == "c1") { set bereq.backend = s1; } else if (bereq.http.user-agent == "c2") { set bereq.backend = s2; } else if (bereq.http.user-agent == "c3") { set bereq.backend = s3; } else if (bereq.http.user-agent == "c4") { set bereq.backend = s4; } else if (bereq.http.user-agent == "c5") { set bereq.backend = s5; } else if (bereq.http.user-agent == "c6") { set bereq.backend = s6; } } sub vcl_backend_response { return (pass(1m)); } } -start client c1 { txreq rxresp } -start # This makes sure that c1->s1 is done first barrier b1 sync client c2 { txreq rxresp } -start client c3 { txreq rxresp } -start client c4 { txreq rxresp } -start client c5 { txreq rxresp } -start client c6 { txreq rxresp } -start # Wait until c2-c6 are on the waitinglist varnish v1 -vsl_catchup varnish v1 -expect busy_sleep == 5 # Open up the response headers from s1, and as a result HSH_Unbusy barrier b2 sync client c1 -wait client c2 -wait client c3 -wait client c4 -wait client c5 -wait client c6 -wait varnish-7.5.0/bin/varnishtest/tests/c00099.vtc000066400000000000000000000047341457605730600210500ustar00rootroot00000000000000varnishtest "Hit-for-miss and waitinglist rushing" # Barrier to make sure that s1 is 
run first barrier b1 cond 2 # Barrier to make sure that all requests are on waitinglist before # HSH_Unbusy is called barrier b2 cond 2 # Barrier to control that all backends are reached before any request # finishes. This tests that waitinglists are rushed before # HSH_DerefObjCore(). barrier b3 cond 6 server s1 { rxreq barrier b1 sync barrier b2 sync txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 barrier b3 sync chunkedlen 10 chunkedlen 0 } -start server s2 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 barrier b3 sync chunkedlen 10 chunkedlen 0 } -start server s3 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 barrier b3 sync chunkedlen 10 chunkedlen 0 } -start server s4 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 barrier b3 sync chunkedlen 10 chunkedlen 0 } -start server s5 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 barrier b3 sync chunkedlen 10 chunkedlen 0 } -start server s6 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 barrier b3 sync chunkedlen 10 chunkedlen 0 } -start varnish v1 -arg "-p thread_pools=1" -arg "-p thread_pool_min=30" -arg "-p rush_exponent=2" -arg "-p debug=+syncvsl" -arg "-p debug=+waitinglist" -vcl+backend { sub vcl_backend_fetch { if (bereq.http.user-agent == "c1") { set bereq.backend = s1; } else if (bereq.http.user-agent == "c2") { set bereq.backend = s2; } else if (bereq.http.user-agent == "c3") { set bereq.backend = s3; } else if (bereq.http.user-agent == "c4") { set bereq.backend = s4; } else if (bereq.http.user-agent == "c5") { set bereq.backend = s5; } else if (bereq.http.user-agent == "c6") { set bereq.backend = s6; } } sub vcl_backend_response { set beresp.uncacheable = true; } } -start client c1 { txreq rxresp } -start # This makes sure that c1->s1 is done first barrier b1 sync client c2 { txreq rxresp } -start client c3 { txreq rxresp } -start client c4 { txreq rxresp } -start client c5 { txreq rxresp } -start client c6 { txreq rxresp } -start # Wait until c2-c6 are on the waitinglist varnish v1 -vsl_catchup varnish v1 -expect busy_sleep == 5 # Open up the response headers from s1, and as a result HSH_Unbusy barrier b2 sync client c1 -wait client c2 -wait client c3 -wait client c4 -wait client c5 -wait client c6 -wait varnish-7.5.0/bin/varnishtest/tests/c00100.vtc000066400000000000000000000037651457605730600210320ustar00rootroot00000000000000varnishtest "if-range header" server s1 { rxreq txresp -hdr {etag: "foo"} -hdr "last-modified: Wed, 21 Oct 2015 07:28:00 GMT" -bodylen 16 rxreq txresp -bodylen 16 } -start varnish v1 -vcl+backend {} -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 16 # if-range, but no range txreq -hdr {if-range: "foo"} rxresp expect resp.status == 200 expect resp.bodylen == 16 # non-matching etag if-range txreq -hdr {if-range: "fooled"} -hdr "range: bytes=5-9" rxresp expect resp.status == 200 expect resp.bodylen == 16 # matching etag if-range txreq -hdr {if-range: "foo"} -hdr "range: bytes=5-9" rxresp expect resp.status == 206 expect resp.bodylen == 5 # non-matching date if-range (past) txreq -hdr "if-range: Wed, 21 Oct 2015 07:18:00 GMT" -hdr "range: bytes=5-9" rxresp expect resp.status == 200 expect resp.bodylen == 16 # non-matching date if-range (future) txreq -hdr "if-range: Wed, 21 Oct 2015 07:38:00 GMT" -hdr "range: bytes=5-9" rxresp expect resp.status == 200 expect resp.bodylen == 16 # matching etag if-range txreq -hdr "if-range: Wed, 21 Oct 2015 07:28:00 
GMT" -hdr "range: bytes=5-9" rxresp expect resp.status == 206 expect resp.bodylen == 5 }-run varnish v1 -cliok "ban obj.status != x" # no etag/LM header client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 16 # non-matching etag if-range txreq -hdr {if-range: "fooled"} -hdr "range: bytes=5-9" rxresp expect resp.status == 200 expect resp.bodylen == 16 # matching etag if-range txreq -hdr {if-range: "foo"} -hdr "range: bytes=5-9" rxresp expect resp.status == 200 expect resp.bodylen == 16 # non-matching date if-range (past) txreq -hdr "if-range: Wed, 21 Oct 2015 07:18:00 GMT" -hdr "range: bytes=5-9" rxresp expect resp.status == 200 expect resp.bodylen == 16 # non-matching date if-range (future) txreq -hdr "if-range: Wed, 21 Oct 2015 07:38:00 GMT" -hdr "range: bytes=5-9" rxresp expect resp.status == 200 expect resp.bodylen == 16 } -run varnish-7.5.0/bin/varnishtest/tests/c00101.vtc000066400000000000000000000016741457605730600210300ustar00rootroot00000000000000varnishtest "bo_* with effect on filters" server s1 -repeat 2 -keepalive { rxreq txresp -body { -This is a test: Hello world } } -start varnish v1 -vcl+backend { sub backend_response_filter { set beresp.http.filter0 = beresp.filters; set beresp.do_esi = true; set beresp.http.filter1 = beresp.filters; set beresp.do_gzip = true; set beresp.http.filter2 = beresp.filters; set beresp.do_esi = false; set beresp.do_gzip = false; set beresp.http.Content-Encoding = "gzip"; set beresp.do_gunzip = true; set beresp.http.filter3 = beresp.filters; set beresp.filters = ""; # does not affect filters set beresp.do_stream = true; } sub vcl_backend_response { call backend_response_filter; if (bereq.http.fiddle) { call backend_response_filter; } } sub vcl_recv { return (pass); } } -start client c1 { txreq rxresp expect resp.status == 200 txreq -hdr "fiddle: yes" rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/c00102.vtc000066400000000000000000000021261457605730600210220ustar00rootroot00000000000000varnishtest "TCP pool teardown with open connection" feature ipv6 # when c1's request has gotten to s1 barrier b1 cond 3 # when vcls have been swizzled barrier b2 cond 3 server s1 -listen "::1" { rxreq expect req.url == "/s1" barrier b1 sync barrier b2 sync txresp -hdr "Host: s1" expect_close } -start varnish v1 -cliok "param.set debug +vclrel" varnish v1 -vcl+backend {} -start client c1 { txreq -url /s1 barrier b1 sync barrier b2 sync rxresp expect resp.http.host == "s1" txreq -url /s2 rxresp expect resp.http.host == "s2" } -start barrier b1 sync varnish v1 -cliok "param.set debug +vclrel" server s2 -listen "::1" { rxreq expect req.url == "/s2" txresp -hdr "Host: s2" } -start varnish v1 -vcl { backend b2 { .host = "${s2_sock}"; } } varnish v1 -vsc LCK.conn_pool.* varnish v1 -cliok "vcl.discard vcl1" varnish v1 -cliok "vcl.list" varnish v1 -vsc LCK.conn_pool.* varnish v1 -expect LCK.conn_pool.destroy == 0 barrier b2 sync client c1 -wait varnish v1 -cliok "vcl.list" server s1 -wait varnish v1 -vsc LCK.conn_pool.* varnish v1 -expect LCK.conn_pool.destroy == 1 varnish-7.5.0/bin/varnishtest/tests/c00103.vtc000066400000000000000000000013101457605730600210150ustar00rootroot00000000000000varnishtest "REGEX expressions in VCL" varnish v1 -vcl { import debug; backend be none; sub vcl_recv { # NB: the REGEX expression below is needlessly complicated # on purpose to ensure we don't treat REGEX expressions # differently. 
if (req.url ~ (debug.just_return_regex( debug.just_return_regex("he" + "l" + "lo")))) { return (synth(200)); } return (synth(500)); } sub vcl_synth { set resp.reason = regsub(resp.reason, debug.just_return_regex("OK"), "world"); } } -start client c1 { txreq rxresp expect resp.status == 500 expect resp.reason == "Internal Server Error" txreq -url "/hello" rxresp expect resp.status == 200 expect resp.reason == world } -run varnish-7.5.0/bin/varnishtest/tests/c00104.vtc000066400000000000000000000007471457605730600210330ustar00rootroot00000000000000varnishtest "Test watchdog only active on queue 0" server s1 { rxreq txresp } -start varnish v1 -cliok "param.set thread_pools 1" varnish v1 -cliok "param.set thread_pool_min 5" varnish v1 -cliok "param.set thread_pool_max 5" varnish v1 -cliok "param.set thread_pool_watchdog 1" varnish v1 -cliok "param.set feature +http2" varnish v1 -vcl+backend { } -start client c1 { txpri delay 2 } -start client c2 { txpri delay 2 } -start client c3 { txpri delay 2 } -start delay 2 varnish-7.5.0/bin/varnishtest/tests/c00105.vtc000066400000000000000000000020201457605730600210160ustar00rootroot00000000000000varnishtest "Failed post-streaming revalidation" barrier b1 cond 3 barrier b2 sock 2 server s1 { rxreq txresp -nolen -hdr {Etag: "abc"} -hdr "Content-Length: 100" barrier b1 sync barrier b2 sync } -start server s2 { rxreq expect req.http.If-None-Match == {"abc"} txresp -status 304 -nolen -hdr {Etag: "abc"} -hdr "Content-Length: 100" } -start varnish v1 -cliok "param.set vsl_mask +ExpKill" varnish v1 -vcl+backend { import vtc; sub vcl_recv { if (req.http.backend == "s2") { set req.backend_hint = s2; } } sub vcl_backend_response { if (beresp.was_304) { vtc.barrier_sync("${b2_sock}"); } set beresp.ttl = 1ms; } } -start logexpect l1 -v v1 -g raw -q "ExpKill ~ EXP_Inspect" { expect 0 0 ExpKill } -start client c1 { txreq -hdr "backend: s1" rxresphdrs expect resp.status == 200 barrier b1 sync expect_close } -start logexpect l1 -wait varnish v1 -vsl_catchup barrier b1 sync client c2 { txreq -hdr "backend: s2" rxresphdrs expect resp.status == 200 expect_close } -run client c1 -wait varnish-7.5.0/bin/varnishtest/tests/c00106.vtc000066400000000000000000000012731457605730600210300ustar00rootroot00000000000000varnishtest "tunnel basic coverage" barrier b1 cond 2 -cyclic barrier b2 sock 2 -cyclic barrier b3 sock 2 -cyclic server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import vtc; sub vcl_recv { vtc.barrier_sync("${b2_sock}"); vtc.barrier_sync("${b3_sock}"); } } -start tunnel t1 { pause barrier b1 sync send 10 delay 0.1 send 15 resume barrier b2 sync pause barrier b3 sync recv 10 delay 0.1 recv 15 # automatic resumption here } -start client c1 -connect "${t1_sock}" { barrier b1 sync txreq rxresp expect resp.status == 200 } -run tunnel t1 -wait # Same scenario, but wait for c1 _after_ t1 tunnel t1 -start client c1 -start tunnel t1 -wait client c1 -wait varnish-7.5.0/bin/varnishtest/tests/c00107.vtc000066400000000000000000000011651457605730600210310ustar00rootroot00000000000000varnishtest req.hash_ignore_vary server s1 { rxreq expect req.http.cookie ~ ab=a txresp -hdr "vary: cookie" -body a rxreq expect req.http.cookie ~ ab=b txresp -hdr "vary: cookie" -body b } -start varnish v1 -vcl+backend { sub vcl_recv { set req.hash_ignore_vary = req.http.user-agent ~ "bot"; } sub vcl_req_cookie { return; } } -start client ca { txreq -hdr "cookie: ab=a" rxresp expect resp.body == a } -run client cb { txreq -hdr "cookie: ab=b" rxresp expect resp.body == b } -run client cbot { 
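	# Because the bot user-agent sets req.hash_ignore_vary, this client is
	# served a cached variant (the test expects "b") even though it sends
	# no cookie at all.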
txreq -hdr "user-agent: googlebot" rxresp expect resp.body == b } -run client ca -run client cb -run varnish-7.5.0/bin/varnishtest/tests/c00108.vtc000066400000000000000000000021131457605730600210240ustar00rootroot00000000000000varnishtest "Session pipelining exceeding available workspace" server s1 { loop 2 { rxreq expect req.bodylen == 32769 txresp } rxreq expect req.bodylen == 32768 txresp } -start varnish v1 -cliok "param.set workspace_client 24k" varnish v1 -cliok "param.set http_req_size 1k" varnish v1 -cliok "param.set http_resp_size 1k" varnish v1 -cliok "param.set vsl_buffer 1k" varnish v1 -vcl+backend { } -start client c1 { # Multi-line strings aren't escaped, the send argument # below contains actual carriage returns. The extra body # byte on top of the 32kB is a line feed. send { POST / HTTP/1.1 host: ${localhost} content-length: 32769 ${string,repeat,1024,abcdefghijklmnopqrstuvwxyz012345} POST / HTTP/1.1 host: ${localhost} content-length: 32769 ${string,repeat,1024,abcdefghijklmnopqrstuvwxyz012345} } loop 2 { rxresp expect resp.status == 200 } } -run varnish v1 -cliok "param.set feature +http2" client c2 { stream 1 { txreq -method POST -hdr content-length 32768 -nostrend txdata -datalen 16384 -nostrend txdata -datalen 16384 rxresp } -run } -run varnish-7.5.0/bin/varnishtest/tests/c00109.vtc000066400000000000000000000010041457605730600210230ustar00rootroot00000000000000varnishtest "cc_command and cc_warnings" varnish v1 -cliok {param.set debug +vcl_keep} varnish v1 -cliok {param.set cc_warnings hello} varnish v1 -cliok {param.set cc_command << EOF printf 'd="%%s" D="%%s" w="%%s"' '%d' '%D' '%w' >world printf '%%s' '%n' >v1_name EOF} varnish v1 -errvcl "VCL compilation failed" "backend be none;" shell -match {d=".+" D=".+hello.+" w="hello"} { exec cat v1/vcl_*/world } shell -expect "Value is: hello" { exec varnishadm -n "$(cat v1/vcl_*/v1_name)" param.show cc_warnings } varnish-7.5.0/bin/varnishtest/tests/c00110.vtc000066400000000000000000000006441457605730600210240ustar00rootroot00000000000000varnishtest "Transit buffering with early close" feature cmd {test $(uname) != SunOS} server s1 { non_fatal rxreq txresp -bodylen 2000000 } -start varnish v1 -cliok "param.set transit_buffer 1k" varnish v1 -vcl+backend { } -start client c1 -rcvbuf 128 { txreq -method POST rxresphdrs expect resp.status == 200 recv 100 } -run varnish v1 -expect VBE.vcl1.s1.conn == 0 varnish v1 -expect VBE.vcl1.s1.busy == 0 varnish-7.5.0/bin/varnishtest/tests/c00111.vtc000066400000000000000000000007641457605730600210300ustar00rootroot00000000000000varnishtest "LRU error without transit buffer" server s1 -repeat 2 { non_fatal rxreq txresp -bodylen 1850000 } -start varnish v1 -arg "-s Transient=malloc,1m" -vcl+backend { } -start client c1 { non_fatal txreq -method POST rxresp } -run varnish v1 -vsl_catchup varnish v1 -expect fetch_failed == 1 varnish v1 -cliok "param.set transit_buffer 4k" client c2 { txreq -method POST rxresp } -run varnish v1 -vsl_catchup varnish v1 -expect s_fetch == 2 varnish v1 -expect fetch_failed == 1 varnish-7.5.0/bin/varnishtest/tests/c00112.vtc000066400000000000000000000013051457605730600210210ustar00rootroot00000000000000varnishtest "Transit buffering deadlock test" server s1 { rxreq txresp -status 404 } -start server s2 { non_fatal rxreq txresp -bodylen 2000000 } -start varnish v1 -vcl+backend { sub vcl_recv { set req.backend_hint = s2; if (req.restarts == 1) { set req.backend_hint = s1; } } sub vcl_backend_response { set beresp.transit_buffer = 1k; } sub vcl_deliver { if 
(req.restarts < 1) { return (restart); } } } -start client c1 { txreq -method POST rxresp expect resp.bodylen == 0 expect resp.status == 404 } -run varnish v1 -expect VBE.vcl1.s1.conn == 0 varnish v1 -expect VBE.vcl1.s1.busy == 0 varnish v1 -expect VBE.vcl1.s2.conn == 0 varnish v1 -expect VBE.vcl1.s2.busy == 0 varnish-7.5.0/bin/varnishtest/tests/c00113.vtc000066400000000000000000000007471457605730600210330ustar00rootroot00000000000000varnishtest "probe expect close" server s0 { rxreq txresp expect_close } -dispatch varnish v1 -vcl+backend { probe default { .window = 3; .threshold = 1; .timeout = 0.5s; .interval = 0.1s; .expect_close = true; } } -start varnish v2 -vcl+backend { probe default { .window = 3; .threshold = 1; .timeout = 0.5s; .interval = 0.1s; .expect_close = false; } } -start delay 2.0 varnish v1 -cliexpect sick backend.list varnish v2 -cliexpect healthy backend.list varnish-7.5.0/bin/varnishtest/tests/c00114.vtc000066400000000000000000000033121457605730600210230ustar00rootroot00000000000000varnishtest "TLV authority over via backends used as SNI for haproxy backend/1" # This test is skipped unless haproxy is available. It fails unless # that binary implements the fc_pp_authority fetch, to return the TLV # Authority value sent in a PROXYv2 header. # In this version of the test, we set port 0 in the server config of # the "ssl-onloading" haproxy, and set the destination port in the # backend config for Varnish. The onloader sets its destination port # from the address forwarded via PROXY, which in turn is set from the # Varnish backend config. See c00101.vtc for another config method. feature ignore_unknown_macro feature cmd {haproxy --version 2>&1 | grep -q 'HA-*Proxy version'} # not sure which haproxy versions work, but 1.0 certainly do not. feature cmd "haproxy --version 2>&1 | grep 'HAProxy version [^1][.]'" server s1 { rxreq txresp -hdr "Foo: bar" } -start haproxy h1 -conf { listen feh1 mode http bind "fd@${feh1}" ssl verify none crt ${testdir}/common.pem server s1 ${s1_addr}:${s1_port} http-response set-header X-SNI %[ssl_fc_sni] } -start # Note the use of port 0 for server s0. haproxy h2 -conf { unix-bind mode 777 listen clear-to-ssl bind unix@"${tmpdir}/h2.sock" accept-proxy server s0 0.0.0.0:0 ssl verify none sni fc_pp_authority } -start varnish v1 -vcl { backend h2 { .path = "${tmpdir}/h2.sock"; } # The ssl-onloader uses the port number set here. backend h1 { .via = h2; .host = "${h1_feh1_addr}"; .port = "${h1_feh1_port}"; .authority = "authority.com"; } sub vcl_recv { set req.backend_hint = h1; } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.http.Foo == "bar" expect resp.http.X-SNI == "authority.com" } -run varnish-7.5.0/bin/varnishtest/tests/c00120.vtc000066400000000000000000000012701457605730600210210ustar00rootroot00000000000000varnishtest "bgfetch_no_thread" server s1 { rxreq txresp } -start varnish v1 -cliok "param.set thread_pools 1" varnish v1 -cliok "param.set thread_pool_min 5" varnish v1 -cliok "param.set thread_pool_max 5" varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 1ms; set beresp.grace = 1h; } } -start logexpect l1 -v v1 { expect * * FetchError "No thread available for bgfetch" } -start client c1 { txreq rxresp expect resp.status == 200 delay 0.1 # At this point the thread reserve is too low to # allow a low-priority task like a bgfetch. 
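	# The second request should still get a 200 served within grace while
	# the background fetch is refused, which is what logexpect l1 and the
	# MAIN.bgfetch_no_thread counter check below verify.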
txreq rxresp expect resp.status == 200 } -start logexpect l1 -wait varnish v1 -expect MAIN.bgfetch_no_thread == 1 varnish-7.5.0/bin/varnishtest/tests/c00121.vtc000066400000000000000000000026151457605730600210260ustar00rootroot00000000000000varnishtest "Abstract UDS backend: change path, drop poll" feature abstract_uds server s1 -listen "@${vtcid}.s1.sock" { non_fatal timeout 3 loop 40 { rxreq txresp accept } } -start server s2 -listen "@${vtcid}.s2.sock" { non_fatal timeout 3 loop 40 { rxreq txresp accept } } -start varnish v1 -arg "-a @${vtcid}.v1.sock" -vcl { probe default { .window = 8; .initial = 7; .threshold = 8; .interval = 0.1s; } backend s1 { .path = "@${vtcid}.s2.sock"; } } -start delay 1 varnish v1 -vcl { probe default { .window = 8; .initial = 7; .threshold = 8; .interval = 0.1s; } backend s1 { .path = "@${vtcid}.s1.sock"; } } -cliok "vcl.use vcl2" -cliok "vcl.discard vcl1" delay 1 varnish v1 -vcl { backend s1 { .path = "@${vtcid}.s1.sock"; } } -cliok "vcl.use vcl3" -cliok "vcl.discard vcl2" delay 1 varnish v1 -cliok "vcl.list" varnish v1 -cliok "backend.list -p" server s1 -break { rxreq expect req.url == /foo txresp -bodylen 4 } -start delay 1 client c1 -connect "@${vtcid}.v1.sock" { txreq -url /foo rxresp txreq -url /foo rxresp expect resp.status == 200 expect resp.bodylen == 4 } -run ######################################## # coverage varnish v1 -errvcl {Backend path: The empty abstract socket name is not supported} { backend s1 { .path = "@"; } } shell -err -expect "Error: Got no socket(s) for @" \ "varnishd -n ${tmpdir}/v0 -a @ -b None"varnish-7.5.0/bin/varnishtest/tests/c00122.vtc000066400000000000000000000042131457605730600210230ustar00rootroot00000000000000varnishtest "test 64 bit vxid rollover" server s1 { rxreq txresp accept rxreq delay 5 txresp } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (bereq.url == "/retry" && bereq.retries == 0) { return (retry); } } sub vcl_deliver { if (req.url == "/restart" && req.restarts == 0) { return (restart); } } } -start process p1 -winsz 100 200 {exec varnishlog -n ${v1_name} -g raw} -start process p1 -expect-text 0 0 CLI varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "debug.xid 999999999999998" logexpect l1 -v v1 -g request -T 2 { expect 0 1 Begin "req 999999999999998" expect * = ReqStart expect 0 = ReqMethod GET expect 0 = ReqURL / expect 0 = ReqProtocol HTTP/1.1 expect * = ReqHeader "Foo: bar" expect * = Link "bereq 2 fetch" expect * = VSL "timeout" expect * = End "synth" expect 0 2 Begin "bereq 1" expect * 2 Link "bereq 3 retry" expect * = End expect 0 3 Begin "bereq 2 retry" expect * = End } -start client c1 { txreq -url "/retry" -hdr "Foo: bar" rxresp expect resp.status == 200 } -run varnish v1 -vsl_catchup logexpect l1 -wait process p1 -expect-text 0 1 "999999999999998 SessClose" process p1 -screen_dump ################################################################################ server s1 { rxreq txresp } -start varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "debug.xid 999999999999997" logexpect l1 -v v1 -g request { expect 0 999999999999998 Begin "req 999999999999997" expect * = ReqStart expect 0 = ReqMethod GET expect 0 = ReqURL / expect 0 = ReqProtocol HTTP/1.1 expect * = ReqHeader "Foo: bar" expect * = Link "bereq 1 fetch" expect * = Link "req 2 restart" expect * = End expect 0 1 Begin "bereq 999999999999998" expect * = End expect 0 2 Begin "req 999999999999998 restart" expect * = End } -start client c1 { txreq -url "/restart" -hdr "Foo: bar" rxresp expect resp.status 
== 200 } -run varnish v1 -vsl_catchup logexpect l1 -wait process p1 -expect-text 0 1 "999999999999998 Link c req 2 restart" process p1 -expect-text 0 1 "999999999999997 SessClose" process p1 -screen_dump process p1 -stop varnish-7.5.0/bin/varnishtest/tests/c00123.vtc000066400000000000000000000031641457605730600210300ustar00rootroot00000000000000varnishtest "Low req.body streaming pressure on storage" server s0 { rxreq txresp -status 200 expect req.bodylen == 100000 } -dispatch varnish v1 -vcl+backend { import std; sub vcl_recv { set req.storage = storage.s0; if (req.http.cache) { std.cache_req_body(100000b); } } } -start # explicit setting to be robust against changes to the default value varnish v1 -cliok "param.set fetch_chunksize 16k" # chunked req.body streaming uses approximately one fetch_chunksize'd chunk client c1 { txreq -req PUT -hdr "Transfer-encoding: chunked" chunkedlen 100000 chunkedlen 0 rxresp expect resp.status == 200 } -run # in practice a little over fetch_chunksize is allocated varnish v1 -expect SM?.s0.c_bytes < 20000 # reset s0 counters varnish v1 -cliok stop varnish v1 -cliok start varnish v1 -expect SM?.s0.c_bytes == 0 # content-length req.body streaming also needs one chunk client c2 { txreq -req PUT -bodylen 100000 rxresp expect resp.status == 200 } -run varnish v1 -expect SM?.s0.c_bytes < 20000 # reset s0 counters varnish v1 -cliok stop varnish v1 -cliok start # chunked req.body caching allocates storage for the entire body client c3 { txreq -req PUT -hdr "cache: body" -hdr "Transfer-encoding: chunked" chunkedlen 100000 chunkedlen 0 rxresp expect resp.status == 200 } -run varnish v1 -expect SM?.s0.c_bytes > 100000 # reset s0 counters varnish v1 -cliok stop varnish v1 -cliok start # content-length req.body caching allocates storage for the entire body client c4 { txreq -req PUT -hdr "cache: body" -bodylen 100000 rxresp expect resp.status == 200 } -run varnish v1 -expect SM?.s0.c_bytes > 100000 varnish-7.5.0/bin/varnishtest/tests/c00124.vtc.disabled000066400000000000000000000046651457605730600226060ustar00rootroot00000000000000varnishtest "rushed task queued" # this does not work reliably because the acceptor task may # be queued during the child startup if not all threads are # created # thread reserve mitigation barrier barrier b0 sock 2 # client ordering barrier barrier b1 cond 3 # waitinglist barrier barrier b2 cond 2 # thread starvation barrier barrier b3 cond 3 server s1 { rxreq barrier b1 sync barrier b3 sync txresp } -start server s2 { rxreq barrier b1 sync barrier b2 sync txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 barrier b3 sync chunkedlen 0 } -start varnish v1 -cliok "param.set thread_pools 1" varnish v1 -cliok "param.set thread_pool_min 5" varnish v1 -cliok "param.set thread_pool_max 5" varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set debug +waitinglist" varnish v1 -vcl+backend { import vtc; sub vcl_recv { if (req.http.server) { # ensure both c1 and c2 got a thread vtc.barrier_sync("${b0_sock}"); } } sub vcl_backend_fetch { if (bereq.http.server == "s1") { set bereq.backend = s1; } else if (bereq.http.server == "s2") { set bereq.backend = s2; } } } -start # wait for all threads to be started varnish v1 -expect threads == 5 # 2 threads client c1 { txreq -hdr "Cookie: foo" -hdr "server: s1" rxresp expect resp.status == 200 } -start # 2 threads client c2 { txreq -hdr "server: s2" rxresp expect resp.status == 200 } -start # ensure c1 and c2 fetch tasks are started barrier b1 sync logexpect l1 -v v1 -g raw { expect 
* 1007 Debug "on waiting list" } -start logexpect l2 -v v1 -g raw { expect * 1007 Debug "off waiting list" } -start varnish v1 -expect sess_dropped == 0 varnish v1 -expect sess_queued == 0 # At this point, we are thread-starved and c3 below will steal the # acceptor thread that will queue itself. client c3 { txreq rxresp expect resp.status == 200 } -start logexpect l1 -wait varnish v1 -expect sess_dropped == 0 varnish v1 -expect sess_queued == 1 varnish v1 -expect busy_sleep == 1 # Wake up c2, This will in turn trigger a waitinglist rush and wake up c3. barrier b2 sync # The acceptor thread could have restarted on the newly available thread # if it weren't for the thread pool reserve. For the same reason, c3's # client task should be queued once woken up. logexpect l2 -wait # let everyone loose barrier b3 sync client c1 -wait client c2 -wait client c3 -wait varnish v1 -expect sess_dropped == 0 varnish v1 -expect sess_queued == 2 varnish-7.5.0/bin/varnishtest/tests/c00126.vtc000066400000000000000000000014731457605730600210340ustar00rootroot00000000000000varnishtest "Make sure EXP_Removed is logged correctly" server s1 -repeat 4 { rxreq txresp -bodylen 500000 } -start varnish v1 -arg "-ss1=default,1m" -vcl+backend { } -start varnish v1 -cliok "param.set vsl_mask +ExpKill" logexpect l1 -v v1 -g raw { expect * 0 ExpKill "EXP_Removed x=1002 t=.* h=1" expect * 0 ExpKill "EXP_Removed x=1005 t=.* h=0" } -start client c1 { loop 2 { txreq -url "/1" rxresp expect resp.status == 200 } txreq -url "/2" rxresp expect resp.status == 200 txreq -url "/3" rxresp expect resp.status == 200 txreq -url "/4" rxresp expect resp.status == 200 } -run # NOTE: Nuked objects are mailed twice varnish v1 -expect n_lru_nuked == 2 varnish v1 -expect MAIN.n_object == 2 varnish v1 -expect MAIN.exp_mailed == 6 varnish v1 -expect MAIN.exp_received == 6 logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/c00128.vtc000066400000000000000000000005451457605730600210350ustar00rootroot00000000000000varnishtest "Withdraw graced hit's busy objcore" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 1ms; } sub vcl_hit { if (obj.ttl < 0s) { return (fail); } } } -start client c1 { txreq rxresp expect resp.status == 200 delay 0.01 txreq rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/common.pem000066400000000000000000000076211457605730600214770ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIIEpAIBAAKCAQEAnb0BDF7FsqzslakNg7u/n/JQkq6nheuKwvyTqECfpc9y7uSB e/vrEFqBaDSLQagJxuZdL5geFeVtRbdAoB97N1/LZa6vecjjgGSP0Aag/gS/ocnM RIyvlVWWT9MrD46OG3qZY1ORU1ltrVL0NKttJP8xME7j3bTwIDElx/hNI0n7L+yS kAe2xb/7CbZRfoOhjTVAcGv4aSLVc/Hi8k6VkIzdOEtH6TcghXmuGcuqvLNH9Buo syngKTcQ8zg6J+e64aVvC+e7vi94uil9Qu+JHm0pkDzAZ2WluNsuXlrJToPirWyj 6/YdN6xgSI1hbZkBmUPAebgYuxBt6huvfyQd3wIDAQABAoIBABojc8UE/2W4WgwC 04Z82ig7Ezb7Ui9S9M+S4zUCYHItijIkE4DkIfO3y7Hk4x6iJdyb191HK9UdC5p9 32upS9XFPgM/izx3GZvxDhO+xXbSep7ovbyuQ3pPkHTx3TTavpm3GyvmcTKKoy4R jP4dWhzDXPdQW1ol3ZS4EDau4rlyClY6oi1mq9aBEX3MqVjB/nO7s2AbdgclAgP2 OZMhTzWYR1k5tYySHCXh3ggGMCikyvHU0+SsGyrstYzP1VYi/n3f0VgqW/5ZjG8x 6SHpe04unErPF3HuSun2ZMCFdBxaTFZ8FENb8evrSXe3nQOc9W21RQdRRrNNUbjl JYI4veECgYEA0ATYKMS1VCUYRZoQ49b5GTg7avUYqfW4bEo4fSfBue8NrnKR3Wu8 PPBiCTuIYq1vSF+60B7Vu+hW0A8OuQ2UuMxLpYcQ7lKfNad/+yAfoWWafIqCqNU9 at0QMdbW6A69d6jZt7OrXtleBsphCnN58jTz4ch4PIa2Oyq46NUXCvUCgYEAwh8t G6BOHOs3yRNI2s9Y9EEfwoil2uIKrZhqiL3AwdIpu5uNIMuPnbaEpXvRX6jv/qtL 321i8vZLc31aM7zfxQ6B4ReQFJfYC80FJsWvcLwT9hB9mTJpLS4sIu5tzQc87O6w 
RtjFMom+5ns5hfPB4Eccy0EtbQWVY4nCzUeO6QMCgYBSvqqRRPXwG7VU8lznlHqP upuABzChYrnScY+Y0TixUlL54l79Wb6N6vzEOWceAWkzu8iewrU4QspNhr/PgoR3 IeSxWlG0yy7Dc/ZnmTabx8O06I/iwrfkizzG5nOj6UEamRLJjPGNEB/jyZriQl7u pnugg1K4mMliLbNSAnlhBQKBgQCmYepbv260Qrex1KGhSg9Ia3k5V74weYYFfJnz UhChD+1NK+ourcsOtp3C6PlwMHBjq5aAjlU9QfUxq8NgjQaO8/xGXdfUjsFSfAtq TA4vZkUFpuTAJgEYBHc4CXx7OzTxLzRPxQRgaMgC7KNFOMR34vu/CsJQq3R7uFwL bsYC2QKBgQCtEmg1uDZVdByX9zyUMuRxz5Tq/vDcp+A5lJj2mha1+bUMaKX2+lxQ vPxY55Vaw/ukWkJirRrpGv6IytBn0dLAFSlKZworZGBaxsm8OGTFJ5Oe9+kZTjI9 hvjpClOA1otbmj2F2uZAbuIjxQGDNUkLoifN5yDYCC8JPujHuHmULw== -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIGeTCCBGGgAwIBAgIBAjANBgkqhkiG9w0BAQsFADB+MQswCQYDVQQGEwJGUjEW MBQGA1UECBMNSWxlLWRlLUZyYW5jZTEOMAwGA1UEBxMFUGFyaXMxEDAOBgNVBAoT B296b24uaW8xFTATBgNVBAMTDE96b24gVGVzdCBDQTEeMBwGCSqGSIb3DQEJARYP c3VwcG9ydEBvem9uLmlvMB4XDTE2MDExNzIzMDIzOFoXDTE4MDExNjIzMDIzOFow gb4xCzAJBgNVBAYTAkZSMRYwFAYDVQQIEw1JbGUtZGUtRnJhbmNlMRowGAYDVQQH ExFOZXVpbGx5LXN1ci1TZWluZTEYMBYGA1UEChMPVE9BRCBDb25zdWx0aW5nMRcw FQYDVQQLEw5lUGFyYXBoZXIgVGVhbTEWMBQGA1UEAxMNd3d3LnRlc3QxLmNvbTEw MC4GCSqGSIb3DQEJARYhYXJuYXVsdC5taWNoZWxAdG9hZC1jb25zdWx0aW5nLmZy MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnb0BDF7FsqzslakNg7u/ n/JQkq6nheuKwvyTqECfpc9y7uSBe/vrEFqBaDSLQagJxuZdL5geFeVtRbdAoB97 N1/LZa6vecjjgGSP0Aag/gS/ocnMRIyvlVWWT9MrD46OG3qZY1ORU1ltrVL0NKtt JP8xME7j3bTwIDElx/hNI0n7L+ySkAe2xb/7CbZRfoOhjTVAcGv4aSLVc/Hi8k6V kIzdOEtH6TcghXmuGcuqvLNH9BuosyngKTcQ8zg6J+e64aVvC+e7vi94uil9Qu+J Hm0pkDzAZ2WluNsuXlrJToPirWyj6/YdN6xgSI1hbZkBmUPAebgYuxBt6huvfyQd 3wIDAQABo4IBvzCCAbswCwYDVR0PBAQDAgOoMBMGA1UdJQQMMAoGCCsGAQUFBwMB MB0GA1UdDgQWBBTIihFNVNgOseQnsWEcAQxAbIKE4TCBsgYDVR0jBIGqMIGngBRv G9At9gzk2MW5Z7JVey1LtPIZ8KGBg6SBgDB+MQswCQYDVQQGEwJGUjEWMBQGA1UE CBMNSWxlLWRlLUZyYW5jZTEOMAwGA1UEBxMFUGFyaXMxEDAOBgNVBAoTB296b24u aW8xFTATBgNVBAMTDE96b24gVGVzdCBDQTEeMBwGCSqGSIb3DQEJARYPc3VwcG9y dEBvem9uLmlvggkA15FtIaGcrk8wDAYDVR0TAQH/BAIwADAaBgNVHREEEzARgg9j b21tb25OYW1lOmNvcHkwCQYDVR0SBAIwADBIBgNVHR8EQTA/MD2gO6A5hjdodHRw Oi8vb3BlbnNzbGNhLnRvYWQtY29uc3VsdGluZy5jb20vb3BlbnZwbi9MYXRlc3Qu Y3JsMBEGCWCGSAGG+EIBAQQEAwIGQDAxBglghkgBhvhCAQ0EJBYiVE9BRC1Db25z dWx0aW5nIHNlcnZlciBjZXJ0aWZpY2F0ZTANBgkqhkiG9w0BAQsFAAOCAgEAewDa 9BukGNJMex8gsXmmdaczTr8yh9Uvw4NJcZS38I+26o//2g+d6i7wxcQg8hIm62Hj 0TblGU3+RsJo4uzcWxxA5YUYlVszbHNBRpQengEE5pjwHvoXVMNES6Bt8xP04+Vj 0qVnA8gUaDMk9lN5anK7tF/mbHOIJwHJZYCa2t3y95dIOVEXFwOIzzbSbaprjkLN w0BgR5paJz7NZWNqo4sZHUUz94uH2bPEd01SqHO0dJwEVxadgxuPnD05I9gqGpGX Zf3Rn7EQylvUtX9mpPaulQPXc3emefewLUSSAdnZrVikZK2J/B4lSi9FpUwl4iQH pZoE0QLQHtB1SBKacnOAddGSTLSdFvpzjErjjWSpMukF0vutmrP86GG3xtshWVhI u+yLfDJVm/pXfaeDtWMXpxIT/U1i0avpk5MZtFMRC0MTaxEWBTnnJm+/yiaAXQYg E1ZIP0mkZkiUojIawTR7JTjHGhIraP9UVPNceVy0DLfETHEou3vhwBn7PFOz7piJ wjp3A47DStJD4fapaX6B1fqM+n34CMD9ZAiJFgQEIQfObAWC9hyr4m+pqkp1Qfuw vsAP/ZoS1CBirJfm3i+Gshh+VeH+TAmO/NBBYCfzBdgkNz4tJCkOc7CUT/NQTR/L N2OskR/Fkge149RJi7hHvE3gk/mtGtNmHJPuQ+s= -----END CERTIFICATE----- varnish-7.5.0/bin/varnishtest/tests/d00007.vtc000066400000000000000000000053771457605730600210420ustar00rootroot00000000000000varnishtest "Test dynamic backends" # the use case for via-proxy is to have a(n ha)proxy make a(n ssl) # connection on our behalf. 
For the purpose of testing, we use another # varnish in place - but we are behaving realistically in that we do # not use any prior information for the actual backend connection - # just the information from the proxy protocol varnish v2 -proto PROXY -vcl { import debug; import std; import proxy; backend dummy { .host = "${bad_backend}"; } sub vcl_init { new s1 = debug.dyn("0.0.0.0", "0"); } sub vcl_recv { s1.refresh(server.ip, std.port(server.ip)); set req.backend_hint = s1.backend(); set req.http.Authority = proxy.authority(); return (pass); } sub vcl_deliver { set resp.http.Authority = req.http.Authority; } } -start # NB: ${v2_addr} in s1's work should really be ${s1_addr}, but it is not # defined until after the (implicit) listen(2) call, so we cheat # and use v2's address instead. server s1 { rxreq expect req.url == "/" expect req.http.Probe == "p1" expect req.http.Authority == txresp close accept rxreq expect req.url == "/1" expect req.http.Probe == expect req.http.Authority == txresp close accept rxreq expect req.url == "/" expect req.http.Probe == "p2" expect req.http.Authority == "${v2_addr}" txresp close accept rxreq expect req.url == "/2" expect req.http.Probe == expect req.http.Authority == "${v2_addr}" txresp } -start # # we vtc.sleep to make sure that the health check is done and server # s1 has accepted again. We would rather want to use barriers, but # there is a (yet not understood) bug in varnishtest which prevents # the bX_sock marcros from being available in the second varnish # instance varnish v1 -vcl { import debug; import vtc; backend dummy { .host = "${bad_backend}"; } probe p1 { .threshold = 8; .initial = 8; .interval = 1m; .request = "GET / HTTP/1.1" "Host: ${s1_addr}" "Probe: p1" "Connection: close"; } probe p2 { .threshold = 8; .initial = 8; .interval = 1m; .request = "GET / HTTP/1.1" "Host: ${s1_addr}" "Probe: p2" "Connection: close"; } backend v2 { .host = "${v2_addr}"; .port = "${v2_port}"; } sub vcl_init { new s1 = debug.dyn("0.0.0.0", "0"); } sub vcl_recv { if (req.url == "/1") { s1.refresh("${s1_addr}", "${s1_port}", p1); vtc.sleep(1s); } else if (req.url == "/2") { s1.refresh("${s1_addr}", "${s1_port}", p2, via=v2); vtc.sleep(1s); } set req.backend_hint = s1.backend(); } } -start varnish v1 -expect MAIN.n_backend == 3 client c1 { txreq -url /1 rxresp expect resp.status == 200 expect resp.http.Authority == txreq -url /2 rxresp expect resp.status == 200 expect resp.http.Authority == "${s1_addr}" } -run varnish-7.5.0/bin/varnishtest/tests/d00008.vtc000066400000000000000000000015241457605730600210310ustar00rootroot00000000000000varnishtest "Test dynamic backends hot swap" server s1 { rxreq expect req.url == "/foo" txresp } -start server s2 { rxreq expect req.url == "/bar" txresp } -start varnish v1 -vcl { import debug; backend dummy { .host = "${bad_backend}"; } sub vcl_init { new s1 = debug.dyn("${s1_addr}", "${s1_port}"); } sub vcl_recv { if (req.method == "SWAP") { s1.refresh(req.http.X-Addr, req.http.X-Port); return (synth(200)); } set req.backend_hint = s1.backend(); } } -start varnish v1 -expect MAIN.n_backend == 2 client c1 { txreq -url "/foo" rxresp expect resp.status == 200 txreq -req "SWAP" -hdr "X-Addr: ${s2_addr}" -hdr "X-Port: ${s2_port}" rxresp expect resp.status == 200 txreq -url "/bar" rxresp expect resp.status == 200 } -run delay 1 varnish v1 -cli backend.list # varnish v1 -expect MAIN.n_backend == 2 varnish-7.5.0/bin/varnishtest/tests/d00009.vtc000066400000000000000000000017531457605730600210360ustar00rootroot00000000000000varnishtest "Test 
dynamic backends hot swap while being used" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq expect req.url == "/foo" barrier b1 sync barrier b2 sync txresp } -start server s2 { rxreq expect req.url == "/bar" barrier b2 sync txresp } -start varnish v1 -vcl { import debug; backend dummy { .host = "${bad_backend}"; } sub vcl_init { new s1 = debug.dyn("${s1_addr}", "${s1_port}"); } sub vcl_recv { if (req.method == "SWAP") { s1.refresh(req.http.X-Addr, req.http.X-Port); return (synth(200)); } set req.backend_hint = s1.backend(); } } -start varnish v1 -expect MAIN.n_backend == 2 client c1 { txreq -url "/foo" rxresp expect resp.status == 200 } -start client c2 { barrier b1 sync txreq -req "SWAP" -hdr "X-Addr: ${s2_addr}" -hdr "X-Port: ${s2_port}" rxresp expect resp.status == 200 txreq -url "/bar" rxresp expect resp.status == 200 } -run client c1 -wait varnish v1 -cli backend.list # varnish v1 -expect MAIN.n_backend == 2 varnish-7.5.0/bin/varnishtest/tests/d00010.vtc000066400000000000000000000017701457605730600210250ustar00rootroot00000000000000varnishtest "Test dynamic backends hot swap during a pipe" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq expect req.url == "/foo" barrier b1 sync barrier b2 sync txresp } -start server s2 { rxreq expect req.url == "/bar" barrier b2 sync txresp } -start varnish v1 -vcl { import debug; backend dummy { .host = "${bad_backend}"; } sub vcl_init { new s1 = debug.dyn("${s1_addr}", "${s1_port}"); } sub vcl_recv { if (req.method == "SWAP") { s1.refresh(req.http.X-Addr, req.http.X-Port); return (synth(200)); } set req.backend_hint = s1.backend(); return (pipe); } } -start varnish v1 -expect MAIN.n_backend == 2 client c1 { txreq -url "/foo" rxresp expect resp.status == 200 } -start client c2 { barrier b1 sync txreq -req "SWAP" -hdr "X-Addr: ${s2_addr}" -hdr "X-Port: ${s2_port}" rxresp expect resp.status == 200 txreq -url "/bar" rxresp expect resp.status == 200 } -run client c1 -wait varnish v1 -cli backend.list #varnish v1 -expect MAIN.n_backend == 2 varnish-7.5.0/bin/varnishtest/tests/d00011.vtc000066400000000000000000000020711457605730600210210ustar00rootroot00000000000000varnishtest "Test a dynamic backend hot swap after it was picked by a bereq" barrier b1 cond 2 server s1 { } -start server s2 { rxreq txresp } -start varnish v1 -vcl { import std; import debug; import vtc; backend dummy { .host = "${bad_backend}"; } sub vcl_init { new s1 = debug.dyn("${s1_addr}", "${s1_port}"); } sub vcl_recv { if (req.method == "SWAP") { s1.refresh(req.http.X-Addr, req.http.X-Port); return (synth(200)); } } sub vcl_backend_fetch { set bereq.backend = s1.backend(); # hot swap should happen while we sleep vtc.sleep(3s); if (std.healthy(bereq.backend)) { return(abandon); } else { set bereq.backend = s1.backend(); } } } -start varnish v1 -expect MAIN.n_backend == 2 client c1 { txreq barrier b1 sync rxresp expect resp.status == 200 } client c2 { barrier b1 sync delay 0.1 txreq -req "SWAP" -hdr "X-Addr: ${s2_addr}" -hdr "X-Port: ${s2_port}" rxresp expect resp.status == 200 } client c1 -start client c2 -run client c1 -wait varnish v1 -cli backend.list #varnish v1 -expect MAIN.n_backend == 2 varnish-7.5.0/bin/varnishtest/tests/d00012.vtc000066400000000000000000000027051457605730600210260ustar00rootroot00000000000000varnishtest "Test a dynamic backend discard during a request" # vcl.discard testing inspired by v00006.vtc from commit e1f7207 barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq expect req.url == "/foo" barrier b1 sync barrier b2 sync txresp } -start varnish v1 
-arg "-p thread_pools=1" -vcl { import debug; backend dummy { .host = "${bad_backend}"; } sub vcl_init { new s1 = debug.dyn("${s1_addr}", "${s1_port}"); } sub vcl_recv { set req.backend_hint = s1.backend(); } } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 } -start varnish v1 -expect MAIN.n_backend == 2 # expected: vcl1.dummy, vcl1.s1 server s2 { rxreq expect req.url == "/bar" txresp } -start barrier b1 sync varnish v1 -vcl { import debug; backend dummy { .host = "${bad_backend}"; } sub vcl_init { new s2 = debug.dyn("${s2_addr}", "${s2_port}"); } sub vcl_recv { set req.backend_hint = s2.backend(); } } varnish v1 -cli "vcl.discard vcl1" barrier b2 sync client c1 -wait delay 2 varnish v1 -expect MAIN.n_backend == 4 # expected: vcl1.dummy, vcl1.s1, vcl2.dummy, vcl2.s2 varnish v1 -expect n_vcl_avail == 1 varnish v1 -expect n_vcl_discard == 1 client c1 { txreq -url "/bar" rxresp expect resp.status == 200 } -run varnish v1 -cli "vcl.list" #varnish v1 -expect MAIN.n_backend == 2 # expected: vcl2.dummy, vcl2.s2 varnish v1 -expect n_vcl_avail == 1 # XXX This test fails on FreeBSD # XXX varnish v1 -expect n_vcl_discard == 0 varnish-7.5.0/bin/varnishtest/tests/d00013.vtc000066400000000000000000000020461457605730600210250ustar00rootroot00000000000000varnishtest "Test a dynamic backend hot swap after it was hinted to a req" barrier b1 cond 2 server s1 { } -start server s2 { rxreq txresp } -start varnish v1 -vcl { import std; import debug; import vtc; backend dummy { .host = "${bad_backend}"; } sub vcl_init { new s1 = debug.dyn("${s1_addr}", "${s1_port}"); } sub vcl_recv { if (req.method == "SWAP") { s1.refresh(req.http.X-Addr, req.http.X-Port); return (synth(200)); } set req.backend_hint = s1.backend(); # hot swap should happen while we sleep vtc.sleep(2s); if (std.healthy(req.backend_hint)) { return(synth(800)); } else { set req.backend_hint = s1.backend(); } } } -start varnish v1 -expect MAIN.n_backend == 2 client c1 { txreq barrier b1 sync rxresp expect resp.status == 200 } client c2 { barrier b1 sync delay 0.1 txreq -req "SWAP" -hdr "X-Addr: ${s2_addr}" -hdr "X-Port: ${s2_port}" rxresp expect resp.status == 200 } client c1 -start client c2 -run client c1 -wait varnish v1 -cli backend.list #varnish v1 -expect MAIN.n_backend == 2 varnish-7.5.0/bin/varnishtest/tests/d00014.vtc000066400000000000000000000004451457605730600210270ustar00rootroot00000000000000varnishtest "Backend as a boolean expression" server s1 -start varnish v1 -vcl+backend { import vtc; sub vcl_recv { set req.backend_hint = vtc.no_backend(); if (!req.backend_hint) { return (synth(404)); } } } -start client c1 { txreq rxresp expect resp.status == 404 } -run varnish-7.5.0/bin/varnishtest/tests/d00032.vtc000066400000000000000000000020011457605730600210150ustar00rootroot00000000000000varnishtest "Test dynamic backends listening at Unix domain sockets" server s1 -listen "${tmpdir}/s1.sock" { rxreq txresp } -start varnish v1 -vcl { import debug; backend dummy None; sub vcl_init { new s1 = debug.dyn_uds("${s1_sock}"); } sub vcl_recv { set req.backend_hint = s1.backend(); } } -start varnish v1 -expect MAIN.n_backend == 1 client c1 { txreq rxresp expect resp.status == 200 } -run varnish v1 -errvcl {path must be an absolute path} { import debug; backend dummy None; sub vcl_init { new s1 = debug.dyn_uds(""); } } varnish v1 -errvcl {path must be an absolute path} { import debug; backend dummy None; sub vcl_init { new s1 = debug.dyn_uds("s1.sock"); } } shell { rm -f ${tmpdir}/foo } varnish v1 -errvcl {Cannot stat path} { 
import debug; backend dummy None; sub vcl_init { new s1 = debug.dyn_uds("${tmpdir}/foo"); } } varnish v1 -errvcl {is not a socket} { import debug; backend dummy None; sub vcl_init { new s1 = debug.dyn_uds("${tmpdir}"); } } varnish-7.5.0/bin/varnishtest/tests/d00033.vtc000066400000000000000000000013631457605730600210300ustar00rootroot00000000000000varnishtest "Test dynamic UDS backends hot swap" server s1 -listen "${tmpdir}/s1.sock" { rxreq expect req.url == "/foo" txresp } -start server s2 -listen "${tmpdir}/s2.sock" { rxreq expect req.url == "/bar" txresp } -start varnish v1 -vcl { import debug; backend dummy None; sub vcl_init { new s1 = debug.dyn_uds("${s1_sock}"); } sub vcl_recv { if (req.method == "SWAP") { s1.refresh(req.http.X-Path); return (synth(200)); } set req.backend_hint = s1.backend(); } } -start varnish v1 -expect MAIN.n_backend == 1 client c1 { txreq -url "/foo" rxresp expect resp.status == 200 txreq -req "SWAP" -hdr "X-Path: ${s2_sock}" rxresp expect resp.status == 200 txreq -url "/bar" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/d00034.vtc000066400000000000000000000016231457605730600210300ustar00rootroot00000000000000varnishtest "Test dynamic UDS backends hot swap while being used" barrier b1 cond 2 barrier b2 cond 2 server s1 -listen "${tmpdir}/s1.sock" { rxreq expect req.url == "/foo" barrier b1 sync barrier b2 sync txresp } -start server s2 -listen "${tmpdir}/s2.sock" { rxreq expect req.url == "/bar" barrier b2 sync txresp } -start varnish v1 -vcl { import debug; backend dummy None; sub vcl_init { new s1 = debug.dyn_uds("${s1_sock}"); } sub vcl_recv { if (req.method == "SWAP") { s1.refresh(req.http.X-Path); return (synth(200)); } set req.backend_hint = s1.backend(); } } -start varnish v1 -expect MAIN.n_backend == 1 client c1 { txreq -url "/foo" rxresp expect resp.status == 200 } -start client c2 { barrier b1 sync txreq -req "SWAP" -hdr "X-Path: ${s2_sock}" rxresp expect resp.status == 200 txreq -url "/bar" rxresp expect resp.status == 200 } -run client c1 -wait varnish-7.5.0/bin/varnishtest/tests/d00035.vtc000066400000000000000000000016411457605730600210310ustar00rootroot00000000000000varnishtest "Test dynamic UDS backends hot swap during a pipe" barrier b1 cond 2 barrier b2 cond 2 server s1 -listen "${tmpdir}/s1.sock" { rxreq expect req.url == "/foo" barrier b1 sync barrier b2 sync txresp } -start server s2 -listen "${tmpdir}/s2.sock" { rxreq expect req.url == "/bar" barrier b2 sync txresp } -start varnish v1 -vcl { import debug; backend dummy None; sub vcl_init { new s1 = debug.dyn_uds("${s1_sock}"); } sub vcl_recv { if (req.method == "SWAP") { s1.refresh(req.http.X-Path); return (synth(200)); } set req.backend_hint = s1.backend(); return (pipe); } } -start varnish v1 -expect MAIN.n_backend == 1 client c1 { txreq -url "/foo" rxresp expect resp.status == 200 } -start client c2 { barrier b1 sync txreq -req "SWAP" -hdr "X-Path: ${s2_sock}" rxresp expect resp.status == 200 txreq -url "/bar" rxresp expect resp.status == 200 } -run client c1 -wait varnish-7.5.0/bin/varnishtest/tests/d00036.vtc000066400000000000000000000017201457605730600210300ustar00rootroot00000000000000varnishtest "Test dynamic UDS backend hot swap after it was picked by a bereq" barrier b1 sock 2 server s1 -listen "${tmpdir}/s1.sock" { } -start server s2 -listen "${tmpdir}/s2.sock" { rxreq txresp } -start varnish v1 -vcl { import std; import debug; import vtc; backend dummy None; sub vcl_init { new s1 = debug.dyn_uds("${s1_sock}"); } sub vcl_recv { if (req.method == 
"SWAP") { s1.refresh(req.http.X-Path); return (synth(200)); } } sub vcl_backend_fetch { set bereq.backend = s1.backend(); # hot swap has happened vtc.barrier_sync("${b1_sock}"); if (std.healthy(bereq.backend)) { return(abandon); } else { set bereq.backend = s1.backend(); } } } -start varnish v1 -expect MAIN.n_backend == 1 client c1 { txreq rxresp expect resp.status == 200 } client c2 { delay 0.1 txreq -req "SWAP" -hdr "X-Path: ${s2_sock}" rxresp expect resp.status == 200 barrier b1 sync } client c1 -start client c2 -run client c1 -wait varnish-7.5.0/bin/varnishtest/tests/d00037.vtc000066400000000000000000000022201457605730600210250ustar00rootroot00000000000000varnishtest "Test a dynamic UDS backend discard during a request" barrier b1 cond 2 barrier b2 cond 2 server s1 -listen "${tmpdir}/s1.sock" { rxreq expect req.url == "/foo" barrier b1 sync barrier b2 sync txresp } -start varnish v1 -arg "-p thread_pools=1" -vcl { import debug; backend dummy None; sub vcl_init { new s1 = debug.dyn_uds("${s1_sock}"); } sub vcl_recv { set req.backend_hint = s1.backend(); } } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 } -start varnish v1 -expect MAIN.n_backend == 1 server s2 -listen "${tmpdir}/s2.sock" { rxreq expect req.url == "/bar" txresp } -start barrier b1 sync varnish v1 -vcl { import debug; backend dummy None; sub vcl_init { new s2 = debug.dyn_uds("${s2_sock}"); } sub vcl_recv { set req.backend_hint = s2.backend(); } } varnish v1 -cli "vcl.discard vcl1" barrier b2 sync client c1 -wait delay 2 varnish v1 -expect MAIN.n_backend == 2 varnish v1 -expect n_vcl_avail == 1 varnish v1 -expect n_vcl_discard == 1 client c1 { txreq -url "/bar" rxresp expect resp.status == 200 } -run varnish v1 -cli "vcl.list" varnish v1 -expect n_vcl_avail == 1 varnish-7.5.0/bin/varnishtest/tests/d00038.vtc000066400000000000000000000017551457605730600210420ustar00rootroot00000000000000varnishtest "Test a dynamic UDS backend hot swap after it was hinted to a req" barrier b1 cond 2 server s1 -listen "${tmpdir}/s1.sock" { } -start server s2 -listen "${tmpdir}/s2.sock" { rxreq txresp } -start varnish v1 -vcl { import std; import debug; import vtc; backend dummy None; sub vcl_init { new s1 = debug.dyn_uds("${s1_sock}"); } sub vcl_recv { if (req.method == "SWAP") { s1.refresh(req.http.X-Path); return (synth(200)); } set req.backend_hint = s1.backend(); # hot swap should happen while we sleep vtc.sleep(2s); if (std.healthy(req.backend_hint)) { return(synth(800)); } else { set req.backend_hint = s1.backend(); } } } -start varnish v1 -expect MAIN.n_backend == 1 client c1 { txreq barrier b1 sync rxresp expect resp.status == 200 } client c2 { barrier b1 sync delay 0.1 txreq -req "SWAP" -hdr "X-Path: ${s2_sock}" rxresp expect resp.status == 200 } client c1 -start client c2 -run client c1 -wait varnish v1 -cli backend.list varnish-7.5.0/bin/varnishtest/tests/d00040.vtc000066400000000000000000000010011457605730600210130ustar00rootroot00000000000000varnishtest "Test failing in director callbacks" varnish v1 -vcl { import debug; import std; backend dummy { .host = "${bad_ip}"; } sub vcl_init { new d = debug.director(); } sub vcl_recv { if (req.url == "/healthy") { if (std.healthy(d.fail())) { return (synth(200)); } else { return (synth(404)); } } set req.backend_hint = d.fail(); } } -start client c1 { txreq -url "/" rxresp expect resp.status == 503 txreq -url "/healthy" rxresp expect resp.status == 503 } -run 
varnish-7.5.0/bin/varnishtest/tests/e00000.vtc000066400000000000000000000011321457605730600210150ustar00rootroot00000000000000varnishtest "ESI test with no ESI content" server s1 { rxreq txresp -body { -This is a test: Hello world } } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } sub vcl_deliver { set resp.http.can_esi = obj.can_esi; } } -start logexpect l1 -v v1 -g raw { expect * * ESI_xmlerror esi_disable_xml_check } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 33 expect resp.http.can_esi == "false" } client c1 -run logexpect l1 -wait varnish v1 -expect esi_errors == 0 varnish v1 -expect MAIN.s_resp_bodybytes == 33 varnish-7.5.0/bin/varnishtest/tests/e00001.vtc000066400000000000000000000015121457605730600210200ustar00rootroot00000000000000varnishtest "ESI:remove" server s1 { rxreq txresp -body { This is a test: Unseen University This is a test: Hello world } } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } sub vcl_deliver { set resp.http.can_esi = obj.can_esi; } } -start logexpect l1 -v v1 -g raw { expect * * ESI_xmlerror {^ERR after 3 ESI 1.0 element nested in } expect 0 = ESI_xmlerror {^ERR after 3 ESI 1.0 Nested After include } rxreq expect req.url == "/body" txresp -body { Included file } } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (bereq.url != "/body") { set beresp.do_esi = true; } } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 67 } client c1 -run varnish v1 -expect esi_errors == 0 varnish v1 -expect MAIN.s_resp_bodybytes == 67 varnish-7.5.0/bin/varnishtest/tests/e00005.vtc000066400000000000000000000011271457605730600210260ustar00rootroot00000000000000varnishtest "ESI relative include" server s1 { rxreq expect req.url == "/foo/bar" txresp -body { Before include After include } rxreq expect req.url == "/foo/body" txresp -body { Included file } } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (bereq.url != "/foo/body") { set beresp.do_esi = true; } } } -start client c1 { txreq -url /foo/bar rxresp expect resp.status == 200 expect resp.bodylen == 67 } client c1 -run varnish v1 -expect esi_errors == 0 varnish v1 -expect MAIN.s_resp_bodybytes == 67 varnish-7.5.0/bin/varnishtest/tests/e00006.vtc000066400000000000000000000035301457605730600210270ustar00rootroot00000000000000varnishtest "ESI include with http://" server s1 { rxreq expect req.url == "/foo/bar" txresp -body { Before include After include } } -start server s2 { rxreq expect req.url == "/body" txresp -body {
Included file
} } -start varnish v1 -vcl+backend { sub vcl_backend_fetch { if (bereq.http.host == "bozz") { set bereq.backend = s2; } else { set bereq.backend = s1; } } sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { txreq -url /foo/bar -hdr "Host: froboz" rxresp expect resp.status == 200 expect resp.bodylen == 78 } client c1 -run varnish v1 -expect esi_errors == 0 varnish v1 -expect MAIN.s_resp_bodybytes == 78 # Now try with invalid URLs server s1 { rxreq expect req.url == /http txresp -body {1234} rxreq expect req.url == /https txresp -body {123456} } -start varnish v1 -vcl+backend { sub vcl_recv { set req.backend_hint = s2; set req.backend_hint = s1; } sub vcl_backend_response { set beresp.do_esi = true; } } varnish v1 -cliok "param.set feature +esi_ignore_https" logexpect l1 -v v1 -g raw { expect * * ESI_xmlerror "ERR after 0 ESI 1.0 invalid src= URL" expect * * ESI_xmlerror "WARN after 0 ESI 1.0 https:// treated as http://" expect * * ESI_xmlerror "ERR after 0 ESI 1.0 invalid src= URL" } -start client c1 { txreq -url /http rxresp expect resp.status == 200 expect resp.bodylen == 4 } -run varnish v1 -expect esi_errors == 1 varnish v1 -expect MAIN.s_resp_bodybytes == 82 client c1 { txreq -url /https rxresp expect resp.status == 200 expect resp.bodylen == 6 } -run logexpect l1 -wait varnish v1 -expect esi_errors == 2 varnish v1 -expect MAIN.s_resp_bodybytes == 88 varnish-7.5.0/bin/varnishtest/tests/e00007.vtc000066400000000000000000000027301457605730600210310ustar00rootroot00000000000000varnishtest "ESI spanning storage bits" # The layout of the body in the response is very carefully # tuned to give the desired code coverage. # The objects segments should have the following *precise* content # # "Fetch 32 byte segments:" # "%0a%09%09filler%0a%09%09This is before" # " the test%0a%09%09%0a%09%09filler%0a%09%09This is a test: Unseen Un" # "iversity%0a%09%09Department of cruel a" # "nd unjust geography%0a%09%09%0a%09%09This is a test: Hello worl" # "d%0a%09" server s1 { rxreq expect req.url == "/foo/bar" send "HTTP/1.0 200 OK\n" send "Connection: close\n" send "\n" send { filler This is before the test filler This is a test: Unseen University Department of cruel and unjust geography This is a test: Hello world } } -start varnish v1 -arg "-sdefault,2m" -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start varnish v1 -cliok "debug.fragfetch 32" client c1 { txreq -url /foo/bar -hdr "Host: froboz" rxresp expect resp.status == 200 expect resp.bodylen == 120 expect resp.body == "\n\t\tfiller\n\t\tThis is before the test\n\t\t\n\t\tfiller\n\t\tThis is a test: Hello world\n\t" } client c1 -run varnish v1 -expect esi_errors == 0 varnish v1 -expect MAIN.s_resp_bodybytes == 120 varnish-7.5.0/bin/varnishtest/tests/e00008.vtc000066400000000000000000000063441457605730600210370ustar00rootroot00000000000000varnishtest "ESI parsing errors" server s1 { rxreq txresp -body { 1 Before include 2 3 After include 4 5 6 7 foo 8 9 10 11 12 13 14 15 16 17 bar 18 19 20 21 22 23 25 26 27 28 29 30 31 32 33 34 35 38 } rxreq expect req.url == "/body" txresp -body {
Included file
Included file 2 After include } rxreq expect req.url == "/foo/body" txresp -body { Included file } } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (bereq.url != "/foo/body") { set beresp.do_esi = true; } } } -start client c1 { txreq -url /foo/bar -proto HTTP/1.1 rxresp expect resp.status == 200 expect resp.bodylen == 67 expect resp.http.Transfer-Encoding == "chunked" } -run varnish v1 -expect MAIN.s_resp_bodybytes == 67 client c1 { txreq -url /foo/bar -proto HTTP/1.0 rxresp expect resp.status == 200 expect resp.bodylen == 67 expect resp.http.Transfer-Encoding == expect resp.http.Content-Length == expect resp.http.Connection == "close" } -run varnish v1 -expect esi_errors == 0 varnish v1 -expect MAIN.s_resp_bodybytes == 134 varnish-7.5.0/bin/varnishtest/tests/e00013.vtc000066400000000000000000000007051457605730600210260ustar00rootroot00000000000000varnishtest "All white-space object, in multiple storage segments" server s1 { rxreq expect req.url == "/foo" txresp -nolen -hdr "Transfer-Encoding: chunked" chunked { } chunkedlen 0 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start varnish v1 -cliok "debug.fragfetch 4" client c1 { txreq -url /foo rxresp } -run varnish v1 -expect esi_errors == 0 varnish-7.5.0/bin/varnishtest/tests/e00014.vtc000066400000000000000000000007071457605730600210310ustar00rootroot00000000000000varnishtest "Check } } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start varnish v1 -cliok "debug.fragfetch 4" client c1 { txreq -url /foo rxresp expect resp.bodylen == 49 } -run varnish v1 -expect esi_errors == 0 varnish-7.5.0/bin/varnishtest/tests/e00015.vtc000066400000000000000000000050601457605730600210270ustar00rootroot00000000000000varnishtest "ESI requests turned off and other filters with esi" server s1 { rxreq expect req.url == / txresp -body { Before include After include } rxreq txresp -body { Before include After include } rxreq expect req.url == "/body" txresp -body {
Included file
} } -start varnish v1 -syntax 4.0 -vcl+backend { sub vcl_recv { if (req.url == "/") { set req.esi = false; } } sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { txreq rxresp expect resp.bodylen == 73 expect resp.status == 200 txreq -url "/esi" rxresp expect resp.bodylen == 76 expect resp.status == 200 } -run varnish v1 -vsl_catchup varnish v1 -expect esi_errors == 0 varnish v1 -expect MAIN.s_resp_bodybytes == 149 server s1 { rxreq expect req.url == /top2 txresp -gzipbody { Before include After include } rxreq expect req.url == /recurse txresp -gzipbody { Before include After include } } -start varnish v1 -syntax 4.1 -vcl+backend { sub vcl_deliver { set resp.http.was = resp.do_esi; set resp.http.filter0 = resp.filters; if (req.url == "/top2") { set resp.do_esi = false; } set resp.http.filters = resp.filters; if (req.http.fiddle) { set resp.filters = resp.filters; set resp.do_esi = false; } } sub vcl_backend_response { set beresp.do_esi = true; } } client c1 { txreq -url /top2 rxresp expect resp.bodylen == 73 expect resp.status == 200 expect resp.http.was == true expect resp.http.filter0 == "esi gunzip" expect resp.http.filters == "gunzip" txreq -url "/esi" rxresp expect resp.bodylen == 76 expect resp.status == 200 expect resp.http.was == true expect resp.http.filter0 == "esi" expect resp.http.filters == "esi" txreq -url "/esi" -hdr "Range: bytes=1-2" rxresp expect resp.bodylen == 2 expect resp.status == 206 expect resp.http.was == true expect resp.http.filters == "esi range" txreq -url "/recurse" rxresp expect resp.bodylen == 120 expect resp.status == 200 expect resp.http.was == true expect resp.http.filters == "esi gunzip" txreq -url "/recurse" -hdr "Range: bytes=1-2" rxresp expect resp.bodylen == 2 expect resp.status == 206 expect resp.http.was == true expect resp.http.filters == "esi gunzip range" txreq -url /top2 -hdr "fiddle: do_esi" rxresp expect resp.status == 503 expect resp.bodylen == 251 } -run varnish v1 -expect esi_errors == 0 varnish v1 -expect MAIN.s_resp_bodybytes == 673 varnish-7.5.0/bin/varnishtest/tests/e00016.vtc000066400000000000000000000017301457605730600210300ustar00rootroot00000000000000varnishtest "ESI request can't be turned off midstream" server s1 { rxreq txresp -body { Before include After include } rxreq expect req.url == "/body" txresp -body { } rxreq expect req.url == "/body2" txresp -body { included } rxreq expect req.url == "/body3" txresp -body { included body3 } } -start # give enough stack to 32bit systems varnish v1 -cliok "param.set thread_pool_stack 80k" varnish v1 -cliok "param.set feature +esi_disable_xml_check" varnish v1 -syntax 4.0 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } sub vcl_deliver { set req.esi = true; if (req.url == "/body") { set req.esi = false; } } } -start client c1 { txreq rxresp expect resp.bodylen == 105 expect resp.status == 200 } client c1 -run varnish v1 -expect esi_errors == 0 varnish v1 -expect MAIN.s_resp_bodybytes == 105 varnish-7.5.0/bin/varnishtest/tests/e00017.vtc000066400000000000000000000131011457605730600210240ustar00rootroot00000000000000varnishtest "Aggressive use of ESI include" server s1 { rxreq txresp -body { Before include After include } rxreq txresp -body {
Included file 00
} rxreq txresp -body {
Included file 01
} rxreq txresp -body {
Included file 02
} rxreq txresp -body {
Included file 03
} rxreq txresp -body {
Included file 04
} rxreq txresp -body {
Included file 05
} rxreq txresp -body {
Included file 06
} rxreq txresp -body {
Included file 07
} rxreq txresp -body {
Included file 08
} rxreq txresp -body {
Included file 09
} rxreq txresp -body {
Included file 10
} rxreq txresp -body {
Included file 11
} rxreq txresp -body {
Included file 12
} rxreq txresp -body {
Included file 13
} rxreq txresp -body {
Included file 14
} rxreq txresp -body {
Included file 15
} rxreq txresp -body {
Included file 16
} rxreq txresp -body {
Included file 17
} rxreq txresp -body {
Included file 18
} rxreq txresp -body {
Included file 19
} } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { txreq rxresp expect resp.status == 200 # before 29, includes 20*(29+3), after 15 expect resp.bodylen == 684 } client c1 -run varnish v1 -expect esi_errors == 0 varnish v1 -expect MAIN.s_resp_bodybytes == 684 varnish-7.5.0/bin/varnishtest/tests/e00018.vtc000066400000000000000000000014041457605730600210300ustar00rootroot00000000000000varnishtest "Test XML 1.0 entity references" server s1 { rxreq expect req.url == "/" txresp -body { } rxreq expect req.url == "/&" txresp -body "1" rxreq expect req.url == "/<" txresp -body "22" rxreq expect req.url == "/>" txresp -body "333" rxreq expect req.url == {/'} txresp -body "4444" rxreq expect req.url == {/"} txresp -body "55555" } -start varnish v1 -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_response { if (bereq.url == "/") { set beresp.do_esi = true; } } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 32 } -run varnish-7.5.0/bin/varnishtest/tests/e00019.vtc000066400000000000000000000037631457605730600210430ustar00rootroot00000000000000varnishtest "Push corners in new ESI parser" server s1 { rxreq txresp -nolen -hdr "Transfer-encoding: chunked" chunked {<1>
<1>} chunked {<2><2>} chunked {<3><3>} chunked {<4><4>} chunked {

} chunkedlen 256 chunked {

} chunked {

} chunkedlen 65536 chunked {

} chunked {} chunkedlen 256 chunked {} chunkedlen 65536 chunked {} chunked {
This is a test: Hello world } } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; set beresp.do_gunzip = true; } } -start varnish v1 -cliok "param.set http_gzip_support true" logexpect l1 -v v1 -g vxid { expect * * Fetch_Body expect 0 = ESI_xmlerror {^ERR after 3 ESI 1.0 element nested in $} expect 0 = ESI_xmlerror {^ERR after 3 ESI 1.0 Nested This is a test: Hello world } } -start varnish v1 -syntax 4.0 -vcl+backend { sub vcl_recv { set req.esi = true; } sub vcl_backend_response { set beresp.do_esi = true; set beresp.do_gzip = true; } } -start varnish v1 -cliok "param.set debug +esi_chop" varnish v1 -cliok "param.set http_gzip_support true" client c1 { txreq -hdr "Accept-Encoding: gzip" rxresp expect resp.http.content-encoding == gzip gunzip expect resp.status == 200 expect resp.bodylen == 40 } client c1 -run varnish v1 -expect esi_errors == 2 varnish-7.5.0/bin/varnishtest/tests/e00022.vtc000066400000000000000000000022621457605730600210260ustar00rootroot00000000000000varnishtest "ESI ability to stitch gzip files together" server s1 { rxreq expect req.http.accept-encoding == gzip txresp -gzipbody { This is a test: Unseen University This is a test: Hello world } } -start varnish v1 -syntax 4.0 -arg "-p thread_pool_stack=262144" -vcl+backend { sub vcl_recv { set req.esi = true; } sub vcl_backend_response { set beresp.do_esi = true; } } -start varnish v1 -cliok "param.set debug +esi_chop" varnish v1 -cliok "param.set http_gzip_support true" varnish v1 -cliok "param.set gzip_memlevel 1" logexpect l1 -v v1 -g vxid { expect * * Fetch_Body expect 0 = ESI_xmlerror {^ERR after 24 ESI 1.0 element nested in $} expect 0 = ESI_xmlerror {^ERR after 24 ESI 1.0 Nested After include } rxreq expect req.method == XGET expect req.url == "/foo/body1" expect req.bodylen == 0 expect req.http.Content-Length == "" expect req.http.Transfer-Encoding == "" txresp -body { Included file } rxreq expect req.method == POST expect req.url == "/foo/bar" expect req.bodylen == 66 # std.cache_req_body() turns chunked into C-L expect req.http.Content-Length == "66" expect req.http.Transfer-Encoding == "" txresp -body { Before include After include } rxreq expect req.method == XGET expect req.url == "/foo/body2" expect req.bodylen == 0 expect req.http.Content-Length == "" expect req.http.Transfer-Encoding == "" txresp -body { Included file } } -start varnish v1 -vcl+backend { import std; sub vcl_recv { std.cache_req_body(1KB); if (req.url ~ "^/foo/body") { set req.method = "XGET"; } return (pass); } sub vcl_backend_response { if (bereq.url !~ "^/foo/body") { set beresp.do_esi = true; } } } -start client c1 { txreq -url /foo/bar -method POST -body {foobar} rxresp expect resp.status == 200 expect resp.bodylen == 67 txreq -url /foo/bar -method POST -nolen \ -hdr "Transfer-encoding: chunked" chunkedlen 66 chunkedlen 0 rxresp expect resp.status == 200 expect resp.bodylen == 67 } -run varnish v1 -expect esi_errors == 0 varnish-7.5.0/bin/varnishtest/tests/e00037.vtc000066400000000000000000000005511457605730600210330ustar00rootroot00000000000000varnishtest "Double fail ESI sub request" server s1 { rxreq txresp -body {} } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } sub vcl_recv { if (req.esi_level > 0) { return (fail); } } sub vcl_synth { return (fail); } } -start client c1 { non_fatal txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/f00001.vtc000066400000000000000000000013331457605730600210220ustar00rootroot00000000000000varnishtest "Check that we 
handle bogusly large chunks correctly" # Check that the bug has been fixed server s1 { rxreq txresp } -start varnish v1 -vcl+backend { } -start client c1 { send "POST / HTTP/1.1\r\n" send "Host: foo\r\n" send "Transfer-Encoding: chunked\r\n\r\n" send "FFFFFFFFFFFFFFED\r\n" send "0\r\n\r\n" rxresp expect resp.status == 503 } -run # Check that the published workaround does not cause harm varnish v1 -vcl+backend { sub vcl_recv { if (req.http.transfer-encoding ~ "(?i)chunked") { return (fail); } } } client c1 { send "POST / HTTP/1.1\r\n" send "Host: foo\r\n" send "Transfer-Encoding: chunked\r\n\r\n" send "FFFFFFFFFFFFFFED\r\n" rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/f00004.vtc000066400000000000000000000033661457605730600210350ustar00rootroot00000000000000varnishtest "VSV00004" server s1 { rxreq expect req.url == /test1 txresp rxreq expect req.url == /test2 send "bogus\r\n\r\n" expect_close accept rxreq expect req.url == /test3 txresp } -start varnish v1 -arg "-p debug=+syncvsl" -arg "-p max_restarts=0" -vcl+backend { import vtc; sub vcl_recv { if (req.url == "/prime") { # Avoid allocations at start of workspace so # that test string is not overwritten vtc.workspace_alloc(client, 1024); set req.http.temp = "super"; set req.http.secret = req.http.temp + "secret"; return (synth(200, req.http.secret)); } } sub vcl_deliver { if (req.url == "/test1") { return (restart); } } sub vcl_backend_error { return (abandon); } } -start # Case 1 client c1 { txreq -url /prime rxresp expect resp.status == 200 expect resp.reason == supersecret txreq -url /test1 rxresp expect resp.status == 503 expect resp.reason != supersecret expect resp.reason == "Service Unavailable" } -run # Case 2 client c2 { txreq -url /prime rxresp expect resp.status == 200 expect resp.reason == supersecret txreq -url /test2 rxresp expect resp.status == 503 expect resp.reason != supersecret expect resp.reason == "Service Unavailable" } -run # Case 3 varnish v1 -cliok "vcl.label label1 vcl1" varnish v1 -cliok "param.reset max_restarts" varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/prime") { return (vcl(label1)); } if (req.restarts > 0) { return (vcl(label1)); } } sub vcl_deliver { return (restart); } } client c3 { txreq -url /prime rxresp expect resp.status == 200 expect resp.reason == supersecret txreq -url /test3 rxresp expect resp.status == 503 expect resp.reason != supersecret expect resp.reason == "Service Unavailable" } -run varnish-7.5.0/bin/varnishtest/tests/f00005.vtc000066400000000000000000000027751457605730600210410ustar00rootroot00000000000000varnishtest "proxy ws panic" server s1 { rxreq txresp } -start varnish v1 -proto "PROXY" -vcl+backend {}-start # Too large proxy payload using TLV client c1 { sendhex { 0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a 21 21 00 93 aa bb cc dd ee ff 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff 11 22 33 44 55 66 77 88 99 aa bb 88 da 0d 73 02 00 3c 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 20 00 2d 01 01 00 00 00 21 00 07 54 4c 53 76 31 2e 32 23 00 1b 45 43 44 48 45 2d 52 53 41 2d 41 45 53 32 35 36 2d 47 43 4d 2d 53 48 41 33 38 34 } expect_close } -run # Badly formatted TLV proxy payload client c1 { sendhex { 0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a 21 11 00 13 00 ff 20 ff 10 ff 03 21 20 30 00 20 20 00 00 19 00 02 29 20 00 00 00 41 20 9e 15 15 d6 00 00 08 00 00 00 00 00 07 7a 20 b1 3f 43 20 } expect_close } -run # Reduced size 
proxy payload to verify Varnish is still running client c1 { sendhex { 0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a 21 21 00 8b aa bb cc dd ee ff 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff 11 22 33 44 55 66 77 88 99 aa bb 88 da 0d 73 02 00 34 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 61 20 00 2d 01 01 00 00 00 21 00 07 54 4c 53 76 31 2e 32 23 00 1b 45 43 44 48 45 2d 52 53 41 2d 41 45 53 32 35 36 2d 47 43 4d 2d 53 48 41 33 38 34 } txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/f00007.vtc000066400000000000000000000026401457605730600210320ustar00rootroot00000000000000varnishtest "H/2 content length smuggling attack" server s1 { rxreqhdrs expect_close } -start server s2 { rxreqhdrs expect_close } -start server s3 { rxreq expect_close } -start server s4 { rxreq expect req.body == "A" txresp } -start varnish v1 -vcl+backend { import vtc; sub vcl_backend_fetch { if (bereq.url == "/1") { set bereq.backend = s1; } else if (bereq.url == "/2") { set bereq.backend = s2; } else if (bereq.url == "/3") { set bereq.backend = s3; } else { set bereq.backend = s4; } } } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set debug +syncvsl" client c1 { stream 1 { txreq -req POST -url /1 -hdr "content-length" "1" -nostrend txdata -data "AGET /FAIL HTTP/1.1\r\n\r\n" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c2 { stream 1 { txreq -req POST -url /2 -hdr "content-length" "1" -nostrend txdata -data "AGET /FAIL HTTP/1.1\r\n\r\n" -nostrend txdata rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c3 { stream 1 { txreq -req POST -url /3 -hdr "content-length" "1" -nostrend txdata -data "A" -nostrend delay 0.5 txdata -data "GET /FAIL HTTP/1.1\r\n\r\n" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c4 { stream 1 { txreq -req POST -url /4 -hdr "content-length" "1" -nostrend txdata -data "A" -nostrend txdata rxresp expect resp.status == 200 } -run } -run varnish-7.5.0/bin/varnishtest/tests/f00008.vtc000066400000000000000000000016471457605730600210410ustar00rootroot00000000000000varnishtest "VRB_Ignore and connection close" server s1 { rxreq txresp -body HIT } -start varnish v1 -arg "-p timeout_idle=1" -vcl+backend { sub vcl_recv { if (req.url == "/synth") { return (synth(200, "SYNTH")); } } } -start # Prime an object client c1 { txreq -url /hit rxresp expect resp.status == 200 expect resp.body == HIT } -run # Test synth client c2 { txreq -req POST -url /synth -hdr "Content-Length: 2" # Send 1 byte send a # Wait timeout_idle delay 2 # Send 1 byte send b rxresp expect resp.status == 200 expect resp.reason == SYNTH expect resp.http.connection == close timeout 0.5 expect_close } -run # Test cache hit client c3 { txreq -req GET -url /hit -hdr "Content-Length: 2" # Send 1 byte send a # Wait timeout_idle delay 2 # Send 1 byte send b rxresp expect resp.status == 200 expect resp.body == HIT expect resp.http.connection == close timeout 0.5 expect_close } -run varnish-7.5.0/bin/varnishtest/tests/f00010.vtc000066400000000000000000000005261457605730600210250ustar00rootroot00000000000000varnishtest "Do not allow critical headers to be marked hop-by-hop" varnish v1 -vcl { backend default none; } -start client c1 { txreq -hdr "Connection: Content-Length" -body "asdf" rxresp expect resp.status == 400 expect_close } -run client c2 { txreq -hdr "Connection: Host" rxresp expect resp.status == 400 expect_close } -run 
varnish-7.5.0/bin/varnishtest/tests/f00011.vtc000066400000000000000000000010341457605730600210210ustar00rootroot00000000000000varnishtest "H2: Malformed pseudo-headers" server s1 { rxreq txresp } -start varnish v1 -arg "-p feature=+http2" -vcl+backend { } -start client c1 { stream 1 { txreq -url "" rxrst } -run } -run client c1 { stream 1 { txreq -url " \t" rxrst } -run } -run client c1 { stream 1 { txreq -scheme "" rxrst } -run } -run client c1 { stream 1 { txreq -scheme " \t" rxrst } -run } -run client c1 { stream 1 { txreq -req "" rxrst } -run } -run client c1 { stream 1 { txreq -req " \t" rxrst } -run } -run varnish-7.5.0/bin/varnishtest/tests/g00000.vtc000066400000000000000000000016301457605730600210220ustar00rootroot00000000000000varnishtest "test req.can_gzip VCL variable" server s1 { rxreq txresp -bodylen 4 } -start varnish v1 -vcl+backend { sub vcl_deliver { set resp.http.gzip = req.can_gzip; } } -start client c1 { txreq -hdr "Accept-Encoding: gzip, deflate" rxresp expect resp.http.gzip == "true" txreq -hdr "Accept-Encoding: gzip;q=0.7, *;q=0" rxresp expect resp.http.gzip == "true" txreq -hdr "Accept-Encoding: deflate;q=0.7, *;q=0" rxresp expect resp.http.gzip == "false" txreq -hdr "Accept-Encoding: x-gzip;q=0.4, gzip" rxresp expect resp.http.gzip == "true" txreq -hdr "Accept-Encoding: gzip;q=0, x-gzip" rxresp expect resp.http.gzip == "true" txreq -hdr "Accept-Encoding: gzip;q=0" rxresp expect resp.http.gzip == "false" txreq -hdr "Accept-Encoding: gzip;q=-1" rxresp expect resp.http.gzip == "false" txreq -hdr "Accept-Encoding: gzip;q=0.0000001" rxresp expect resp.http.gzip == "true" } -run varnish-7.5.0/bin/varnishtest/tests/g00001.vtc000066400000000000000000000024151457605730600210250ustar00rootroot00000000000000varnishtest "test basic gunzip for client" server s1 { rxreq expect req.http.accept-encoding == "gzip" txresp -nolen -hdr "Content-Length: 49" -hdr "Content-Encoding: gzip" # "date > _ ; gzip -9 _" - contains filename + timestamp sendhex "1f 8b 08 08 ef 00 22 59 02 03 5f 00 0b 2e cd 53" sendhex "f0 4d ac 54 30 32 04 22 2b 03 13 2b 13 73 85 d0" sendhex "10 67 05 23 03 43 73 2e 00 cf 9b db c0 1d 00 00" sendhex "00" } -start varnish v1 -cliok "param.set http_gzip_support true" -vcl+backend { } -start client c1 { txreq rxresp expect resp.bodylen == "29" expect resp.http.content-encoding == txreq -hdr "Accept-encoding: gzip;q=0.1" rxresp expect resp.http.content-encoding == "gzip" gunzip expect resp.bodylen == "29" } -run varnish v1 -vsl_catchup client c1 { txreq -proto HTTP/1.0 rxresp expect resp.bodylen == "29" expect resp.http.content-encoding == } -run varnish v1 -vsl_catchup client c1 { txreq -req HEAD rxresp -no_obj expect resp.http.content-encoding == txreq -req HEAD -hdr "Accept-encoding: gzip;q=0.1" rxresp -no_obj expect resp.http.content-length == "49" expect resp.http.content-encoding == "gzip" } -run varnish v1 -expect n_gzip == 0 varnish v1 -expect n_gunzip == 3 varnish v1 -expect n_test_gunzip == 1 varnish-7.5.0/bin/varnishtest/tests/g00002.vtc000066400000000000000000000026221457605730600210260ustar00rootroot00000000000000varnishtest "test basic gunzip for client" server s1 { rxreq expect req.http.accept-encoding == "gzip" expect req.url == "/foo" txresp -proto HTTP/1.0 -nolen -gziplen 4100 accept rxreq expect req.url == "/bar" txresp -body {

} } -start varnish v1 \ -arg "-sdefault,2m" \ -arg "-p feature=+esi_disable_xml_check" \ -cliok "param.set http_gzip_support true" \ -cliok "param.set gzip_memlevel 1" \ -vcl+backend { import debug; sub vcl_backend_response { set beresp.do_esi = true; } sub vcl_deliver { set resp.filters += " debug.pedantic"; } } -start varnish v1 -cliok "param.set fetch_chunksize 4k" varnish v1 -cliok "param.set vsl_mask +VfpAcct" client c1 { txreq -url /foo -hdr "Accept-Encoding: gzip" rxresp gunzip expect resp.http.content-encoding == "gzip" expect resp.bodylen == 4100 } -run # If this fails, the multiple storage allocations did not happen varnish v1 -expect SM?.s0.c_req > 2 client c1 { # See varnish can gunzip it. txreq -url /foo -hdr "Accept-Encoding: null" rxresp expect resp.http.content-encoding == expect resp.bodylen == 4100 delay .1 # See varnish can gunzip it, inside ESI txreq -url /bar -hdr "Accept-Encoding: null" rxresp expect resp.http.content-encoding == expect resp.bodylen == 4109 } -run varnish v1 -expect n_gzip == 1 varnish v1 -expect n_gunzip == 3 varnish v1 -expect n_test_gunzip == 0 varnish-7.5.0/bin/varnishtest/tests/g00003.vtc000066400000000000000000000045231457605730600210310ustar00rootroot00000000000000varnishtest "test gunzip on fetch" server s1 { rxreq expect req.url == "/foo" expect req.http.accept-encoding == "gzip" txresp -gziplen 41 rxreq expect req.url == "/bar" expect req.http.accept-encoding == "gzip" txresp -bodylen 42 rxreq expect req.url == "/foobar" expect req.http.accept-encoding == "gzip" txresp -bodylen 43 rxreq expect req.url == "/nogzip" expect req.http.accept-encoding == "gzip" txresp -hdr "Vary: Accept-Encoding" \ -gzipbody "keep gzip real" rxreq expect req.url == "/nogzip" expect req.http.accept-encoding == txresp -hdr "Vary: Accept-Encoding" \ -body "keep plain real" rxreq expect req.url == "/filters" expect req.http.accept-encoding == txresp -bodylen 78 } -start varnish v1 -cliok "param.set http_gzip_support true" -vcl+backend { sub vcl_backend_response { set beresp.do_gunzip = true; if (bereq.url == "/foobar") { set beresp.do_gzip = true; } if (bereq.url == "/filters") { set beresp.filters = "gzip gunzip gzip gunzip gzip"; } } sub vcl_deliver { set resp.http.filters = resp.filters; } } -start client c1 { txreq -url /foo -hdr "Accept-Encoding: gzip" rxresp expect resp.http.content-encoding == expect resp.bodylen == 41 } -run varnish v1 -vsl_catchup client c1 { txreq -url /bar -hdr "Accept-Encoding: gzip" rxresp expect resp.http.content-encoding == expect resp.bodylen == 42 } -run varnish v1 -vsl_catchup client c1 { txreq -url /foobar -hdr "Accept-Encoding: gzip" rxresp expect resp.http.content-encoding == "gzip" gunzip expect resp.bodylen == 43 } -run varnish v1 -vsl_catchup client c1 { txreq -url /foobar rxresp expect resp.http.content-encoding == expect resp.bodylen == 43 } -run varnish v1 -vsl_catchup varnish v1 -expect n_gzip == 1 varnish v1 -expect n_gunzip == 2 varnish v1 -expect n_test_gunzip == 0 varnish v1 -cliok "param.set http_gzip_support false" client c1 { txreq -url /nogzip -hdr "Accept-Encoding: gzip" rxresp expect resp.http.content-encoding == "gzip" gunzip expect resp.body == "keep gzip real" txreq -url /nogzip rxresp expect resp.http.content-encoding == expect resp.body == "keep plain real" } -run varnish v1 -vsl_catchup client c1 { txreq -url /filters rxresp expect resp.http.content-encoding == "gzip" gunzip expect resp.bodylen == 78 } -run 
varnish-7.5.0/bin/varnishtest/tests/g00004.vtc000066400000000000000000000013371457605730600210320ustar00rootroot00000000000000varnishtest "truncated gzip from backend" server s1 -repeat 2 { rxreq txresp -nolen \ -hdr "Content-Encoding: gzip" \ -hdr "Transfer-Encoding: Chunked" send "18\r\n" # A truncate gzip file sendhex "1f8b" sendhex "08" sendhex "00" sendhex "f5 64 ae 4e 02 03 f3 cd cf 53 f0 4f" sendhex "2e 51 30 36 54 30 b0 b4" send "\r\n" chunkedlen 0 } -start varnish v1 \ -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; if (bereq.url == "/gunzip") { set beresp.do_gunzip = true; } } } varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -start client c1 { txreq rxresp expect resp.status == 503 } -run client c1 { txreq -url /gunzip rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/g00005.vtc000066400000000000000000000033711457605730600210330ustar00rootroot00000000000000varnishtest "test gunzip for client + Range" server s1 -repeat 3 { rxreq expect req.http.accept-encoding == "gzip" txresp -nolen -hdr "Transfer-encoding: chunked" \ -hdr "Content-encoding: gzip" delay 1 # Compressed "FOOBARBARF" sendhex { 31 43 0d 0a 1f 8b 08 00 75 96 cc 5a 02 03 73 f3 f7 77 72 0c 02 22 37 00 06 8e 8c 83 0a 00 00 00 0d 0a 30 0d 0a 0d 0a } } -start varnish v1 -cliok "param.set http_gzip_support true" -vcl+backend { sub vcl_backend_response { if (bereq.url ~ "^/nostream") { set beresp.do_stream = false; } } } -start client c1 { # cache miss txreq -hdr "Range: bytes=3-5" rxresp expect resp.status == 206 expect resp.bodylen == "3" expect resp.http.content-encoding == expect resp.body == "BAR" } -run varnish v1 -vsl_catchup client c2 { txreq -hdr "Accept-encoding: gzip;q=0.1" rxresp expect resp.http.content-encoding == "gzip" gunzip expect resp.bodylen == "10" } -run varnish v1 -vsl_catchup # This delay attempts to ensure that the busyobj # is completed before we attempt the range request delay 2 client c3 { txreq -hdr "Range: bytes=3-5" rxresp expect resp.status == 206 expect resp.http.content-encoding == "" expect resp.bodylen == "3" expect resp.body == "BAR" } -run varnish v1 -vsl_catchup client c4 { txreq -url "/nostreamcachemiss" -hdr "Range: bytes=3-5" rxresp expect resp.status == 206 expect resp.http.content-encoding == "" expect resp.bodylen == "3" expect resp.body == "BAR" } -run varnish v1 -vsl_catchup client c5 { # simple cache miss, no stream, no gunzip txreq -url "/nostream2" -hdr "Range: bytes=3-5" -hdr "Accept-Encoding: gzip" rxresp expect resp.status == 206 expect resp.http.content-encoding == "gzip" expect resp.bodylen == "3" } -run varnish-7.5.0/bin/varnishtest/tests/g00006.vtc000066400000000000000000000034641457605730600210370ustar00rootroot00000000000000varnishtest "Backend IMS'ing g[un]zip'ed objects" server s1 { rxreq expect req.url == /1 txresp -hdr "Last-Modified: Wed, 11 Sep 2013 13:36:55 GMT" \ -hdr {ETag: "foozle"} \ -bodylen 20 rxreq expect req.url == /1 expect req.http.if-modified-since == "Wed, 11 Sep 2013 13:36:55 GMT" txresp -status 304 \ -hdr {ETag: "fizle"} \ -nolen rxreq expect req.url == /2 txresp -hdr "Last-Modified: Wed, 11 Sep 2013 13:36:55 GMT" \ -hdr {ETag: "foobar"} \ -gzipbody "012345678901234567" rxreq expect req.url == /2 expect req.http.if-modified-since == "Wed, 11 Sep 2013 13:36:55 GMT" txresp -status 304 -hdr "Content-Encoding: gzip,rot13" \ -hdr {ETag: "snafu"} \ -nolen } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.http.foobar = beresp.http.content-encoding; if (bereq.url == "/1") { 
set beresp.do_gzip = true; } else { set beresp.do_gunzip = true; } set beresp.ttl = 1s; set beresp.grace = 0s; set beresp.keep = 60s; } } -start client c1 { txreq -url /1 -hdr "Accept-Encoding: gzip" rxresp expect resp.http.content-encoding == "gzip" expect resp.http.foobar == "" expect resp.http.etag == {W/"foozle"} gunzip expect resp.bodylen == 20 delay 1 txreq -url /1 -hdr "Accept-Encoding: gzip" rxresp expect resp.http.content-encoding == "gzip" expect resp.http.foobar == "gzip" expect resp.http.etag == {W/"fizle"} gunzip expect resp.bodylen == 20 delay .2 txreq -url /2 rxresp expect resp.http.content-encoding == "" expect resp.http.foobar == "gzip" expect resp.bodylen == 18 expect resp.http.etag == {W/"foobar"} delay 1 txreq -url /2 rxresp expect resp.http.content-encoding == "" # Here we see the C-E of the IMS OBJ expect resp.http.foobar == "" expect resp.http.etag == {W/"snafu"} expect resp.bodylen == 18 } -run varnish-7.5.0/bin/varnishtest/tests/g00007.vtc000066400000000000000000000053051457605730600210340ustar00rootroot00000000000000varnishtest "Test Vary with gzip/gunzip" server s1 { rxreq expect req.url == "/foo" txresp -body "foo" rxreq expect req.url == "/bar" txresp -gzipbody "bar" rxreq expect req.url == "/baz" txresp -body "baz" rxreq expect req.url == "/qux" txresp -hdr "Vary: qux" -gzipbody "qux" rxreq expect req.url == "/fubar" txresp -hdr "Vary: fubar, Accept-Encoding" -gzipbody "fubar" rxreq expect req.url == "/foobar" txresp -gzipbody "foobar" rxreq expect req.url == "/foobaz" txresp -hdr "Vary: foobaz" -gzipbody "foobaz" rxreq expect req.url == "/fooqux" txresp -hdr "Vary: fooqux, Accept-Encoding" -gzipbody "fooqux" } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (bereq.url ~ "/baz") { set beresp.do_gzip = true; } elif (bereq.url ~ "/foo(bar|baz|qux)") { set beresp.do_gunzip = true; } } } -start client c1 { # /foo txreq -url /foo rxresp expect resp.body == "foo" expect resp.http.Vary == txreq -url /foo -hdr "Accept-Encoding: gzip" rxresp expect resp.body == "foo" expect resp.http.Vary == # /bar txreq -url /bar rxresp expect resp.body == "bar" expect resp.http.Vary == "Accept-Encoding" txreq -url /bar -hdr "Accept-Encoding: gzip" rxresp expect resp.bodylen == 26 expect resp.http.Vary == "Accept-Encoding" # /baz txreq -url /baz rxresp expect resp.body == "baz" expect resp.http.Vary == "Accept-Encoding" txreq -url /baz -hdr "Accept-Encoding: gzip" rxresp expect resp.bodylen == 23 expect resp.http.Vary == "Accept-Encoding" # /qux txreq -url /qux rxresp expect resp.body == "qux" expect resp.http.Vary == "qux, Accept-Encoding" txreq -url /qux -hdr "Accept-Encoding: gzip" rxresp expect resp.bodylen == 26 expect resp.http.Vary == "qux, Accept-Encoding" # /fubar txreq -url /fubar rxresp expect resp.body == "fubar" expect resp.http.Vary == "fubar, Accept-Encoding" txreq -url /fubar -hdr "Accept-Encoding: gzip" rxresp expect resp.bodylen == 28 expect resp.http.Vary == "fubar, Accept-Encoding" # /foobar txreq -url /foobar rxresp expect resp.body == "foobar" expect resp.http.Vary == txreq -url /foobar -hdr "Accept-Encoding: gzip" rxresp expect resp.body == "foobar" expect resp.http.Vary == # /foobaz txreq -url /foobaz rxresp expect resp.body == "foobaz" expect resp.http.Vary == "foobaz" txreq -url /foobaz -hdr "Accept-Encoding: gzip" rxresp expect resp.body == "foobaz" expect resp.http.Vary == "foobaz" # /fooqux txreq -url /fooqux rxresp expect resp.body == "fooqux" expect resp.http.Vary == "fooqux, Accept-Encoding" txreq -url /fooqux -hdr "Accept-Encoding: gzip" 
rxresp expect resp.body == "fooqux" expect resp.http.Vary == "fooqux, Accept-Encoding" } -run varnish-7.5.0/bin/varnishtest/tests/g00008.vtc000066400000000000000000000012161457605730600210320ustar00rootroot00000000000000varnishtest "Test uncommon GZIP header fields" # With a a handcrafted GZIP file with all optional header fields server s1 { rxreq txresp -hdr "content-encoding: gzip" -nolen sendhex "1f 8b" sendhex "08" sendhex "1f" sendhex "12 34 56 78" sendhex "00" sendhex "03" sendhex "08 00 50 48 04 00 4b 61 6d 70" sendhex "46 4e 41 4d 45 00" sendhex "46 43 4f 4d 4d 45 4e 54 00" sendhex "96 cc" sendhex "f3 48 cd c9 c9 57 70 8f 52 08 cf 2f ca 49 e1 02 00" sendhex "3a 0b 41 35" sendhex "0f 00 00 00" } -start varnish v1 -vcl+backend { } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.body == "Hello GZ World\n" } -run varnish-7.5.0/bin/varnishtest/tests/h00001.vtc000066400000000000000000000012331457605730600210230ustar00rootroot00000000000000varnishtest "Basic HAproxy test" feature ignore_unknown_macro feature cmd {haproxy --version 2>&1 | grep -q 'HA-*Proxy version'} server s1 { rxreq txresp -body "s1 >>> Hello world!" } -start haproxy h1 -conf { defaults mode http timeout connect 5s timeout server 30s timeout client 30s backend be1 server srv1 ${s1_sock} frontend http1 use_backend be1 bind "fd@${fe1}" } -start client c1 -connect ${h1_fe1_sock} { txreq -url "/" rxresp expect resp.status == 200 expect resp.body == "s1 >>> Hello world!" } -run haproxy h1 -cli { send "show info" expect ~ "Name: HAProxy" } -wait varnish-7.5.0/bin/varnishtest/tests/h00002.vtc000066400000000000000000000011361457605730600210260ustar00rootroot00000000000000varnishtest "Basic HAproxy test (daemon mode)" feature ignore_unknown_macro feature cmd {haproxy --version 2>&1 | grep -q 'HA-*Proxy version'} server s1 { rxreq txresp -body "s1 >>> Hello world!" } -start haproxy h1 -D -conf { defaults mode http timeout connect 5s timeout server 30s timeout client 30s backend be1 server srv1 ${s1_sock} frontend http1 use_backend be1 bind "fd@${fe1}" } -start client c1 -connect ${h1_fe1_sock} { txreq -url "/" rxresp expect resp.status == 200 expect resp.body == "s1 >>> Hello world!" } -run varnish-7.5.0/bin/varnishtest/tests/h00003.vtc000066400000000000000000000007661457605730600210370ustar00rootroot00000000000000varnishtest "Test -conf+backend" feature ignore_unknown_macro feature cmd {haproxy --version 2>&1 | grep -q 'HA-*Proxy version'} server s1 { rxreq txresp -body "s1 >>> Hello world!" } -start haproxy h1 -conf+backend { defaults mode http timeout connect 5s timeout server 30s timeout client 30s } -start client c1 -connect ${h1_fe1_sock} { txreq -url "/" rxresp expect resp.status == 200 expect resp.body == "s1 >>> Hello world!" 
} -run varnish-7.5.0/bin/varnishtest/tests/h00004.vtc000066400000000000000000000005411457605730600210270ustar00rootroot00000000000000varnishtest "Test -conf+backend" feature ignore_unknown_macro feature cmd {haproxy --version 2>&1 | grep -q 'HA-*Proxy version'} haproxy h1 -conf-OK { defaults mode http timeout connect 5s timeout server 30s timeout client 30s } haproxy h2 -conf-BAD {unknown keyword 'FOOBAR' in 'global' section} { FOOBAR } varnish-7.5.0/bin/varnishtest/tests/h00005.vtc000066400000000000000000000012771457605730600210370ustar00rootroot00000000000000varnishtest "Exercise varnishtest syslog facility" feature ignore_unknown_macro feature cmd {haproxy --version 2>&1 | grep -q 'HA-*Proxy version'} server s1 { rxreq txresp } -start syslog S1 -level notice -bind "${listen_addr}" { recv info expect ~ \"dip\":\"${h1_fe_1_addr}\" } -start haproxy h1 -conf { global log ${S1_sock} local0 defaults log global timeout connect 3000 timeout client 5 timeout server 10000 frontend fe1 bind "fd@${fe_1}" mode tcp log-format {\"dip\":\"%fi\",\"dport\":\"%fp\"} default_backend be1 backend be1 server srv1 ${s1_sock} } -start client c1 -connect ${h1_fe_1_sock} { txreq -url "/" delay 0.02 rxresp } -run syslog S1 -wait varnish-7.5.0/bin/varnishtest/tests/h00006.vtc000066400000000000000000000025001457605730600210260ustar00rootroot00000000000000varnishtest "haproxy tcp-mode, uds, send-proxy-v2, client ip and acl" # same as h00007.vtc, but using uds for haproxy->varnish feature ignore_unknown_macro feature cmd {haproxy --version 2>&1 | grep -q 'HA-*Proxy version'} server s1 { rxreq txresp -body "s1 >>> Hello world!" } -start haproxy h1 -D -conf { defaults mode tcp timeout connect 5s timeout server 30s timeout client 30s listen ssloff bind "fd@${fe1}" server v1 ${tmpdir}/v1.sock send-proxy-v2 } -start varnish v1 -arg "-a ${tmpdir}/v1.sock,PROXY" -vcl+backend { import std; acl localhost { "localhost"; "127.0.0.1"; "::1"; "${localhost}"; "${s1_addr}"; // Jails IPv4 address "${h1_fe1_addr}"; // Jails IPv6 address } sub vcl_deliver { set resp.http.cip = client.ip ~ localhost; set resp.http.stdip = std.ip("" + client.ip, resolve = false) ~ localhost; set resp.http.notcip = client.ip !~ localhost; set resp.http.notstdip = std.ip("" + client.ip, resolve = false) !~ localhost; } } -start client c1 -connect ${h1_fe1_sock} { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.cip == true expect resp.http.stdip == true expect resp.http.notcip == false expect resp.http.notstdip == false expect resp.body == "s1 >>> Hello world!" } -run varnish-7.5.0/bin/varnishtest/tests/h00007.vtc000066400000000000000000000024761457605730600210430ustar00rootroot00000000000000varnishtest "haproxy tcp-mode, tcp, send-proxy-v2, client ip and acl" # same as h00006.vtc, but using tcp for haproxy->varnish feature ignore_unknown_macro feature cmd {haproxy --version 2>&1 | grep -q 'HA-*Proxy version'} server s1 { rxreq txresp -body "s1 >>> Hello world!" 
} -start varnish v1 -proto "PROXY" -vcl+backend {} -start haproxy h1 -D -conf { defaults mode tcp timeout connect 5s timeout server 30s timeout client 30s listen ssloff bind "fd@${fe1}" server v1 ${v1_sock} send-proxy-v2 } -start varnish v1 -vcl+backend { import std; acl localhost { "localhost"; "127.0.0.1"; "::1"; "${localhost}"; "${s1_addr}"; # Jail IPv4 address "${h1_fe1_addr}"; # Jail IPv6 address } sub vcl_deliver { set resp.http.cip = client.ip ~ localhost; set resp.http.stdip = std.ip("" + client.ip, resolve = false) ~ localhost; set resp.http.notcip = client.ip !~ localhost; set resp.http.notstdip = std.ip("" + client.ip, resolve = false) !~ localhost; } } client c1 -connect ${h1_fe1_sock} { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.cip == true expect resp.http.stdip == true expect resp.http.notcip == false expect resp.http.notstdip == false expect resp.body == "s1 >>> Hello world!" } -run varnish-7.5.0/bin/varnishtest/tests/i00000.vtc000066400000000000000000000015751457605730600210340ustar00rootroot00000000000000varnishtest "SF-binary/BLOB parsing in VCL" varnish v1 -errvcl "BLOB must have n*4 base64 characters" { sub vcl_recv { :a: } } varnish v1 -errvcl "BLOB must have n*4 base64 characters" { sub vcl_recv { :bbbbaa: } } varnish v1 -errvcl "BLOB must have n*4 base64 characters" { sub vcl_recv { :bbbbccccaaa: } } varnish v1 -errvcl "Illegal BLOB character:" { sub vcl_recv { :aa?: } } varnish v1 -errvcl "Wrong padding ('=') in BLOB" { sub vcl_recv { :aaaa=aa: } } varnish v1 -errvcl "Wrong padding ('=') in BLOB" { sub vcl_recv { :aaaa==a: } } varnish v1 -errvcl "Wrong padding ('=') in BLOB" { sub vcl_recv { :aaaaa=a: } } varnish v1 -errvcl "Missing colon at end of BLOB" " sub vcl_recv { :aaaa" varnish v1 -errvcl "Missing colon at end of BLOB" " sub vcl_recv { :" # hint: the 'B' leaves bits in the second output byte varnish v1 -errvcl "Illegal BLOB character:" { sub vcl_recv { :AB==: } } varnish-7.5.0/bin/varnishtest/tests/i00001.vtc000066400000000000000000000025201457605730600210240ustar00rootroot00000000000000varnishtest "SF-decimal/SF-integer ranges" varnish v1 -errvcl {Too many digits for integer.} { sub vcl_recv { set req.http.foo = 1234567890123456; } } varnish v1 -errvcl {Too many digits for real.} { sub vcl_recv { set req.http.foo = 1234567890123.; } } varnish v1 -errvcl {Too many digits for real.} { sub vcl_recv { set req.http.foo = 123456789012.1234; } } varnish v1 -errvcl {Too many digits for real.} { sub vcl_recv { set req.http.foo = 0.1234; } } varnish v1 -errvcl {Unexpected character 'e'.} { sub vcl_recv { set req.http.foo = 42.42e42; } } server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_recv { set req.http.foo1 = 123456789012345; set req.http.foo2 = 123456789012.; set req.http.foo3 = 123456789012.123; } sub vcl_deliver { if (req.http.foo) { set resp.http.foo = obj.ttl * 10000000000; } if (req.http.bar) { # unlimited malloc stevedore returns VCL_INT_MAX - epsilon set resp.http.bar = storage.Transient.free_space * 10; } } } -start logexpect l1 -v v1 -g raw -q VCL_Error { expect ? 1001 VCL_Error "REAL overflow converting to string.*" expect ? 
1004 VCL_Error "INT overflow converting to string.*" } -start client c1 { txreq -hdr "foo: 1" rxresp expect resp.status == 503 } -run client c1 { txreq -hdr "bar: 1" rxresp expect resp.status == 503 } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/j00000.vtc000066400000000000000000000004361457605730600210300ustar00rootroot00000000000000varnishtest "Code coverage basic UNIX jail" feature user_varnish feature group_varnish feature root server s1 { rxreq txresp } -start varnish v1 \ -jail "-junix,user=varnish,ccgroup=varnish" \ -vcl+backend { } -start client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/j00001.vtc000066400000000000000000000006021457605730600210240ustar00rootroot00000000000000varnishtest "Run worker with different uid in UNIX jail" # The "vrun" user must have login group "varnish" feature user_varnish feature user_vcache feature group_varnish feature root server s1 { rxreq txresp } -start varnish v1 \ -jail "-junix,user=varnish,ccgroup=varnish,workuser=vcache" \ -vcl+backend { } -start client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/j00003.vtc000066400000000000000000000007071457605730600210340ustar00rootroot00000000000000varnishtest "-junix bad subarg handling" feature root shell -err -expect "unknown sub-argument" "varnishd -junix,bla=foo -f ''" shell -err -expect "user not found" "varnishd -junix,user=/// -f ''" shell -err -expect "user not found" "varnishd -junix,workuser=/// -f ''" shell -err -expect "group not found" "varnishd -junix,ccgroup=/// -f ''" feature user_varnish shell -err -expect "have different login groups" "varnishd -junix,workuser=root -f ''" varnish-7.5.0/bin/varnishtest/tests/j00004.vtc000066400000000000000000000014731457605730600210360ustar00rootroot00000000000000varnishtest "Listen at a Unix domain socket while in jail" feature user_varnish feature group_varnish feature root server s1 { rxreq txresp } -start varnish v1 -arg "-a ${tmpdir}/v1.sock" \ -jail "-junix,user=varnish,ccgroup=varnish" \ -vcl+backend { } -start # Socket is created as management owner before the child goes to jail shell -match "root" { ls -l ${tmpdir}/v1.sock } client c1 -connect "${tmpdir}/v1.sock" { txreq rxresp expect resp.status == 200 } -run server s1 { rxreq txresp } -start varnish v2 -arg "-a ${tmpdir}/v2.sock,user=varnish,group=varnish,mode=666" \ -jail "-junix,user=varnish,ccgroup=varnish" \ -vcl+backend { } -start shell -match "rw-rw-rw-.+varnish.+varnish" { ls -l ${tmpdir}/v2.sock } client c1 -connect "${tmpdir}/v2.sock" { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/l00000.vtc000066400000000000000000000015711457605730600210330ustar00rootroot00000000000000varnishtest "test logexpect" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { } -start logexpect l1 -v v1 -g session { expect 0 1000 Begin sess expect 0 = SessOpen expect * = Link "req 1001" expect 0 = SessClose expect 0 = End expect 0 * Begin "req 1000" expect * = ReqStart expect 0 = ReqMethod GET expect 0 = ReqURL / expect 0 = ReqProtocol HTTP/1.1 expect * = ReqHeader "Foo: bar" expect * = Link bereq expect * = End expect 0 1002 Begin "bereq 1001" expect * = End } -start # Check with a query (this selects only the backend request) logexpect l2 -v v1 -g vxid -q "Begin ~ 'bereq 1001'" { expect 0 1002 Begin expect * = End } -start client c1 { txreq -hdr "Foo: bar" rxresp expect resp.status == 200 } -run logexpect l1 -wait logexpect l2 -wait # Check -d arg logexpect l1 -d 1 { 
expect 0 1000 Begin sess expect * = SessClose } -run varnish-7.5.0/bin/varnishtest/tests/l00001.vtc000066400000000000000000000077631457605730600210450ustar00rootroot00000000000000varnishtest "Test VSL query operators" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_deliver { set resp.http.x-test = "123 321"; } } -start logexpect l1 -v v1 client c1 { txreq -hdr "Foo: bar" rxresp expect resp.status == 200 } -run # Test 'eq' operator logexpect l1 -d 1 -g vxid -q "Begin eq 'req 1000 rxreq'" { expect 0 * Begin req expect * = End } -run # Test 'ne' operator logexpect l1 -d 1 -g vxid -q "ReqProtocol ne 'HTTP/1.0'" { expect 0 * Begin req expect * = End } -run # Test '==' operator on integers logexpect l1 -d 1 -g vxid -q "RespStatus == 200" { expect 0 * Begin req expect * = End } -run # Test '==' operator on floats logexpect l1 -d 1 -g vxid -q "RespStatus == 200." { expect 0 * Begin req expect * = End } -run # Test '!=' operator on integers logexpect l1 -d 1 -g vxid -q "RespStatus != 503" { expect 0 * Begin req expect * = End } -run # Test '!=' operator on floats logexpect l1 -d 1 -g vxid -q "RespStatus != 503." { expect 0 * Begin req expect * = End } -run # Test '<' operator on integers logexpect l1 -d 1 -g vxid -q "RespStatus < 201" { expect 0 * Begin req expect * = End } -run # Test '<' operator on floats logexpect l1 -d 1 -g vxid -q "RespStatus < 201." { expect 0 * Begin req expect * = End } -run # Test '>' operator on integers logexpect l1 -d 1 -g vxid -q "RespStatus > 199" { expect 0 * Begin req expect * = End } -run # Test '>' operator on floats logexpect l1 -d 1 -g vxid -q "RespStatus > 199." { expect 0 * Begin req expect * = End } -run # Test '<=' operator on integers logexpect l1 -d 1 -g vxid -q "RespStatus <= 200" { expect 0 * Begin req expect * = End } -run # Test '<=' operator on floats logexpect l1 -d 1 -g vxid -q "RespStatus <= 200." { expect 0 * Begin req expect * = End } -run # Test '>=' operator on integers logexpect l1 -d 1 -g vxid -q "RespStatus >= 200" { expect 0 * Begin req expect * = End } -run # Test '>=' operator on floats logexpect l1 -d 1 -g vxid -q "RespStatus >= 200." { expect 0 * Begin req expect * = End } -run # Test '~' operator logexpect l1 -d 1 -g vxid -q "RespStatus ~ '^200$'" { expect 0 * Begin req expect * = End } -run # Test '!~' operator logexpect l1 -d 1 -g vxid -q "RespStatus !~ '^404$'" { expect 0 * Begin req expect * = End } -run # Test boolean and logexpect l1 -d 1 -g vxid -q "RespStatus == 200 and RespStatus ~ '^200$'" { expect 0 * Begin req expect * = End } -run # Test boolean or logexpect l1 -d 1 -g vxid -q "RespStatus == 404 or RespStatus ~ '^200$'" { expect 0 * Begin req expect * = End } -run # Test boolean not logexpect l1 -d 1 -g vxid -q "RespStatus == 404 or not RespStatus ~ '^404$'" { expect 0 * Begin req expect * = End } -run # Test grouping logexpect l1 -d 1 -g vxid -q "(RespStatus == 200 or RespStatus == 404) and RespStatus == 200" { expect 0 * Begin req expect * = End } -run # Test and/or precedence logexpect l1 -d 1 -g vxid -q "RespStatus == 200 or RespStatus == 503 and RespStatus == 404" { expect 0 * Begin req expect * = End } -run # Test field logexpect l1 -d 1 -g vxid -q "RespHeader[2] == 123" { expect 0 * Begin req expect * = End } -run # Test field on floats logexpect l1 -d 1 -g vxid -q "RespHeader[2] == 123." 
{ expect 0 * Begin req expect * = End } -run # Test taglists logexpect l1 -d 1 -g vxid -q "Debug,Resp* == 200" { expect 0 * Begin req expect * = End } -run # Test record prefix logexpect l1 -d 1 -g vxid -q "Resp*:x-test eq '123 321'" { expect 0 * Begin req expect * = End } -run # Test tag presence (no operator) logexpect l1 -d 1 -g vxid -q "RespStatus" { expect 0 * Begin req expect * = End } -run # Test level limits equal logexpect l1 -d 1 -g vxid -q "{1}Begin ~ req" { expect 0 * Begin req expect * = End } -run # Test level limits less than or equal logexpect l1 -d 1 -g vxid -q "{2-}Begin ~ req" { expect 0 * Begin req expect * = End } -run # Test level limits greater than or equal logexpect l1 -d 1 -g vxid -q "{0+}Begin ~ req" { expect 0 * Begin req expect * = End } -run varnish-7.5.0/bin/varnishtest/tests/l00002.vtc000066400000000000000000000054741457605730600210430ustar00rootroot00000000000000varnishtest "Test request byte counters" server s1 { rxreq expect req.url == "/1" txresp -hdr "Accept-ranges: bytes" -bodylen 1000 rxreq expect req.url == "/2" txresp -hdr "Accept-ranges: bytes" -bodylen 2000 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } sub vcl_deliver { unset resp.http.date; unset resp.http.age; unset resp.http.via; unset resp.http.x-varnish; } } -start # Request (1001): # POST /1 HTTP/1.1\r\n 18 bytes # Host: foo\r\n 11 bytes # Content-Length: 4\r\n 19 bytes # User-Agent: c1\r\n 16 bytes # \r\n 2 bytes # Total: 66 bytes # Response: # HTTP/1.1 200 OK\r\n 17 bytes # Content-Length: 1000\r\n 22 bytes # Connection: keep-alive\r\n 24 bytes # Accept-Ranges: bytes\r\n 22 bytes # Server: s1\r\n 12 bytes # \r\n 2 bytes # Total: 99 bytes # Request (1003): # GET /2 HTTP/1.1\r\n 17 bytes # Host: foo\r\n 11 bytes # \r\n 2 bytes # Total: 30 bytes # Response: # HTTP/1.1 200 OK\r\n 17 bytes # Content-Length: 2000\r\n 22 bytes # Connection: keep-alive\r\n 24 bytes # Accept-Ranges: bytes\r\n 22 bytes # Server: s1\r\n 12 bytes # \r\n 2 bytes # Total: 99 bytes # Request (1005): # GET /2 HTTP/1.1\r\n 17 bytes # Host: foo\r\n 11 bytes # \r\n 2 bytes # Total: 30 bytes # Response: # HTTP/1.1 200 OK\r\n 17 bytes # Content-Length: 2000\r\n 22 bytes # Connection: keep-alive\r\n 24 bytes # Accept-Ranges: bytes\r\n 22 bytes # Server: s1\r\n 12 bytes # \r\n 2 bytes # Total: 99 bytes # Request (1006): # GET\r\n 5 bytes # \r\n 2 bytes # Total: 7 bytes # Response: # HTTP/1.1 400 Bad Request\r\n 26 bytes # \r\n 2 bytes # Total: 28 bytes logexpect l1 -v v1 -g session { expect * 1001 Begin "^req .* rxreq" expect * = ReqAcct "^66 4 70 99 1000 1099$" expect 0 = End expect * 1003 Begin "^req .* rxreq" expect * = ReqAcct "^30 0 30 99 2000 2099$" expect 0 = End expect * 1005 Begin "^req .* rxreq" expect * = ReqAcct "^30 0 30 99 2000 2099$" expect 0 = End expect * 1006 Begin "^req .* rxreq" expect * = ReqAcct "^7 0 7 28 0 28$" expect 0 = End } -start # Request 1001 client c1 { txreq -method POST -url "/1" -hdr "Host: foo" -body "asdf" rxresp expect resp.http.accept-ranges == "bytes" expect resp.status == 200 send "GET /2 HTTP/1.1\r\nHost: foo\r\n\r\nGET /2 HTTP/1.1\r\nHost: foo\r\n\r\n" rxresp expect resp.http.accept-ranges == "bytes" expect resp.status == 200 rxresp expect resp.http.accept-ranges == "bytes" expect resp.status == 200 send "GET\r\n\r\n" rxresp expect resp.http.accept-ranges == "resp.http.accept-ranges" expect resp.status == 400 } -run logexpect l1 -wait varnish v1 -expect s_req_hdrbytes == 133 varnish v1 -expect s_req_bodybytes == 4 varnish v1 -expect 
s_resp_hdrbytes == 325 varnish v1 -expect s_resp_bodybytes == 5000 varnish-7.5.0/bin/varnishtest/tests/l00003.vtc000066400000000000000000000041361457605730600210360ustar00rootroot00000000000000varnishtest "Test request byte counters with ESI" server s1 { rxreq expect req.url == "/" txresp -body {ghi} rxreq expect req.url == "/1" txresp -body {abcdef} rxreq expect req.url == "/2" txresp -body {123} } -start # give enough stack to 32bit systems varnish v1 -cliok "param.set thread_pool_stack 80k" varnish v1 -vcl+backend { sub vcl_backend_response { if (bereq.url != "/2") { set beresp.do_esi = true; } } sub vcl_deliver { unset resp.http.date; unset resp.http.age; unset resp.http.via; unset resp.http.x-varnish; } } -start # Request (1001): # GET / HTTP/1.1\r\n 16 bytes # Host: foo\r\n 11 bytes # User-Agent: c1\r\n 16 bytes # \r\n 2 bytes # Total: 45 bytes # Reponse: # HTTP/1.1 200 OK\r\n 17 bytes # Accept-Ranges: bytes\r\n 22 bytes # Transfer-Encoding: chunked\r\n 28 bytes # Connection: keep-alive\r\n 24 bytes # Server: s1\r\n 12 bytes # \r\n 2 bytes # Total: 105 bytes # Response body: # Chunk len - bytes # 123 3 bytes # Chunk end - bytes # Chunk len - bytes # abc 3 bytes # Chunk end - bytes # Chunk len - bytes # 123 3 bytes # Chunk end - bytes # Chunk len - bytes # def 3 bytes # Chunk end - bytes # Chunk len - bytes # ghi 3 bytes # Chunk end - bytes # Chunked end - bytes # Total: 15 bytes logexpect l1 -v v1 -g request { expect 0 1001 Begin "^req .* rxreq" expect * = ReqAcct "^45 0 45 105 15 120$" expect 0 = End expect * 1003 Begin "^req .* esi" expect * = ReqAcct "^0 0 0 0 12 12$" expect 0 = End expect * 1005 Begin "^req .* esi" expect * = ReqAcct "^0 0 0 0 3 3$" expect 0 = End expect * 1007 Begin "^req .* esi" expect * = ReqAcct "^0 0 0 0 3 3$" expect 0 = End } -start client c1 { txreq -url "/" -hdr "Host: foo" rxresp expect resp.status == 200 expect resp.body == "123abc123defghi" } -run logexpect l1 -wait varnish v1 -expect s_req_hdrbytes == 45 varnish v1 -expect s_req_bodybytes == 0 varnish v1 -expect s_resp_hdrbytes == 105 varnish v1 -expect s_resp_bodybytes == 15 varnish-7.5.0/bin/varnishtest/tests/l00004.vtc000066400000000000000000000026131457605730600210350ustar00rootroot00000000000000varnishtest "Test request byte counters on pipe" server s1 { rxreq expect req.url == "/" expect req.http.test == "yes" txresp -nodate -body "fdsa" } -start varnish v1 -vcl+backend { sub vcl_recv { # make PipeAcct deterministic unset req.http.via; return (pipe); } sub vcl_pipe { set bereq.http.test = "yes"; unset bereq.http.x-forwarded-for; unset bereq.http.x-varnish; unset bereq.http.connection; } } -start # req: # POST / HTTP/1.1\r\n 17 bytes # Host: foo\r\n 11 bytes # Content-Length: 4\r\n 19 bytes # User-Agent: c1\r\n 16 bytes # \r\n 2 bytes # Total: 65 bytes # bereq: # POST / HTTP/1.1\r\n 17 bytes # Content-Length: 4\r\n 19 bytes # Host: foo\r\n 11 bytes # User-Agent: c1\r\n 16 bytes # test: yes\r\n 11 bytes # \r\n 2 bytes # Total: 76 bytes # reqbody # asdf 4 bytes # resp: # HTTP/1.1 200 OK\r\n 17 bytes # Content-Length: 4\r\n 19 bytes # Server: s1\r\n 12 bytes # \r\n 2 bytes # fdsa 4 bytes # Total: 54 bytes logexpect l1 -v v1 -g request { expect 0 1001 Begin "^req .* rxreq" expect * = PipeAcct "^65 76 4 54$" expect 0 = End } -start client c1 { txreq -req "POST" -url "/" -hdr "Content-Length: 4" -hdr "Host: foo" send "asdf" rxresp expect resp.status == 200 } -run logexpect l1 -wait varnish v1 -expect s_pipe_hdrbytes == 65 varnish v1 -expect s_pipe_in == 4 varnish v1 -expect s_pipe_out == 54 
varnish-7.5.0/bin/varnishtest/tests/l00005.vtc000066400000000000000000000032021457605730600210310ustar00rootroot00000000000000varnishtest "Test backend fetch byte counters" server s1 { rxreq expect req.url == "/1" txresp -nodate -bodylen 1000 rxreq expect req.url == "/2" send "HTTP/1.1\r\n\r\n" } -start varnish v1 -vcl+backend { sub vcl_recv { # make BereqAcct deterministic unset req.http.via; } sub vcl_backend_fetch { unset bereq.http.x-forwarded-for; unset bereq.http.x-varnish; set bereq.http.Host = "foo.bar"; } sub vcl_backend_response { set beresp.do_stream = false; } } -start # Request (1002): # POST /1 HTTP/1.1\r\n 18 bytes # Content-Length: 4\r\n 19 bytes # User-Agent: c1\r\n 16 bytes # Host: foo.bar\r\n 15 bytes # \r\n 2 bytes # Total: 70 bytes # Response: # HTTP/1.1 200 OK\r\n 17 bytes # Content-Length: 1000\r\n 22 bytes # Server: s1\r\n 12 bytes # \r\n 2 bytes # Total: 53 bytes # Request (1004): # POST /2 HTTP/1.1\r\n 18 bytes # Content-Length: 4\r\n 19 bytes # User-Agent: c1\r\n 16 bytes # Host: foo.bar\r\n 15 bytes # \r\n 2 bytes # Total: 70 bytes # Reponse: # HTTP/1.1\r\n 10 bytes # \r\n 2 bytes # Total: 12 bytes logexpect l1 -v v1 -g session { expect * 1001 Begin "^req .* rxreq" expect * = End expect 0 1003 Begin "^req .* rxreq" expect * = End expect 0 1002 Begin "^bereq " expect * = BereqAcct "^70 4 74 53 1000 1053$" expect 0 = End expect 0 1004 Begin "^bereq" expect * = BereqAcct "^70 4 74 12 0 12$" expect * = End } -start # Request 1001 client c1 { txreq -req "POST" -url "/1" -body "asdf" rxresp expect resp.status == 200 txreq -req "POST" -url "/2" -body "asdf" rxresp expect resp.status == 503 } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/l00006.vtc000066400000000000000000000004271457605730600210400ustar00rootroot00000000000000varnishtest "Check shmlog stats" server s1 { rxreq txresp } -start varnish v1 -vcl+backend "" -start client c1 { txreq rxresp } -run varnish v1 -vsl_catchup varnish v1 -expect shm_writes > 0 varnish v1 -expect shm_records > 0 varnish v1 -expect shm_bytes > 0 varnish-7.5.0/bin/varnishtest/tests/l00007.vtc000066400000000000000000000067501457605730600210460ustar00rootroot00000000000000varnishtest "Begin[4] and VSL_OPT_E" varnish v1 -vcl { import vtc; backend be none; sub vcl_init { # make up a replay with -i Begin,Link,End vtc.vsl_replay({" **** v1 vsl| 1000 Begin c sess 0 HTTP/1 **** v1 vsl| 1000 Link c req 1001 rxreq **** v1 vsl| 1003 Begin b bereq 1002 fetch **** v1 vsl| 1003 End b **** v1 vsl| 1002 Begin c req 1001 vmod_foo:subreq 1 **** v1 vsl| 1002 End c **** v1 vsl| 1001 Begin c req 1000 rxreq **** v1 vsl| 1001 Link c req 1002 vmod_foo:subreq 1 **** v1 vsl| 1001 End c **** v1 vsl| 1000 End c **** v1 vsl| 1004 Begin c sess 0 HTTP/1 **** v1 vsl| 1004 End c "}); } } -start varnish v1 -vsl_catchup logexpect l1 -v v1 -d 1 { expect 0 1003 Begin "bereq 1002 fetch" expect 0 = End expect 0 1001 Begin "req 1000 rxreq" expect 0 = Link "req 1002 vmod_foo:subreq" expect 0 = End expect 0 1000 Begin "sess 0 HTTP/1" expect 0 = Link "req 1001 rxreq" expect 0 = End expect 0 1004 Begin "sess 0 HTTP/1" expect 0 = End } -run logexpect l1_bis -v v1 -d 1 -b 1 -c 1 { expect 0 1003 Begin "bereq 1002 fetch" expect 0 = End expect 0 1001 Begin "req 1000 rxreq" expect 0 = Link "req 1002 vmod_foo:subreq" expect 0 = End expect 0 1000 Begin "sess 0 HTTP/1" expect 0 = Link "req 1001 rxreq" expect 0 = End expect 0 1004 Begin "sess 0 HTTP/1" expect 0 = End } -run logexpect l2 -v v1 -d 1 -b 1 { expect 0 1003 Begin "bereq 1002 fetch" expect 0 = End } -run logexpect 
l3 -v v1 -d 1 -c 1 { expect 0 1001 Begin "req 1000 rxreq" expect 0 = Link "req 1002 vmod_foo:subreq" expect 0 = End expect 0 1000 Begin "sess 0 HTTP/1" expect 0 = Link "req 1001 rxreq" expect 0 = End expect 0 1004 Begin "sess 0 HTTP/1" expect 0 = End } -run logexpect l4 -v v1 -d 1 -E 1 { expect 0 1002 Begin "req 1001 vmod_foo:subreq" expect 0 = End expect 0 1001 Begin "req 1000 rxreq" expect 0 = Link "req 1002 vmod_foo:subreq" expect 0 = End expect 0 1000 Begin "sess 0 HTTP/1" expect 0 = Link "req 1001 rxreq" expect 0 = End expect 0 1004 Begin "sess 0 HTTP/1" expect 0 = End } -run logexpect l4_bis -v v1 -d 1 -c 1 -E 1 { expect 0 1002 Begin "req 1001 vmod_foo:subreq" expect 0 = End expect 0 1001 Begin "req 1000 rxreq" expect 0 = Link "req 1002 vmod_foo:subreq" expect 0 = End expect 0 1000 Begin "sess 0 HTTP/1" expect 0 = Link "req 1001 rxreq" expect 0 = End expect 0 1004 Begin "sess 0 HTTP/1" expect 0 = End } -run logexpect l5 -v v1 -d 1 -b 1 -E 1 { expect 0 1003 Begin "bereq 1002 fetch" expect 0 = End expect 0 1002 Begin "req 1001 vmod_foo:subreq" expect 0 = End expect 0 1001 Begin "req 1000 rxreq" expect 0 = Link "req 1002 vmod_foo:subreq" expect 0 = End expect 0 1000 Begin "sess 0 HTTP/1" expect 0 = Link "req 1001 rxreq" expect 0 = End expect 0 1004 Begin "sess 0 HTTP/1" expect 0 = End } -run logexpect l5_bis -v v1 -d 1 -b 1 -c 1 -E 1 { expect 0 1003 Begin "bereq 1002 fetch" expect 0 = End expect 0 1002 Begin "req 1001 vmod_foo:subreq" expect 0 = End expect 0 1001 Begin "req 1000 rxreq" expect 0 = Link "req 1002 vmod_foo:subreq" expect 0 = End expect 0 1000 Begin "sess 0 HTTP/1" expect 0 = Link "req 1001 rxreq" expect 0 = End expect 0 1004 Begin "sess 0 HTTP/1" expect 0 = End } -run varnish-7.5.0/bin/varnishtest/tests/m00000.vtc000066400000000000000000000154211457605730600210330ustar00rootroot00000000000000varnishtest "Test vmod usage and test vmods (debug & vtc)" server s1 { rxreq expect req.http.encrypted == "ROT52" txresp -hdr "foo: bAr" -hdr "bar: fOo" -bodylen 4 } -start varnish v1 -vcl+backend { import std; import debug; import vtc; import debug; // again import debug as dbg; import debug as dbg; // again sub vcl_init { new objx = dbg.obj(); objx.obj(); dbg.vsc_new(); std.log("inside vcl_init"); } sub vcl_synth { set req.http.overwrite = "the workspace " + "to ensure we notice any unfinished privs"; } sub priv_task { debug.test_priv_task_get(); debug.test_priv_task("foo"); } sub vcl_recv { if (req.url == "/priv-task-no-mem") { vtc.workspace_alloc(client, -4); call priv_task; return (fail); } if (req.url == "/fail") { debug.test_priv_task("foo"); return (fail); } debug.rot52(req); debug.vsc_count(); } sub vcl_deliver { set resp.http.who = debug.author(mithrandir); set resp.http.really = debug.author(); set resp.http.when = objx.date(); set resp.http.what = vtc.typesize("dfijlopsz"); set resp.http.not = vtc.typesize("*"); debug.test_priv_call(); debug.test_priv_vcl(); objx.test_priv_call(); objx.test_priv_vcl(); std.syslog(8 + 7, "Somebody runs varnishtest"); debug.rot52(resp); } } -start varnish v1 -expect DEBUG.count == 0 client c1 { txreq -url "/bar" rxresp expect resp.status == 200 expect resp.bodylen == "4" expect resp.http.who == "Tollef" expect resp.http.really == "Poul-Henning" expect resp.http.when == "Thu, 01 Jan 1970 00:00:21 GMT" expect resp.http.encrypted == "ROT52" expect resp.http.what >= 16 expect resp.http.not == -1 txreq -url "/fail" rxresp expect resp.status == 503 } -run varnish v1 -vsl_catchup client c1 { txreq -url "/priv-task-no-mem" rxresp expect resp.status 
== 503 } -run varnish v1 -vsl_catchup varnish v1 -expect DEBUG.count == 1 logexpect l1 -v v1 -g raw -d 1 { expect * 0 CLI {^Rd vcl.load} expect 0 0 VCL_Log {^inside vcl_init} expect 0 0 Debug {VCL_EVENT_WARM} expect * 1001 VCL_call {^DELIVER} expect 0 = RespHeader {^who: Tollef} expect 0 = RespHeader {^really: Poul-Henning} expect 0 = RespHeader {^when: Thu, 01 Jan 1970 00:00:21 GMT} expect 0 = RespHeader {^what: [1-9][0-9]} expect 0 = RespHeader {^not: -1} expect 0 = RespHeader {^Encrypted: ROT52} expect 0 = VCL_return {^deliver} } -run varnish v1 -errvcl {Wrong enum value. Expected one of:} { import debug; sub vcl_deliver { set resp.http.who = debug.author(ENUM); } } varnish v1 -errvcl {Wrong enum value. Expected one of:} { import debug; sub vcl_deliver { set resp.http.who = debug.author(slink); } } varnish v1 -errvcl {Expression has type STRING, expected REAL} { import std; sub vcl_deliver { set resp.http.who = std.random("foo", "bar"); } } varnish v1 -errvcl {Symbol not found: 'obj'} { import debug; sub vcl_hit { debug.rot52(obj); } } varnish v1 -errvcl {Symbol not found: 'obj'} { import debug; sub vcl_deliver { debug.rot52(obj); } } varnish v1 -errvcl {rot13: VFP already registered (per-vcl)} { import debug; backend none none; sub vcl_init { debug.rot104(); } } varnish v1 -errvcl {Failed from VCL} { import debug; import directors; backend default { .host = "${localhost}"; } sub vcl_init { return (fail); # uninitialized objects coverage new xyz = debug.obj(); new fb = directors.fallback(); new hsh = directors.hash(); new rnd = directors.random(); new rr = directors.round_robin(); new shd = directors.shard(); new shp = directors.shard_param(); } } varnish v1 -errvcl {Failed initialization} { import debug; backend none none; sub vcl_init { new fails = debug.obj("fail"); } } shell { cat >${tmpdir}/f1 <<-EOF vcl 4.1; import debug; backend none none; sub vcl_init { new fails = debug.obj("fail"); } EOF } varnish v1 -clierr 300 "vcl.load f1 ${tmpdir}/f1" shell -exit 1 -expect {failing as requested} { varnishadm -t 10 -n ${tmpdir}/v1 vcl.load f1 ${tmpdir}/f1 } varnish v1 -cliok "param.set vcc_feature +allow_inline_c" varnish v1 -vcl { import vtc; backend be none; C{ struct zc { size_t z; char c; }; struct ic { int i; char c; }; struct sc { short s; char c; }; struct cc { char c1; char c2; }; struct czc { char c1; size_t z; char c2; }; struct cic { char c1; int i; char c2; }; struct csc { char c1; short s; char c2; }; struct uzp { /* same as vrt_blob */ unsigned u; size_t z; void *p; }; #define SETHDR(type) \ VRT_SetHdr( \ ctx, \ &VGC_HDR_RESP_ ## type ## _2d_sizeof, \ VRT_INT_string(ctx, sizeof(struct type)), \ 0 \ ) }C sub vcl_recv { return (synth(200)); } sub vcl_synth { C{ SETHDR(zc); }C set resp.http.zc-typesize = vtc.typesize("zc"); set resp.http.zc-match = (resp.http.zc-sizeof == resp.http.zc-typesize); C{ SETHDR(ic); }C set resp.http.ic-typesize = vtc.typesize("ic"); set resp.http.ic-match = (resp.http.ic-sizeof == resp.http.ic-typesize); C{ SETHDR(sc); }C set resp.http.sc-typesize = vtc.typesize("sc"); set resp.http.sc-match = (resp.http.sc-sizeof == resp.http.sc-typesize); C{ SETHDR(cc); }C set resp.http.cc-typesize = vtc.typesize("cc"); set resp.http.cc-match = (resp.http.cc-sizeof == resp.http.cc-typesize); C{ SETHDR(czc); }C set resp.http.czc-typesize = vtc.typesize("czc"); set resp.http.czc-match = (resp.http.czc-sizeof == resp.http.czc-typesize); C{ SETHDR(cic); }C set resp.http.cic-typesize = vtc.typesize("cic"); set resp.http.cic-match = (resp.http.cic-sizeof == 
resp.http.cic-typesize); C{ SETHDR(csc); }C set resp.http.csc-typesize = vtc.typesize("csc"); set resp.http.csc-match = (resp.http.csc-sizeof == resp.http.csc-typesize); C{ SETHDR(uzp); }C set resp.http.uzp-typesize = vtc.typesize("uzp"); set resp.http.uzp-match = (resp.http.uzp-sizeof == resp.http.uzp-typesize); } } client c1 { txreq rxresp expect resp.http.zc-match == true expect resp.http.ic-match == true expect resp.http.sc-match == true expect resp.http.cc-match == true expect resp.http.czc-match == true expect resp.http.cic-match == true expect resp.http.csc-match == true expect resp.http.uzp-match == true } -run varnish v1 -vsl_catchup varnish v1 -vcl+backend { } varnish v1 -cliok "debug.vmod" varnish v1 -cliok "vcl.list" varnish v1 -expect vmods == 3 varnish v1 -cliok "vcl.discard vcl[1-9] vcl10" varnish v1 -cliok "vcl.list" varnish v1 -cliok "debug.vmod" delay .5 varnish v1 -expect vmods == 0 varnish v1 -errvcl {Symbol 'std' type (vmod) can not be used in expression.} { import std; sub vcl_recv { if (std == 2) { } } } varnish v1 -cliok "debug.vmod" varnish v1 -cliok "vcl.list" varnish v1 -expect vmods == 0 varnish-7.5.0/bin/varnishtest/tests/m00003.vtc000066400000000000000000000120331457605730600210320ustar00rootroot00000000000000varnishtest "Test vmod_path param" feature topbuild server s1 { rxreq txresp } -start shell { echo "vcl 4.1; import std; backend dummy None;" >${tmpdir}/test.vcl varnishd -pvmod_path=${topbuild}/vmod/.libs \ -C -f ${tmpdir}/test.vcl 2>/dev/null } varnish v1 -arg "-pvmod_path=${topbuild}/vmod/.libs/" -vcl+backend { import std; } -start varnish v1 -cliok "param.set vmod_path /nonexistent" varnish v1 -errvcl {Could not find VMOD std} { import std; } varnish v1 -cliok "param.set vmod_path ${topbuild}/vmod/.libs/" varnish v1 -vcl+backend { import std; } varnish v1 -cliok "param.set vmod_path ${tmpdir}" shell { cp ${topbuild}/vmod/.libs/libvmod_debug.so \ ${tmpdir}/libvmod_wrong.so } varnish v1 -errvcl {Wrong file for VMOD wrong} { import wrong; } filewrite ${tmpdir}/libvmod_wrong.so "" varnish v1 -errvcl {Error: No VMOD JSON found} { import wrong; } shell "chmod 000 ${tmpdir}/libvmod_wrong.so" varnish v1 -errvcl {Could not open VMOD wrong} { import wrong; } shell "rm -f ${tmpdir}/libvmod_wrong.so" filewrite ${tmpdir}/libvmod_wrong.so "BLA" "VMOD_JSON_SPEC\x02" "BLA" varnish v1 -errvcl {Truncated VMOD JSON} { import wrong; } filewrite ${tmpdir}/libvmod_wrong.so "VMOD_JSON_SPEC\x02" "BLA" "\x03" varnish v1 -errvcl {VMOD wrong: bad metadata} { import wrong; } filewrite ${tmpdir}/libvmod_wrong.so "VMOD_JSON_SPEC\x02" "0" "\x03" varnish v1 -errvcl {Not array[0]} { import wrong; } filewrite ${tmpdir}/libvmod_wrong.so "VMOD_JSON_SPEC\x02" "[0]" "\x03" varnish v1 -errvcl {Not array[1]} { import wrong; } filewrite ${tmpdir}/libvmod_wrong.so "VMOD_JSON_SPEC\x02" "[[0]]" "\x03" varnish v1 -errvcl {Not string[2]} { import wrong; } filewrite ${tmpdir}/libvmod_wrong.so "VMOD_JSON_SPEC\x02" "[[\"$VBLA\"]]" "\x03" varnish v1 -errvcl {Not $VMOD[3]} { import wrong; } filewrite ${tmpdir}/libvmod_wrong.so "VMOD_JSON_SPEC\x02" filewrite -a ${tmpdir}/libvmod_wrong.so { [ [ "$VMOD", "1.0", "wrong", "Vmod_vmod_wrong_Func", "0000000000000000000000000000000000000000000000000000000000000000", "0000000000000000000000000000000000000000000000000000000000000000", "0", "0U" ] ] } filewrite -a ${tmpdir}/libvmod_wrong.so "\x03" varnish v1 -errvcl {Incompatible VMOD wrong} { import wrong; } filewrite ${tmpdir}/libvmod_wrong.so "VMOD_JSON_SPEC\x02" filewrite -a ${tmpdir}/libvmod_wrong.so { [ [ 
"$VMOD", "1.0", "wrong", "Vmod_vmod_wrong_Func", "0000000000000000000000000000000000000000000000000000000000000000", "0000000000000000000000000000000000000000000000000000000000000000", "1U", "0" ] ] } filewrite -a ${tmpdir}/libvmod_wrong.so "\x03" varnish v1 -errvcl {VMOD wants ABI version 1.0} { import wrong; } ############################################################# # NB: in the tests below "19" should track VRT_MAJOR_VERSION filewrite ${tmpdir}/libvmod_wrong.so "VMOD_JSON_SPEC\x02" filewrite -a ${tmpdir}/libvmod_wrong.so { [ [ "$VMOD", "1.0", "wrong", "Vmod_vmod_wrong_Func", "0000000000000000000000000000000000000000000000000000000000000000", "0000000000000000000000000000000000000000000000000000000000000000", "19", "0" ], [ "$FOOBAR" ] ] } filewrite -a ${tmpdir}/libvmod_wrong.so "\x03" varnish v1 -errvcl {Unknown metadata stanza.} { import wrong; } filewrite ${tmpdir}/libvmod_wrong.so "VMOD_JSON_SPEC\x02" filewrite -a ${tmpdir}/libvmod_wrong.so { [ [ "$VMOD", "1.0", "wrong", "Vmod_vmod_wrong_Func", "0000000000000000000000000000000000000000000000000000000000000000", "0000000000000000000000000000000000000000000000000000000000000000", "19", "0" ] ] } filewrite -a ${tmpdir}/libvmod_wrong.so "\x03" varnish v1 -errvcl {Bad cproto stanza} { import wrong; } filewrite ${tmpdir}/libvmod_wrong.so "VMOD_JSON_SPEC\x02" filewrite -a ${tmpdir}/libvmod_wrong.so { [ [ "$VMOD", "1.0", "wrong", "Vmod_vmod_wrong_Func", "0000000000000000000000000000000000000000000000000000000000000000", "0000000000000000000000000000000000000000000000000000000000000000", "19", "0" ], [ "$CPROTO" ], [ "$VMOD" ] ] } filewrite -a ${tmpdir}/libvmod_wrong.so "\x03" varnish v1 -errvcl {Bad vmod stanza} { import wrong; } filewrite ${tmpdir}/libvmod_wrong.so "VMOD_JSON_SPEC\x02" filewrite -a ${tmpdir}/libvmod_wrong.so { [ [ "$VMOD", "1.0", "std", "Vmod_vmod_std_Func", "0000000000000000000000000000000000000000000000000000000000000000", "0000000000000000000000000000000000000000000000000000000000000000", "19", "0" ], [ "$CPROTO", "/* blabla */" ] ] } filewrite -a ${tmpdir}/libvmod_wrong.so "\x03" varnish v1 -cliok "param.set vmod_path ${topbuild}/vmod/.libs/" varnish v1 -cliok "stop" filewrite ${tmpdir}/wrong.vcl { vcl 4.1; import std; import std as foo from "${tmpdir}/libvmod_wrong.so"; backend default none; } varnish v1 -cliexpect {Different version of VMOD std already loaded} { vcl.load dupvmod ${tmpdir}/wrong.vcl } shell "rm -f ${tmpdir}/libvmod_wrong.so ${tmpdir}wrong.vcl" varnish-7.5.0/bin/varnishtest/tests/m00008.vtc000066400000000000000000000021461457605730600210430ustar00rootroot00000000000000varnishtest "VCL compiler vmod coverage / vmod loading" feature topbuild varnish v1 -errvcl {Symbol 'debug' has wrong type (sub), expected vmod:} { backend b { .host = "${localhost}"; } sub debug {} import debug; sub vcl_recv { call debug; } } varnish v1 -errvcl {Another module already imported as foo} { import debug as foo; import directors as foo; } server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import std; } varnish v1 -cliok "param.set vcc_feature -unsafe_path" varnish v1 -errvcl {'import ... from path ...' 
is unsafe.} { backend default { .host = "${s1_sock}"; } import std from "${topbuild}/vmod/.libs/"; } varnish v1 \ -cliok "param.set vmod_path /nowhere:${topbuild}/vmod/.libs/" varnish v1 -vcl+backend { import std; } varnish v1 -cliok "param.set vcc_feature +unsafe_path" varnish v1 -cliok "param.set vmod_path /nowhere:/else" varnish v1 -vcl+backend { import std from "${topbuild}/vmod/.libs/"; } varnish v1 -errvcl {Wrong file for VMOD std} { backend default { .host = "${s1_sock}"; } import std from "${topbuild}/vmod/.libs/libvmod_debug.so"; } varnish-7.5.0/bin/varnishtest/tests/m00019.vtc000066400000000000000000000056151457605730600210510ustar00rootroot00000000000000varnishtest "Test vmod functions/contructor named arguments and defaults" server s1 { rxreq txresp -bodylen 6 } -start varnish v1 -vcl+backend { import debug; sub vcl_init { new obj0 = debug.obj(); new obj1 = debug.obj("only_argument"); new obj2 = debug.obj("string", one); new obj3 = debug.obj(string="s_two", number=two); new obj4 = debug.obj(number=three, string="s_three"); new obj5 = debug.obj(number=three); new oo0 = debug.obj_opt(); new oo1 = debug.obj_opt(s="string"); new oo2 = debug.obj_opt(b=true); } sub vcl_deliver { set resp.http.foo1 = debug.argtest("1", 2.1, "3a"); set resp.http.foo2 = debug.argtest("1", two=-2.2, three="3b"); set resp.http.foo3 = debug.argtest("1", three="3c", two=2.3); set resp.http.foo4 = debug.argtest("1", 2.4, three="3d", four=-1); set resp.http.foo5 = debug.argtest("1", 2.5); set resp.http.foo6 = debug.argtest("1", four=6); set resp.http.foo7 = debug.argtest("1", opt="7"); set resp.http.obj0 = obj0.string() + ", " + obj0.number(); set resp.http.obj1 = obj1.string() + ", " + obj1.number(); set resp.http.obj2 = obj2.string() + ", " + obj2.number(); set resp.http.obj3 = obj3.string() + ", " + obj3.number(); set resp.http.obj4 = obj4.string() + ", " + obj4.number(); set resp.http.obj5 = obj5.string() + ", " + obj5.number(); set resp.http.oo0 = oo0.meth_opt(s="s1"); set resp.http.oo1 = oo1.meth_opt(b=false); set resp.http.oo2 = oo2.meth_opt(); } } -start client c1 { txreq rxresp expect resp.bodylen == "6" expect resp.http.foo1 == "1 2.1 3a , 4 0 " expect resp.http.foo2 == "1 -2.2 3b , 4 0 " expect resp.http.foo3 == "1 2.3 3c , 4 0 " expect resp.http.foo4 == "1 2.4 3d , -1 0 " expect resp.http.foo5 == "1 2.5 3 , 4 0 " expect resp.http.foo6 == "1 2 3 , 6 0 " expect resp.http.foo7 == "1 2 3 , 4 1 7" expect resp.http.obj0 == "default, one" expect resp.http.obj1 == "only_argument, one" expect resp.http.obj2 == "string, one" expect resp.http.obj3 == "s_two, two" expect resp.http.obj4 == "s_three, three" expect resp.http.obj5 == "default, three" expect resp.http.oo0 == "obj oo0 obj_s *undef* obj_b *undef* met_s s1 met_b *undef*" expect resp.http.oo1 == "obj oo1 obj_s string obj_b *undef* met_s *undef* met_b false" expect resp.http.oo2 == "obj oo2 obj_s *undef* obj_b true met_s *undef* met_b *undef*" } -run delay .1 varnish v1 -errvcl {Argument 'one' already used} { import debug; backend b1 {.host = "${localhost}";} sub vcl_deliver { set resp.http.foo5 = debug.argtest("1", one="1"); } } varnish v1 -errvcl {Argument 'one' missing} { import debug; backend b1 {.host = "${localhost}";} sub vcl_deliver { set resp.http.foo5 = debug.argtest(two=2.0, three="3"); } } varnish v1 -errvcl {Unknown argument 'five'} { import debug; backend b1 {.host = "${localhost}";} sub vcl_deliver { set resp.http.foo5 = debug.argtest("1", two=2.0, five="3"); } } 
varnish-7.5.0/bin/varnishtest/tests/m00021.vtc000066400000000000000000000015371457605730600210410ustar00rootroot00000000000000varnishtest "Test expiry callbacks" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok "param.set debug +vclrel" logexpect l1 -v v1 -g raw { expect * 0 Debug "Subscribed to Object Events" expect * 0 Debug "Object Event: insert 0x[0-9a-f]+" expect * 0 Debug "Object Event: expire 0x[0-9a-f]+" expect * 0 Debug "Unsubscribed from Object Events" } -start varnish v1 -vcl+backend { import debug; sub vcl_init { debug.register_obj_events(); } sub vcl_recv { if (req.method == "PURGE") { return (purge); } } } client c1 { txreq rxresp expect resp.status == 200 txreq -req PURGE rxresp } -run varnish v1 -expect n_object == 0 varnish v1 -vcl+backend {} varnish v1 -cliok "vcl.discard vcl2" varnish v1 -cliok "debug.vmod" varnish v1 -cliok "vcl.list" varnish v1 -expect vmods == 0 logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/m00022.vtc000066400000000000000000000014561457605730600210420ustar00rootroot00000000000000varnishtest "Test vmod failure in vcl_init{}" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { } -start logexpect l1 -v v1 -g raw { fail add * VCL_Log "Should not happen" expect * 0 VCL_Log "Should happen first" expect 0 0 VCL_Log "Should happen second" fail clear } -start varnish v1 -errvcl "Forced failure" { import debug; import std; backend default { .host = "${s1_addr}"; } sub vcl_init { std.log("Should happen first"); debug.fail(); std.log("Should not happen"); } sub vcl_fini { std.log("Should happen second"); } } logexpect l1 -wait varnish v1 -cliok "param.set nuke_limit 42" varnish v1 -errvcl "nuke_limit is not the answer." { import debug; backend default { .host = "${s1_addr}"; } } client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/m00023.vtc000066400000000000000000000021641457605730600210400ustar00rootroot00000000000000varnishtest "Test VMOD ACLs (jail-compatible)" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import debug; acl loopback { "127"/24; } acl locals -pedantic { // We assume c1 and s1 comes from same address "${s1_addr}"/24; } sub vcl_init { if (!debug.match_acl(loopback, "127.0.0.127")) { debug.fail(); } new o_locals = debug.aclobj(locals); new o_null = debug.aclobj(debug.null_acl()); } sub vcl_recv { if (req.url == "/null") { if (client.ip ~ debug.null_acl()) { return (synth(200)); } } if (req.url == "/nullo") { if (client.ip ~ o_null.get()) { return (synth(200)); } } if (! 
debug.match_acl(ip=client.ip, acl=locals)) { return (synth(500, "match_acl")); } if (client.ip !~ debug.acl(locals)) { return (synth(500, "~")); } if (client.ip !~ o_locals.get()) { return (synth(500, "~")); } return (hash); } } -start client c1 { txreq rxresp expect resp.status == 200 txreq -url /null rxresp expect resp.status == 503 } -start client c2 { txreq -url /nullo rxresp expect resp.status == 503 } -start client c1 -wait client c2 -wait varnish-7.5.0/bin/varnishtest/tests/m00024.vtc000066400000000000000000000007011457605730600210340ustar00rootroot00000000000000varnishtest "Test vtc.barrier_sync" barrier b1 sock 2 barrier b2 sock 2 server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import vtc; sub vcl_recv { vtc.barrier_sync("${b1_sock}"); } sub vcl_backend_response { vtc.barrier_sync("${b2_sock}"); } } -start varnish v1 -cliok "param.set debug +syncvsl" client c1 { txreq rxresp expect resp.status == 200 } -start barrier b1 sync delay 0.5 barrier b2 sync client c1 -wait varnish-7.5.0/bin/varnishtest/tests/m00025.vtc000066400000000000000000000003211457605730600210330ustar00rootroot00000000000000varnishtest "Pass probe definitions to VMODs" varnish v1 -vcl { import debug; backend be { .host = "${localhost}"; } probe pb { } sub vcl_init { debug.test_probe(pb); debug.test_probe(pb, pb); } } varnish-7.5.0/bin/varnishtest/tests/m00027.vtc000066400000000000000000000011471457605730600210440ustar00rootroot00000000000000varnishtest "Test object initialization failure" server s1 { } -start varnish v1 -vcl+backend { } -start logexpect l1 -v v1 -g raw { fail add * VCL_Log "Should not happen" expect * 0 VCL_Log "Should happen first" expect 0 0 VCL_Log "Should happen second" fail clear } -start varnish v1 -errvcl "Missing dynamic backend address or port" { import debug; import std; backend be { .host = "${bad_backend}"; } sub vcl_init { std.log("Should happen first"); new objx = debug.dyn("", ""); std.log("Should not happen"); } sub vcl_fini { std.log("Should happen second"); } } logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/m00048.vtc000066400000000000000000000032601457605730600210450ustar00rootroot00000000000000varnishtest "VMOD vfp & vdp" server s1 { rxreq txresp -body "Ponto Facto, Caesar Transit!" rxreq txresp -body "Ponto Facto, Caesar Transit!" accept rxreq txresp -body "Ponto Facto, Caesar Transit!" } -start varnish v1 -vcl+backend { import debug; sub vcl_backend_response { set beresp.filters = "rot13"; } } -start client c1 { txreq -hdr "Cookie: a" rxresp expect resp.body == "Cbagb Snpgb, Pnrfne Genafvg!" } -run varnish v1 -vsl_catchup varnish v1 -vcl+backend { import debug; sub vcl_backend_response { set beresp.filters = "rot13 rot13a"; } } client c1 { txreq -hdr "Cookie: a" rxresp expect resp.status == 503 } -run varnish v1 -vcl+backend { import debug; sub vcl_backend_response { set beresp.filters = "rot13 rot14"; } } client c1 { txreq -hdr "Cookie: a" rxresp expect resp.status == 503 } -run server s1 -wait server s1 { rxreq txresp -body "Ponto Facto, Caesar Transit!" } -start varnish v1 -vcl+backend { import debug; sub vcl_deliver { if (req.http.Rot13) { set resp.filters = "rot13 debug.pedantic"; } } } client c1 -repeat 2 { txreq rxresp expect resp.body == "Ponto Facto, Caesar Transit!" txreq -hdr "Rot13: please" rxresp expect resp.body == "Cbagb Snpgb, Pnrfne Genafvg!" 
} -run varnish v1 -vcl { import debug; backend none none; sub vcl_recv { return (synth(200)); } sub vcl_synth { set resp.body = "Ponto Facto, Caesar Transit!"; if (req.http.Rot13) { set resp.filters += "rot13 debug.pedantic"; } return (deliver); } } client c1 -repeat 2 { txreq rxresp expect resp.body == "Ponto Facto, Caesar Transit!" txreq -hdr "Rot13: please" rxresp expect resp.body == "Cbagb Snpgb, Pnrfne Genafvg!" } -run varnish-7.5.0/bin/varnishtest/tests/m00051.vtc000066400000000000000000000043731457605730600210450ustar00rootroot00000000000000varnishtest "catflap" varnish v1 -vcl { import debug; import std; import vtc; backend dummy { .host = "${bad_backend}"; } sub vcl_recv { if (req.http.id) { debug.catflap(miss); } else if (req.http.get == "first") { debug.catflap(first); } else if (req.http.get == "last") { debug.catflap(last); } else if (req.http.novcfpass) { vtc.workspace_reserve(client, -1); return (pass); } else if (req.http.novcfmiss) { vtc.workspace_reserve(client, -1); } else if (req.http.vcfpass) { debug.catflap(first); return (pass); } else if (req.http.rollback) { debug.catflap(first); std.rollback(req); vtc.workspace_reserve(client, -1); } else { return (fail); } return (hash); } sub vcl_backend_error { if (bereq.http.novcfpass || bereq.http.vcfpass) { set beresp.status = 206; return (deliver); } if (! bereq.http.id) { return (deliver); } set beresp.status = 200; set beresp.ttl = 1s; set beresp.grace = 1m; set beresp.http.id = bereq.http.id; } sub vcl_deliver { if (req.http.restart) { unset req.http.restart; unset req.http.id; set req.http.novcfpass = "yes"; return (restart); } } } -start client c1 { txreq -hdr "id: 1" rxresp expect resp.status == 200 txreq -hdr "novcfpass: yes" rxresp expect resp.status == 206 txreq -hdr "id: 2" rxresp expect resp.status == 200 txreq -hdr "id: 2" -hdr "restart: yes" rxresp expect resp.status == 206 txreq -hdr "id: 3" rxresp expect resp.status == 200 txreq -hdr "novcfpass: yes" rxresp expect resp.status == 206 # the first object is the one which went into cache last txreq -hdr "get: first" rxresp expect resp.status == 200 expect resp.http.id == "3" txreq -hdr "rollback: yes" rxresp expect resp.status == 200 expect resp.http.id == "3" txreq -hdr "novcfpass: yes" rxresp expect resp.status == 206 txreq -hdr "get: last" rxresp expect resp.status == 200 expect resp.http.id == "1" txreq -hdr "novcfpass: yes" rxresp expect resp.status == 206 # VCF with return(pass) must not leave the VCF dangling on the # workspace for a subsequent plain miss txreq -hdr "vcfpass: yes" rxresp expect resp.status == 206 txreq -hdr "novcfmiss: yes" rxresp expect resp.status == 200 expect resp.http.id == "3" } -run varnish-7.5.0/bin/varnishtest/tests/m00052.vtc000066400000000000000000000006521457605730600210420ustar00rootroot00000000000000varnishtest "priv_task with [optional] argument" varnish v1 -vcl { import debug; backend be none; sub vcl_deliver { set resp.http.none = debug.priv_task_with_option(); set resp.http.one = debug.priv_task_with_option("one"); set resp.http.two = debug.priv_task_with_option("two"); } } -start client c1 { txreq rxresp expect resp.http.none == "" expect resp.http.one == one expect resp.http.two == one } -run varnish-7.5.0/bin/varnishtest/tests/m00053.vtc000066400000000000000000000110351457605730600210400ustar00rootroot00000000000000varnishtest "VCL_SUB type basics" varnish v1 -vcl { import std; import debug; backend dummy None; # call: 1 ref: 5 + 1 sub foo { set resp.http.it = "works"; } # call: 1 ref: 1 + 1 (from bar walk) sub barbar 
{ # called from bar only, must be marked dynamic call barbarbar; } # call: 2 (1 from bar walk, 1 from barbar walk) # ref: 2 + 1 sub barbarbar { # 2nd level call from dynamic } # call: 0 ref: 1 sub bar { set beresp.http.it = "works"; call barbar; } # call: 0 ref: 1 sub indirect { debug.call(foo); } # call: 0 ref: 1 sub direct { call foo; } # call: 0 ref: 2 sub recursive { std.log("recursive called"); debug.call(recursive); } # call: 1 ref: 1 sub recursive_indirect { std.log("recursive_indirect called"); debug.call(recursive_indirect); } # call: 1 ref: 1 sub rollback { std.rollback(req); } # call: 0 ref: 1 sub priv_top { debug.test_priv_top("only works on client side"); } sub vcl_recv { if (req.url == "/wrong") { debug.call(foo); } if (req.url == "/recursive") { debug.call(recursive); } if (req.url == "/recursive_indirect") { call recursive_indirect; } if (req.url == "/priv_top") { return (pass); } return (synth(200)); } sub vcl_synth { if (req.url == "/foo") { debug.call(foo); } else if (req.url == "/direct") { debug.call(direct); } else if (req.url == "/indirect") { debug.call(indirect); } else if (req.url == "/rollback") { debug.call(rollback); } else if (req.url == "/callthenrollback") { debug.call(foo); call rollback; if (! resp.http.it) { set resp.http.rolledback = true; } debug.call(foo); } else if (req.url == "/checkwrong") { synthetic(debug.check_call(bar)); set resp.status = 500; } return (deliver); } sub vcl_backend_fetch { debug.call(priv_top); } sub vcl_backend_response { set beresp.status = 200; } sub vcl_backend_error { # falling through to None backend would be success call vcl_backend_response; debug.call(vcl_backend_response); return (deliver); } } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.it == "works" txreq -url "/direct" rxresp expect resp.status == 200 expect resp.http.it == "works" txreq -url "/indirect" rxresp expect resp.status == 200 expect resp.http.it == "works" txreq -url "/callthenrollback" rxresp expect resp.status == 200 expect resp.http.rolledback == "true" expect resp.http.it == "works" txreq -url "/wrong" rxresp expect resp.status == 503 } -start logexpect l2 -v v1 -g vxid -q {ReqURL ~ "^/recursive$"} { expect * * VCL_Log "^recursive called" fail add * VCL_Log "^recursive called" expect 0 = VCL_Error {^Recursive call to "sub recursive..} expect 0 = VCL_return "^fail" expect * = End fail clear } -start client c2 { txreq -url "/recursive" rxresp expect resp.status == 503 } -start logexpect l3 -v v1 -g vxid -q {ReqURL ~ "^/recursive_indirect$"} { expect * * VCL_Log "^recursive_indirect called" fail add * VCL_Log "^recursive_indirect called" expect 0 = VCL_Error {^Recursive call to "sub recursive_indirect..} expect 0 = VCL_return "^fail" expect * = End fail clear } -start client c3 { txreq -url "/recursive_indirect" rxresp expect resp.status == 503 } -start client c4 { txreq -url "/rollback" rxresp expect resp.status == 200 } -start client c5 { txreq -url "/checkwrong" rxresp expect resp.status == 500 expect resp.body == {Dynamic call to "sub bar{}" not allowed from here} } -start client c6 { txreq -url "/priv_top" rxresp expect resp.status == 503 } -start varnish v1 -errvcl {Impossible Subroutine('' Line 8 Pos 13)} { import std; import debug; backend dummy None; sub impossible { set req.http.impossible = beresp.reason; } sub vcl_recv { if (req.url == "/impossible") { debug.call(impossible); } } } client c1 -wait client c2 -wait logexpect l2 -wait client c3 -wait logexpect l3 -wait client c4 -wait client c5 -wait 
client c6 -wait varnish v1 -vcl { import debug; backend b None; sub foo { set resp.http.Foo = "Called"; } sub vcl_init { new vbr = debug.caller(vcl_backend_response); new c = debug.caller(foo); } sub vcl_recv { return (synth(200)); } sub vcl_synth { if (req.url == "/call") { call c.xsub(); } else { c.call(); } return (deliver); } } client c1 { txreq -url "/call" rxresp expect resp.status == 200 expect resp.http.Foo == "Called" } -start client c2 { txreq rxresp expect resp.status == 200 expect resp.http.Foo == "Called" } -start client c1 -wait client c2 -wait varnish-7.5.0/bin/varnishtest/tests/m00054.vtc000066400000000000000000000011061457605730600210370ustar00rootroot00000000000000varnishtest "VCL_SUB wrong vmod behavior" varnish v1 -arg "-p feature=+no_coredump" -vcl { import debug; backend dummy None; sub foo { set resp.http.it = "works"; } sub vcl_init { debug.bad_memory(foo); } } -start varnish v1 -vcl { import debug; backend dummy None; sub vcl_recv { call debug.total_recall(); } } client c1 { txreq -url "/foo" expect_close } -run varnish v1 -wait-stopped varnish v1 -cliexpect "Assert error in VPI_Call_Check" "panic.show" varnish v1 -cliok "panic.clear" varnish v1 -expect MGT.child_panic == 1 varnish v1 -expectexit 0x40 varnish-7.5.0/bin/varnishtest/tests/m00055.vtc000066400000000000000000000016721457605730600210500ustar00rootroot00000000000000varnishtest "Test $Restrict scope" feature topbuild server s1 { rxreq txresp } -start varnish v1 -arg "-pvmod_path=${tmpdir}" -vcl+backend {} -start filewrite ${tmpdir}/libvmod_wrong.so "VMOD_JSON_SPEC\x02" filewrite -a ${tmpdir}/libvmod_wrong.so { [ [ "$VMOD", "1.0", "wrong", "Vmod_vmod_wrong_Func", "0000000000000000000000000000000000000000000000000000000000000000", "0000000000000000000000000000000000000000000000000000000000000000", "19", "0" ], [ "$CPROTO", "struct Vmod_vmod_wrong_Func {", "", "}" ], [ "$FUNC", "test", [ [ "VOID" ], "Vmod_vmod_wrong_Func.test", "" ] ], [ "$RESTRICT", [ "vcl_recv", "foo", "deliver" ] ] ] } filewrite -a ${tmpdir}/libvmod_wrong.so "\x03" varnish v1 -errvcl {invalid scope for $Restrict: foo} { import wrong; } shell "rm -f ${tmpdir}/libvmod_wrong.so ${tmpdir}wrong.vcl" varnish-7.5.0/bin/varnishtest/tests/o00000.vtc000066400000000000000000000112341457605730600210330ustar00rootroot00000000000000varnishtest "PROXY1 protocol tests" server s1 { rxreq expect req.http.X-Forwarded-For == 1.2.3.4 txresp rxreq expect req.http.X-Forwarded-For == 1:f::2 txresp rxreq expect req.http.X-Forwarded-For == 1:f::3 txresp } -start varnish v1 -proto "PROXY" -vcl+backend { import std; sub vcl_deliver { set resp.http.url = req.url; set resp.http.li = local.ip; set resp.http.lp = std.port(local.ip); set resp.http.ri = remote.ip; set resp.http.rp = std.port(remote.ip); set resp.http.ci = client.ip; set resp.http.cp = std.port(client.ip); set resp.http.si = server.ip; set resp.http.sp = std.port(server.ip); } } -start logexpect l1 -v v1 { expect * 1002 ProxyGarbage "PROXY1: Too few fields" expect * 1003 ProxyGarbage "PROXY1: Too many fields" expect * 1004 ProxyGarbage "PROXY1: Wrong TCP\\[46\\] field" expect * 1005 ProxyGarbage "PROXY1: Cannot resolve source address" expect * 1007 ProxyGarbage "PROXY1: Cannot resolve destination address" expect * 1009 ProxyGarbage "PROXY1: Cannot resolve source address" expect * 1011 ProxyGarbage "PROXY1: Cannot resolve source address" expect * 1015 Proxy "1 1.2.3.4 1234 5.6.7.8 5678" expect * 1018 Proxy "1 1:f::2 1234 5:a::8 5678" expect * 1021 Proxy "1 1:f::3 1234 5:a::8 5678" expect * 1025 ProxyGarbage 
"PROXY1: Too many fields" expect * 1026 ProxyGarbage "PROXY1: Too many fields" } -start client c1 { send "XYZ\r\n" expect_close } -run varnish v1 -vsl_catchup client c1 { send "PROXY " timeout 8 expect_close } -run varnish v1 -vsl_catchup client c1 { send "PROXY A B C D\r\n" timeout 8 expect_close } -run varnish v1 -vsl_catchup client c1 { send "PROXY A B C D E F\r\n" timeout 8 expect_close } -run varnish v1 -vsl_catchup client c1 { send "PROXY A B C D E\r\n" timeout 8 expect_close } -run varnish v1 -vsl_catchup client c1 { send "PROXY TCP4 B C D E\r\n" timeout 8 expect_close } -run varnish v1 -vsl_catchup client c1 { send "PROXY TCP4 1.2.3.4 C D E\r\n" timeout 8 expect_close } -run varnish v1 -vsl_catchup client c1 { send "PROXY TCP4 1.2.3.4 D 1234 E\r\n" timeout 8 expect_close } -run varnish v1 -vsl_catchup client c1 { send "PROXY TCP4 1.2.3.4 5.6.7.8 1234 E\r\n" timeout 8 expect_close } -run varnish v1 -vsl_catchup client c1 { send "PROXY TCP6 B C D E\r\n" timeout 8 expect_close } -run varnish v1 -vsl_catchup client c1 { send "PROXY TCP6 1:f::2 C D E\r\n" timeout 8 expect_close } -run varnish v1 -vsl_catchup client c1 { send "PROXY TCP6 1:f::2 1234 D E\r\n" timeout 8 expect_close } -run varnish v1 -vsl_catchup client c1 { send "PROXY TCP6 1:f::2 5:a::8 1234 E\r\n" timeout 8 expect_close } -run varnish v1 -vsl_catchup client c1 { send "PROXY TCP4 1:f::2 5:a::8 1234 5678\r\n" timeout 8 expect_close } -run varnish v1 -vsl_catchup client c1 { send "PROXY TCP6 1.2.3.4 5.6.7.8 1234 5678\r\n" timeout 8 expect_close } -run varnish v1 -vsl_catchup # Finally try something which works... client c1 -proxy1 "1.2.3.4:1234 5.6.7.8:5678" { txreq -url /1 rxresp expect resp.http.url == "/1" expect resp.http.ci == "1.2.3.4" expect resp.http.cp == "1234" expect resp.http.si == "5.6.7.8" expect resp.http.sp == "5678" expect resp.http.li == ${v1_addr} expect resp.http.lp == ${v1_port} expect resp.http.ri != "1.2.3.4" expect resp.http.rp != "1234" } -run varnish v1 -vsl_catchup client c1 -proxy1 "[1:f::2]:1234 [5:a::8]:5678" { txreq -url /2 rxresp expect resp.http.url == "/2" expect resp.http.ci == "1:f::2" expect resp.http.cp == "1234" expect resp.http.si == "5:a::8" expect resp.http.sp == "5678" expect resp.http.li == ${v1_addr} expect resp.http.lp == ${v1_port} expect resp.http.ri != "1:f::2" expect resp.http.rp != "1234" } -run varnish v1 -vsl_catchup # Try with appended request (See also: #1728) client c2 { send "PROXY TCP6 1:f::3 5:a::8 1234 5678\r\nGET /3 HTTP/1.1\r\nHost: ${string,repeat,54,a}\r\n\r\n" rxresp expect resp.http.url == "/3" } -run varnish v1 -vsl_catchup # Malformed (missing \r) client c2 { send "PROXY TCP4 1.2.3.4 5.6.7.8 1234 5678\n" expect_close } -run varnish v1 -vsl_catchup # Malformed, too long (106) # NB: Should check VSL for proper disposal client c2 { send "PROXY TCP4 1.2.3.4 5.6.7.8 1234 5678 ${string,repeat,68," "}\r\n" expect_close } -run varnish v1 -vsl_catchup # Malformed, too long (107) # NB: Should check VSL for proper disposal client c2 { send "PROXY TCP4 1.2.3.4 5.6.7.8 1234 5678 ${string,repeat,69," "}\r\n" expect_close } -run varnish v1 -vsl_catchup # Malformed, too long (108) # NB: Should check VSL for proper disposal client c2 { send "PROXY TCP4 1.2.3.4 5.6.7.8 1234 5678 ${string,repeat,70," "}\r\n" expect_close } -run varnish v1 -vsl_catchup logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/o00001.vtc000066400000000000000000000116031457605730600210340ustar00rootroot00000000000000varnishtest "PROXY v2 test" server s1 { rxreq expect req.url == "/1" expect 
req.http.x-forwarded-for == "${localhost}" txresp rxreq expect req.url == "/2" expect req.http.x-forwarded-for == "${localhost}" txresp rxreq expect req.url == "/3" expect req.http.x-forwarded-for == "${localhost}" txresp rxreq expect req.url == "/4" expect req.http.x-forwarded-for == "${localhost}" txresp rxreq expect req.url == "/5" expect req.http.x-forwarded-for == "${localhost}" txresp rxreq expect req.url == "/6" expect req.http.x-forwarded-for == "1.2.3.4" txresp rxreq expect req.url == "/7" expect req.http.x-forwarded-for == "102:304:506::d0e:f10" txresp } -start varnish v1 -proto "PROXY" -vcl+backend { import std; acl fwd_client { "1.2.3.4"; "102:304:506::d0e:f10"; } acl fwd_server { "5.6.7.8"; "8182:8384:8586::8d8e:8f80"; } sub vcl_deliver { set resp.http.li = local.ip; set resp.http.lp = std.port(local.ip); set resp.http.ri = remote.ip; set resp.http.rp = std.port(remote.ip); set resp.http.ci = client.ip; set resp.http.cp = std.port(client.ip); set resp.http.si = server.ip; set resp.http.sp = std.port(server.ip); set resp.http.fc = (client.ip ~ fwd_client); set resp.http.fs = (server.ip ~ fwd_server); set resp.http.xport = req.transport; } } -start logexpect l1 -v v1 -g raw { expect * 1000 Proxy "2 local local local local" expect * 1003 ProxyGarbage "PROXY2: bad command \\(2\\)" expect * 1004 ProxyGarbage "PROXY2: Ignoring UNSPEC\\|UNSPEC addresses" expect * 1007 ProxyGarbage "PROXY2: Ignoring unsupported protocol \\(0x99\\)" expect * 1010 ProxyGarbage "PROXY2: Ignoring short IPv4 addresses \\(11\\)" expect * 1013 ProxyGarbage "PROXY2: Ignoring short IPv6 addresses \\(35\\)" expect * 1016 Proxy "2 1.2.3.4 2314 5.6.7.8 2828" expect * 1019 Proxy "2 102:304:506::d0e:f10 2314 8182:8384:8586::8d8e:8f80 2828" expect * 1022 Begin "^sess 0 PROXY" expect * 1022 SessClose "^RX_OVERFLOW" } -start client c1 { # LOCAL command sendhex "0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a" sendhex "20 00 00 00" txreq -url /1 rxresp expect resp.status == 200 expect resp.http.si == "${v1_addr}" expect resp.http.sp == "${v1_port}" expect resp.http.ci == "${localhost}" # Transport is HTTP/1, even though proxy is used expect resp.http.xport == HTTP/1 } -run delay .1 client c1 { # unknown command sendhex "0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a" sendhex "22 00 00 00" timeout 8 expect_close } -run delay .1 client c1 { # UNSPEC proto sendhex "0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a" sendhex "21 00 00 00" txreq -url /2 rxresp expect resp.status == 200 expect resp.http.si == "${v1_addr}" expect resp.http.sp == "${v1_port}" expect resp.http.ci == "${localhost}" } -run delay .1 client c1 { # unknown proto sendhex "0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a" sendhex "21 99 00 00" txreq -url /3 rxresp expect resp.status == 200 expect resp.http.si == "${v1_addr}" expect resp.http.sp == "${v1_port}" expect resp.http.ci == "${localhost}" } -run delay .1 client c1 { # short IPv4 sendhex "0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a" sendhex "21 11 00 0b" sendhex "01 02 03 04 05 06 07 08 09 0a 0b" txreq -url /4 rxresp expect resp.status == 200 expect resp.http.si == "${v1_addr}" expect resp.http.sp == "${v1_port}" expect resp.http.ci == "${localhost}" } -run delay .1 client c1 { # short IPv6 sendhex "0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a" sendhex "21 21 00 23" sendhex "00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f" sendhex "00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f" sendhex "01 02 03" txreq -url /5 rxresp expect resp.status == 200 expect resp.http.fs == false expect resp.http.fc == false expect resp.http.si == "${v1_addr}" expect resp.http.sp == 
"${v1_port}" expect resp.http.ci == "${localhost}" } -run delay .1 # good IPv4 client c1 -proxy2 "1.2.3.4:2314 5.6.7.8:2828" { txreq -url /6 rxresp expect resp.status == 200 expect resp.http.fs == true expect resp.http.fc == true expect resp.http.ci == "1.2.3.4" expect resp.http.cp == "2314" expect resp.http.si == "5.6.7.8" expect resp.http.sp == "2828" expect resp.http.li == "${v1_addr}" expect resp.http.lp == "${v1_port}" expect resp.http.ri != "1.2.3.4" } -run delay .1 # good IPv6 client c1 \ -proxy2 "[102:304:506::d0e:f10]:2314 [8182:8384:8586::8d8e:8f80]:2828" { txreq -url /7 rxresp expect resp.status == 200 expect resp.http.fs == true expect resp.http.fc == true expect resp.http.ci == "102:304:506::d0e:f10" expect resp.http.cp == "2314" expect resp.http.si == "8182:8384:8586::8d8e:8f80" expect resp.http.sp == "2828" expect resp.http.li == "${v1_addr}" expect resp.http.lp == "${v1_port}" expect resp.http.ri != "102:304:506::d0e:f10" } -run delay .1 client c2 { # max length with garbage sendhex "0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a" # annouce 1025 bytes > 1024 implicit limit sendhex "20 00 04 01" expect_close } -run delay .1 logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/o00002.vtc000066400000000000000000000051271457605730600210410ustar00rootroot00000000000000varnishtest "Sending proxy headers to backend" # This test is kind of hairy. # We don't have code in server to validate PROXY headers # so use a pipe of: c1 [proxy] v2 [proxy] v1 [http] s1 # Using proxy also between c1 and v2 allows us to test # IPv6 processing over a IPv4 connection. server s1 { rxreq expect req.url == "/1" expect req.http.xyzzy1 == req.http.xyzzy2 expect req.http.xyzzy1 == 1111 expect req.http.x-forwarded-for == "1.2.3.4, 1.2.3.4" txresp -body "proxy1" rxreq expect req.url == "/2" expect req.http.xyzzy1 == req.http.xyzzy2 expect req.http.xyzzy1 == 2222 expect req.http.x-forwarded-for == "1.2.3.4, 1.2.3.4" txresp -body "proxy2" rxreq expect req.url == "/3" expect req.http.xyzzy1 == req.http.xyzzy2 expect req.http.xyzzy1 == 3333 expect req.http.x-forwarded-for == "1:f::2, 1:f::2" txresp -body "proxy3" rxreq expect req.url == "/4" expect req.http.xyzzy1 == req.http.xyzzy2 expect req.http.xyzzy1 == 4444 expect req.http.x-forwarded-for == "1:f::2, 1:f::2" txresp -body "proxy4" rxreq expect req.url == "/pipe" expect req.http.xyzzy1 == req.http.xyzzy2 expect req.http.xyzzy1 == 5555 expect req.http.x-forwarded-for == "1:f::2, 1:f::2" txresp -hdr "Connection: close" -body "pipe" } -start varnish v1 -proto PROXY -vcl+backend { import std; sub vcl_recv { set req.http.xyzzy1 = std.port(client.ip); } } -start varnish v2 -proto PROXY -vcl { import std; backend bp1 { .host = "${v1_addr}"; .port = "${v1_port}"; .proxy_header = 1; } backend bp2 { .host = "${v1_addr}"; .port = "${v1_port}"; .proxy_header = 2; } sub vcl_recv { set req.http.xyzzy2 = std.port(client.ip); if (req.url == "/1" || req.url == "/3") { set req.backend_hint = bp1; } else { set req.backend_hint = bp2; } if (req.url ~ "^/pipe") { return (pipe); } } sub vcl_deliver { set resp.http.connection = "close"; } } -start client c1 -connect ${v2_sock} -proxy1 "1.2.3.4:1111 5.6.7.8:5678" { txreq -url /1 rxresp expect resp.body == "proxy1" } -run delay .2 client c1 -connect ${v2_sock} -proxy1 "1.2.3.4:2222 5.6.7.8:5678" { txreq -url /2 rxresp expect resp.body == "proxy2" } -run delay .2 client c1 -connect ${v2_sock} -proxy1 "[1:f::2]:3333 [5:a::8]:5678" { txreq -url /3 rxresp expect resp.body == "proxy3" } -run delay .2 client c1 -connect ${v2_sock} -proxy1 
"[1:f::2]:4444 [5:a::8]:5678" { txreq -url /4 rxresp expect resp.body == "proxy4" } -run delay .2 client c1 -connect ${v2_sock} -proxy1 "[1:f::2]:5555 [5:a::8]:5678" { txreq -url /pipe rxresp expect resp.body == "pipe" expect resp.http.Connection == "close" expect_close } -run varnish-7.5.0/bin/varnishtest/tests/o00003.vtc000066400000000000000000000017121457605730600210360ustar00rootroot00000000000000varnishtest "VCL backend side access to IP#s and debug.proxy_header" server s1 { rxreq txresp } -start varnish v1 -proto PROXY -vcl+backend { import vtc; import blob; sub vcl_backend_response { set beresp.http.li = local.ip; set beresp.http.ri = remote.ip; set beresp.http.ci = client.ip; set beresp.http.si = server.ip; set beresp.http.proxy1 = blob.encode(blob=blob.sub( vtc.proxy_header(v1, client.ip, server.ip), 36B)); set beresp.http.proxy2 = blob.encode(encoding=HEX, blob=vtc.proxy_header(v2, client.ip, server.ip, "vtc.varnish-cache.org")); } } -start client c1 -proxy1 "1.2.3.4:1111 5.6.7.8:5678" { txreq rxresp expect resp.http.li == ${v1_addr} expect resp.http.ci == 1.2.3.4 expect resp.http.si == 5.6.7.8 expect resp.http.proxy1 == "PROXY TCP4 1.2.3.4 5.6.7.8 1111 5678" expect resp.http.proxy2 == "0d0a0d0a000d0a515549540a2111002401020304050607080457162e0200157674632e7661726e6973682d63616368652e6f7267" } -run varnish-7.5.0/bin/varnishtest/tests/o00004.vtc000066400000000000000000000016471457605730600210460ustar00rootroot00000000000000varnishtest "Sending proxy headers via health probes" # Double-proxy scheme stolen from o00002.vtc # Get ${v2_addr} defined so s1 can use it varnish v2 -arg "-b '${bad_backend}'" -start server s1 { rxreq expect req.http.x-forwarded-for == ${v2_addr} txresp } -start varnish v1 -proto PROXY -vcl+backend { sub vcl_recv { if (client.ip != remote.ip || server.ip != local.ip) { return (synth(400)); } } } -start varnish v2 -vcl { import std; probe default { .window = 1; .threshold = 1; .interval =0.5s; } backend bp1 { .host = "${v1_addr}"; .port = "${v1_port}"; .proxy_header = 1; } backend bp2 { .host = "${v1_addr}"; .port = "${v1_port}"; .proxy_header = 2; } sub vcl_init { # dummy backend ref if (bp2) { } } } server s1 -wait delay 1 varnish v2 -cliexpect "vcl1.bp1[ ]+probe[ ]+1/1[ ]+healthy" backend.list varnish v2 -cliexpect "vcl1.bp2[ ]+probe[ ]+1/1[ ]+healthy" backend.list varnish-7.5.0/bin/varnishtest/tests/o00005.vtc000066400000000000000000000111621457605730600210400ustar00rootroot00000000000000varnishtest "PROXY v2 TLV overflow" # this does does not work with IPv6, the workspace overflow test is too brittle feature ipv4 varnish v1 -arg "-p pool_sess=0,0,0" -proto "PROXY" -vcl { backend be none; } -start logexpect l1 -v v1 -g raw { expect * 1000 Begin "sess 0 PROXY" expect * 1000 ProxyGarbage "PROXY2: CRC error" } -start client c1 { # PROXY2 with CRC32C TLV and bad checksum sendhex { 0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a 21 11 00 65 d9 46 b5 21 5f 8e a8 22 ed 96 01 bb 03 00 04 95 03 ee 74 01 00 02 68 32 02 00 0a 68 6f 63 64 65 74 2e 6e 65 74 20 00 3d 01 00 00 00 00 21 00 07 54 4c 53 76 31 2e 33 25 00 05 45 43 32 35 36 24 00 0a 52 53 41 2d 53 48 41 32 35 36 23 00 16 41 45 41 44 2d 41 45 53 31 32 38 2d 47 43 4d 2d 53 48 41 32 35 36 } txreq expect_close } -run varnish v1 -vsl_catchup logexpect l1 -wait logexpect l1 -v v1 -g raw { expect * 1001 Begin "sess 0 PROXY" expect * 1001 ProxyGarbage "PROXY2: TLV Dribble bytes" } -start client c1 { # PROXY2 with CRC32C TLV and bad checksum sendhex { 0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a 21 11 00 67 d9 46 b5 21 5f 8e a8 22 ed 96 01 
bb ff 00 04 95 03 ee 74 01 00 02 68 32 02 00 0a 68 6f 63 64 65 74 2e 6e 65 74 20 00 3d 01 00 00 00 00 21 00 07 54 4c 53 76 31 2e 33 25 00 05 45 43 32 35 36 24 00 0a 52 53 41 2d 53 48 41 32 35 36 23 00 16 41 45 41 44 2d 41 45 53 31 32 38 2d 47 43 4d 2d 53 48 41 32 35 36 ff ff } txreq expect_close } -run varnish v1 -vsl_catchup logexpect l1 -wait logexpect l1 -v v1 -g raw { expect * 1002 Begin "sess 0 PROXY" expect * 1002 ProxyGarbage "PROXY2: TLV Length Error" } -start client c1 { # PROXY2 with CRC32C TLV and bad checksum sendhex { 0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a 21 11 00 60 d9 46 b5 21 5f 8e a8 22 ed 96 01 bb ff 00 04 95 03 ee 74 01 00 02 68 32 02 00 0a 68 6f 63 64 65 74 2e 6e 65 74 20 00 3d 01 00 00 00 00 21 00 07 54 4c 53 76 31 2e 33 25 00 05 45 43 32 35 36 24 00 0a 52 53 41 2d 53 48 41 32 35 36 23 00 16 41 45 41 44 2d 41 45 53 31 32 38 2d 47 43 4d 2d 53 48 41 32 35 36 } txreq expect_close } -run varnish v1 -vsl_catchup logexpect l1 -wait logexpect l1 -v v1 -g raw { expect * 1003 Begin "sess 0 PROXY" expect * 1003 ProxyGarbage "PROXY2: TLV Length Error" } -start client c1 { # PROXY2 with CRC32C TLV and bad checksum sendhex { 0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a 21 11 00 65 d9 46 b5 21 5f 8e a8 22 ed 96 01 bb ff 00 04 95 03 ee 74 01 00 02 68 32 02 00 0a 68 6f 63 64 65 74 2e 6e 65 74 20 00 3c 01 00 00 00 00 21 00 07 54 4c 53 76 31 2e 33 25 00 05 45 43 32 35 36 24 00 0a 52 53 41 2d 53 48 41 32 35 36 23 00 16 41 45 41 44 2d 41 45 53 31 32 38 2d 47 43 4d 2d 53 48 41 32 35 36 } txreq expect_close } -run varnish v1 -vsl_catchup logexpect l1 -wait # workspace overflow handling elsewhere in the proxy code # # the workspace_session size is chosen to fail as closely as possible # below the minimum required value for the vtc to work (= the # workspace to overflow) both on 64 and 32 bit. # # This value is fragile, ideally we would want something like # vtc.alloc(-x), yet there is no vcl code being run before we parse # proxy headers # # 20210825: Minimum possible seems to be 364, but param.min is 384 now. 
varnish v1 -cliok "param.set workspace_session 384" # get rid of the surplus session mpl & zero vsc varnish v1 -stop varnish v1 -cliok "param.set pool_sess 1,1,1" varnish v1 -start client c2 { # PROXY2 with CRC32C TLV sendhex { 0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a 21 11 00 65 d9 46 b5 21 5f 8e a8 22 ed 96 01 bb 03 00 04 95 03 ee 75 01 00 02 68 32 02 00 0a 68 6f 63 64 65 74 2e 6e 65 74 20 00 3d 01 00 00 00 00 21 00 07 54 4c 53 76 31 2e 33 25 00 05 45 43 32 35 36 24 00 0a 52 53 41 2d 53 48 41 32 35 36 23 00 16 41 45 41 44 2d 41 45 53 31 32 38 2d 47 43 4d 2d 53 48 41 32 35 36 } txreq expect_close } -run varnish v1 -expect ws_session_overflow == 1 # increased workspace is being reflected immediately varnish v1 -cliok "param.set workspace_session 524" varnish v1 -cliok "param.set debug +workspace" client c1 { # PROXY2 sp->ws overflow sendhex { 0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a 21 11 01 23 d9 46 b5 21 5f 8e a8 22 ed 96 01 bb 03 00 04 c5 1b 5b 2b 01 00 02 68 32 02 00 c8 ${string,repeat,200,"61 "} 20 00 3d 01 00 00 00 00 21 00 07 54 4c 53 76 31 2e 33 25 00 05 45 43 32 35 36 24 00 0a 52 53 41 2d 53 48 41 32 35 36 23 00 16 41 45 41 44 2d 41 45 53 31 32 38 2d 47 43 4d 2d 53 48 41 32 35 36 } expect_close } -run delay 1 varnish v1 -expect ws_session_overflow >= 2 varnish-7.5.0/bin/varnishtest/tests/o00006.vtc000066400000000000000000000017221457605730600210420ustar00rootroot00000000000000varnishtest "SES_Reserve_proto_priv() overflow" feature 64bit ipv4 server s1 { rxreq txresp } -start varnish v1 -arg "-p pool_sess=0,0,0" -proto "PROXY" -vcl+backend {} -start logexpect l1 -v v1 -g raw { expect 0 1000 Begin "sess 0 PROXY" expect 0 = SessOpen expect * = Proxy "2 217.70.181.33 60822 95.142.168.34 443" expect 0 = Error {\Qinsufficient workspace (proto_priv)\E} expect 0 = SessClose "RX_JUNK" } -start varnish v1 -cliok "param.set workspace_session 480" client c1 { # PROXY2 with CRC32C TLV sendhex { 0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a 21 11 00 65 d9 46 b5 21 5f 8e a8 22 ed 96 01 bb 03 00 04 95 03 ee 75 01 00 02 68 32 02 00 0a 68 6f 63 64 65 74 2e 6e 65 74 20 00 3d 01 00 00 00 00 21 00 07 54 4c 53 76 31 2e 33 25 00 05 45 43 32 35 36 24 00 0a 52 53 41 2d 53 48 41 32 35 36 23 00 16 41 45 41 44 2d 41 45 53 31 32 38 2d 47 43 4d 2d 53 48 41 32 35 36 } txreq expect_close } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/p00000.vtc000066400000000000000000000021331457605730600210320ustar00rootroot00000000000000varnishtest "Test Basic persistence" feature persistent_storage process p1 -dump { varnishd -spersistent -b localhost -n ${tmpdir} -a :0 -d 2>&1 } -start -expect-exit 1 process p1 -expect-text 0 0 {to launch} process p1 -write "start\n" process p1 -expect-text 0 0 {-spersistent has been deprecated} process p1 -write "quit\n" process p1 -expect-exit 0x20 -wait server s1 { rxreq txresp -body FOO accept rxreq txresp -status 700 } -start shell "rm -f ${tmpdir}/_.per" varnish v1 \ -arg "-pfeature=+wait_silo" \ -arg "-sdeprecated_persistent,${tmpdir}/_.per,5m" \ -vcl+backend { } -start varnish v1 -stop varnish v1 -start client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" } -run varnish v1 -vsl_catchup varnish v1 -cliok "storage.list" varnish v1 -cliok "debug.persistent s0 dump" varnish v1 -cliok "debug.persistent s0 sync" varnish v1 -stop varnish v1 -start varnish v1 -cliok "debug.xid 2000" client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "2001 1002" } -run # shell "rm -f /tmp/__v1/_.per" 
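# Note on the final check above: "2001 1002" pairs the new request xid
# with the xid of the original fetch, i.e. the object served after the
# stop/start cycle came back from the persistent silo rather than from
# a new fetch to s1 (whose second response is a 700 and is never seen).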
varnish-7.5.0/bin/varnishtest/tests/p00002.vtc000066400000000000000000000014271457605730600210410ustar00rootroot00000000000000varnishtest "Ban a persistent object" feature persistent_storage shell "rm -f ${tmpdir}/_.per[12]" server s1 { rxreq txresp -hdr "Foo: foo" } -start varnish v1 \ -arg "-pfeature=+wait_silo" \ -arg "-pban_lurker_sleep=0" \ -arg "-sdeprecated_persistent,${tmpdir}/_.per1,5m" \ -arg "-sdeprecated_persistent,${tmpdir}/_.per2,5m" \ -vcl+backend { } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" expect resp.http.foo == "foo" } -run varnish v1 -cliok "ban req.url == / && req.http.jam != session" varnish v1 -stop server s1 -wait server s1 { rxreq txresp -hdr "Foo: bar" } -start varnish v1 -start varnish v1 -cliok ban.list # Count of 3 here, because two "magic" bans are also there" varnish v1 -expect bans == 3 varnish-7.5.0/bin/varnishtest/tests/p00003.vtc000066400000000000000000000017051457605730600210410ustar00rootroot00000000000000varnishtest "Ban a persistent object" feature persistent_storage shell "rm -f ${tmpdir}/_.per" server s1 { rxreq txresp -hdr "Foo: foo" } -start varnish v1 \ -arg "-pfeature=+wait_silo" \ -arg "-sdeprecated_persistent,${tmpdir}/_.per,5m" \ -arg "-pban_lurker_sleep=0" \ -vcl+backend { } -start varnish v1 -cliok ban.list client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" expect resp.http.foo == "foo" } -run varnish v1 -cliok "ban req.url == /" varnish v1 -cliok ban.list varnish v1 -stop server s1 -wait server s1 { rxreq txresp -hdr "Foo: bar" } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok ban.list # Count of 2 here, because the "magic" ban is also there" # varnish v1 -expect n_ban == 2 client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" expect resp.http.foo == "bar" } -run varnish v1 -cliok ban.list varnish v1 -stop varnish-7.5.0/bin/varnishtest/tests/p00004.vtc000066400000000000000000000023761457605730600210470ustar00rootroot00000000000000varnishtest "Check object references" feature persistent_storage shell "rm -f ${tmpdir}/_.per" server s1 { rxreq txresp -hdr "Foo: foo" rxreq txresp -hdr "Bar: bar" } -start varnish v1 \ -arg "-pfeature=+wait_silo" \ -arg "-sdeprecated_persistent,${tmpdir}/_.per,5m" \ -arg "-pban_lurker_sleep=0" \ -vcl+backend { } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" expect resp.http.foo == "foo" } -run varnish v1 -expect n_object == 1 client c1 { txreq -url "/bar" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1004" expect resp.http.bar == "bar" } -run varnish v1 -expect n_object == 2 varnish v1 -stop varnish v1 -start varnish v1 -cliok "debug.xid 2000" varnish v1 -expect n_vampireobject == 2 varnish v1 -expect n_object == 0 client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "2001 1002" expect resp.http.foo == "foo" } -run varnish v1 -expect n_vampireobject == 1 varnish v1 -expect n_object == 1 client c1 { txreq -url "/bar" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "2003 1005" expect resp.http.bar == "bar" } -run varnish v1 -expect n_object == 2 varnish v1 -expect n_vampireobject == 0 varnish-7.5.0/bin/varnishtest/tests/p00005.vtc000066400000000000000000000025001457605730600210350ustar00rootroot00000000000000varnishtest "Check expiry of non-instantiated object" feature persistent_storage shell "rm -f ${tmpdir}/_.per" 
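# Start from a clean silo so the expiry checks below cannot be
# satisfied by objects left over from a previous run of this test.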
server s1 { rxreq txresp -hdr "Foo: foo1" } -start varnish v1 \ -arg "-sdeprecated_persistent,${tmpdir}/_.per,5m" \ -arg "-pban_lurker_sleep=0" \ -arg "-pshortlived=0" \ -vcl+backend { sub vcl_backend_response { set beresp.ttl = 3s; set beresp.keep = 0s; set beresp.grace = 0s; } } -start varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set feature +wait_silo" logexpect l1 -v v1 -g vxid -q "Begin ~ bereq" { expect * 1002 Storage "persistent s0" } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" expect resp.http.foo == "foo1" } -run logexpect l1 -wait varnish v1 -expect n_object == 1 varnish v1 -stop server s1 -wait { rxreq txresp -hdr "Foo: foo2" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 3s; } } -start delay 5 varnish v1 -expect n_object == 0 logexpect l1 -v v1 -g vxid -q "Begin ~ bereq" { expect * 1002 Storage "persistent s0" } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" expect resp.http.foo == "foo2" } -run logexpect l1 -wait varnish v1 -expect n_object == 1 varnish-7.5.0/bin/varnishtest/tests/p00006.vtc000066400000000000000000000022751457605730600210470ustar00rootroot00000000000000varnishtest "Check that Vary headers are stored" feature persistent_storage shell "rm -f ${tmpdir}/_.per" server s1 { rxreq txresp -hdr "Foo: foo1" -hdr "Vary: foo, bar" rxreq txresp -hdr "Foo: foo2" -hdr "Vary: foo, bar" } -start varnish v1 \ -arg "-sdeprecated_persistent,${tmpdir}/_.per,5m" \ -arg "-pcli_timeout=60" \ -arg "-pban_lurker_sleep=0" \ -vcl+backend { } -start client c1 { txreq -url "/foo" -hdr "foo: 1" -hdr "bar: 2" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" expect resp.http.foo == "foo1" txreq -url "/foo" -hdr "foo: 2" -hdr "bar: 1" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1003" expect resp.http.foo == "foo2" } -run varnish v1 -expect n_object == 2 server s1 -wait varnish v1 -stop varnish v1 -start varnish v1 -cliok "debug.xid 2000" varnish v1 -expect n_vampireobject == 2 client c1 { txreq -url "/foo" -hdr "foo: 1" -hdr "bar: 2" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "2001 1002" expect resp.http.foo == "foo1" txreq -url "/foo" -hdr "foo: 2" -hdr "bar: 1" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "2002 1004" expect resp.http.foo == "foo2" } -run varnish-7.5.0/bin/varnishtest/tests/p00007.vtc000066400000000000000000000030331457605730600210410ustar00rootroot00000000000000varnishtest "test reload of object spanning incomplete segment" feature persistent_storage feature disable_aslr barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq expect req.url == "/1" send "HTTP/1.1 200 OK\n" send "Transfer-encoding: chunked\n" send "\n" chunkedlen 32 # Tell top-level that it can sync the stevedore barrier b1 sync # Top-level tells us it has synched the stevedore barrier b2 sync chunkedlen 32 chunkedlen 0 accept rxreq expect req.url == "/2" txresp -bodylen 100 rxreq expect req.url == "/1" txresp -bodylen 48 } -start varnish v1 -arg "-sdeprecated_persistent,${tmpdir}/_.per,5m" \ -arg "-p feature=+no_coredump" \ -vcl+backend {} -start varnish v1 -cliok "debug.fragfetch 32" client c1 { txreq -url "/1" rxresp expect resp.bodylen == 64 } -start # Wait for first chunk to have been sent barrier b1 sync delay .2 # Sync the stevedore, so the next chunk ends up i segment 2 varnish v1 -cliok "debug.persistent s0 sync" # Tell server to continue barrier 
b2 sync # Get the result client c1 -wait varnish v1 -cliok "debug.persistent s0 dump" # Panic worker so second segment does not get closed varnish v1 -clierr 400 "debug.panic.worker" delay 0.5 varnish v1 -cliok "panic.clear" delay 0.5 # start again varnish v1 -start client c1 { # Make sure there is not a valid "struct storage" in second seg. txreq -url "/2" rxresp expect resp.bodylen == 100 # Fetch the vampire object and see how that goes... txreq -url "/1" rxresp expect resp.bodylen == 48 } -run varnish v1 -expectexit 0x40 varnish-7.5.0/bin/varnishtest/tests/p00008.vtc000066400000000000000000000035721457605730600210520ustar00rootroot00000000000000varnishtest "Ban list sync across silos" feature persistent_storage shell "rm -f ${tmpdir}/_.per[12]" # Silo 1 & 2 # Prime each with an object with X-Foo: foo server s1 { rxreq expect req.url == "/silo1" txresp -hdr "X-Foo: foo" rxreq expect req.url == "/silo2" txresp -hdr "X-Foo: foo" } -start varnish v1 \ -arg "-pfeature=+wait_silo" \ -arg "-pban_lurker_sleep=0" \ -arg "-sper1=deprecated_persistent,${tmpdir}/_.per1,5m" \ -arg "-sper2=deprecated_persistent,${tmpdir}/_.per2,5m" \ -syntax 4.0 \ -vcl+backend { sub vcl_backend_response { set beresp.storage = storage.per1; if (bereq.url ~ "silo2") { set beresp.storage = storage.per2; } } } -start client c1 { txreq -url "/silo1" rxresp expect resp.status == 200 expect resp.http.x-foo == "foo" txreq -url "/silo2" rxresp expect resp.status == 200 expect resp.http.x-foo == "foo" } -run varnish v1 -stop server s1 -wait # Only silo 1 # Ban on obj.http.x-foo == foo varnish v2 \ -arg "-pfeature=+wait_silo" \ -arg "-pban_lurker_sleep=0" \ -arg "-sdeprecated_persistent,${tmpdir}/_.per1,5m" \ -vcl+backend { } -start varnish v2 -cliok "ban obj.http.x-foo == foo" varnish v2 -cliok "ban.list" varnish v2 -stop # Silo 1 & 2 # Bans should be transferred varnish v3 \ -arg "-pfeature=+wait_silo" \ -arg "-pban_lurker_sleep=0" \ -arg "-sdeprecated_persistent,${tmpdir}/_.per1,5m" \ -arg "-sdeprecated_persistent,${tmpdir}/_.per2,5m" \ -vcl+backend { } -start varnish v3 -cliok "ban.list" varnish v3 -stop # Only silo 2 # Check that /silo2 is banned server s1 { rxreq expect req.url == "/silo2" txresp -hdr "X-Foo: bar" } -start varnish v4 \ -arg "-pfeature=+wait_silo" \ -arg "-pban_lurker_sleep=0" \ -arg "-sdeprecated_persistent,${tmpdir}/_.per2,5m" \ -vcl+backend { } -start client c1 -connect ${v4_sock} { txreq -url "/silo2" rxresp expect resp.status == 200 expect resp.http.x-foo == "bar" } -run varnish-7.5.0/bin/varnishtest/tests/p00009.vtc000066400000000000000000000022411457605730600210430ustar00rootroot00000000000000varnishtest "Check that reloaded bans with completed flag are really completed on restart" feature persistent_storage shell "rm -f ${tmpdir}/_.per[12]" server s1 { rxreq txresp -hdr "x-foo: foo" accept rxreq txresp -hdr "x-foo: bar" } -start varnish v1 \ -arg "-pfeature=+wait_silo" \ -arg "-pban_lurker_sleep=0" \ -arg "-sper1=deprecated_persistent,${tmpdir}/_.per1,5m" \ -arg "-sper2=deprecated_persistent,${tmpdir}/_.per2,5m" \ -vcl+backend { } varnish v1 -start client c1 { txreq rxresp expect resp.http.x-foo == "foo" } -run varnish v1 -cliok "ban req.url == /test" varnish v1 -cliok "ban req.url == /test" varnish v1 -cliok "ban.list" # Expect ban_magic plus the 2 we added varnish v1 -expect bans == 3 # Expect 1 of the 2 added to be marked dup varnish v1 -expect bans_dups == 1 # Expect ban_magic plus our 1 dup to be marked completed varnish v1 -expect bans_completed == 2 # Restart varnish v1 -stop varnish v1 
-start varnish v1 -cliok "ban.list" # Check that our object is still there client c1 { txreq rxresp expect resp.http.x-foo == "foo" } -run # Expect our duplicate varnish v1 -expect bans_dups == 1 varnish v1 -expect bans_completed == 1 varnish-7.5.0/bin/varnishtest/tests/p00010.vtc000066400000000000000000000042711457605730600210400ustar00rootroot00000000000000varnishtest "status codes" server s1 {} -start varnish v1 -vcl+backend { import std; sub vcl_recv { return (synth(std.integer(req.http.x-vtc-response-code, 999))); } } -start client c1 { txreq -hdr "x-vtc-response-code: 102" rxresp expect resp.reason == "Processing" txreq -hdr "x-vtc-response-code: 103" rxresp expect resp.reason == "Early Hints" txreq -hdr "x-vtc-response-code: 207" rxresp expect resp.reason == "Multi-Status" txreq -hdr "x-vtc-response-code: 208" rxresp expect resp.reason == "Already Reported" txreq -hdr "x-vtc-response-code: 226" rxresp expect resp.reason == "IM Used" txreq -hdr "x-vtc-response-code: 308" rxresp expect resp.reason == "Permanent Redirect" txreq -hdr "x-vtc-response-code: 421" rxresp expect resp.reason == "Misdirected Request" txreq -hdr "x-vtc-response-code: 422" rxresp expect resp.reason == "Unprocessable Entity" txreq -hdr "x-vtc-response-code: 423" rxresp expect resp.reason == "Locked" txreq -hdr "x-vtc-response-code: 424" rxresp expect resp.reason == "Failed Dependency" txreq -hdr "x-vtc-response-code: 425" rxresp expect resp.reason == "Too Early" txreq -hdr "x-vtc-response-code: 426" rxresp expect resp.reason == "Upgrade Required" txreq -hdr "x-vtc-response-code: 428" rxresp expect resp.reason == "Precondition Required" txreq -hdr "x-vtc-response-code: 429" rxresp expect resp.reason == "Too Many Requests" txreq -hdr "x-vtc-response-code: 431" rxresp expect resp.reason == "Request Header Fields Too Large" txreq -hdr "x-vtc-response-code: 451" rxresp expect resp.reason == "Unavailable For Legal Reasons" txreq -hdr "x-vtc-response-code: 506" rxresp expect resp.reason == "Variant Also Negotiates" txreq -hdr "x-vtc-response-code: 507" rxresp expect resp.reason == "Insufficient Storage" txreq -hdr "x-vtc-response-code: 508" rxresp expect resp.reason == "Loop Detected" txreq -hdr "x-vtc-response-code: 510" rxresp expect resp.reason == "Not Extended" txreq -hdr "x-vtc-response-code: 511" rxresp expect resp.reason == "Network Authentication Required" txreq -hdr "x-vtc-response-code: vtc" rxresp expect resp.status == 999 expect resp.reason == "Unknown HTTP Status" } -run varnish-7.5.0/bin/varnishtest/tests/r00102.vtc000066400000000000000000000010161457605730600210360ustar00rootroot00000000000000varnishtest "Test POST->GET conversion" server s1 { rxreq txresp \ -hdr "Connection: close" \ -body "012345\n" } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.method == "POST") { set req.method = "GET"; } } } -start client c1 { txreq -req POST -url "/" \ -body "123456789\n" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" txreq -req POST -url "/" \ -body "123456789\n" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1003 1002" } client c1 -run varnish-7.5.0/bin/varnishtest/tests/r00251.vtc000066400000000000000000000010621457605730600210440ustar00rootroot00000000000000varnishtest "Regression test for #251: segfault on regsub on missing http header" server s1 { rxreq txresp \ -hdr "Foobar: _barf_" \ -hdr "Connection: close" \ -body "012345\n" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.http.Snafu1 = "zoom" + regsub(beresp.http.Foomble, "ar", 
"\0\0") + "box"; } } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" expect resp.http.foobar == "_barf_" expect resp.http.snafu1 == "zoombox" } client c1 -run varnish-7.5.0/bin/varnishtest/tests/r00255.vtc000066400000000000000000000011251457605730600210500ustar00rootroot00000000000000varnishtest "Regression test for #255: Segfault on header token separation" server s1 { rxreq txresp \ -hdr "Date: Thu, 19 Jun 2008 21:14:49 GMT" \ -hdr "Expires: Thu, 19 Jun 2008 21:14:49 GMT" \ -hdr "Last-Modified: Sun, 27 Nov 2005 05:41:47 GMT" \ -hdr "Cache-Control: max-age =0" \ -body "012345\n" } -start varnish v1 -vcl+backend { sub vcl_backend_fetch { set bereq.backend = s1; } } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" expect resp.http.Cache-Control == "max-age =0" } client c1 -run varnish-7.5.0/bin/varnishtest/tests/r00262.vtc000066400000000000000000000007531457605730600210540ustar00rootroot00000000000000varnishtest "Test that inter-request whitespace trimming works" server s1 { rxreq txresp \ -hdr "Connection: close" \ -body "012345\n" } -start varnish v1 -arg "-p timeout_linger=20" -vcl+backend { } -start client c1 { send "GET / HTTP/1.1\r\nHost: foo\r\n\r\n\r\n" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" send "GET / HTTP/1.1\r\nHost: foo\r\n\r\n" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1003 1002" } client c1 -run varnish-7.5.0/bin/varnishtest/tests/r00292.vtc000066400000000000000000000012761457605730600210600ustar00rootroot00000000000000varnishtest "Header deletion test" # This test checks that the ->hdf flag which tracks Connection: header # deletes tracks the headers properly. server s1 { rxreq expect req.url == "/foo" expect req.http.hdr1 == expect req.http.hdr2 == "2" expect req.http.hdr3 == expect req.http.hdr4 == "4" expect req.http.hdr5 == expect req.http.hdr6 == "6" txresp -body "foobar" } -start varnish v1 -vcl+backend { sub vcl_recv { unset req.http.hdr1; unset req.http.hdr5; } } -start client c1 { txreq -url "/foo" \ -hdr "Connection: hdr3" \ -hdr "hdr1: 1" \ -hdr "hdr2: 2" \ -hdr "hdr3: 3" \ -hdr "hdr4: 4" \ -hdr "hdr5: 5" \ -hdr "hdr6: 6" rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r00306.vtc000066400000000000000000000014771457605730600210570ustar00rootroot00000000000000varnishtest "Regression test for ticket #306, random director ignoring good backend" server s1 { rxreq expect req.url == /foo txresp -body "foo1" rxreq expect req.url == /bar txresp -body "bar1" } -start server s2 { rxreq txresp -status 404 } -start varnish v1 -vcl { import directors; backend s1 { .host = "${s1_addr}"; .port = "${s1_port}"; } backend s2 { .host = "${s2_addr}"; .port = "${s2_port}"; } sub vcl_init { new foo = directors.random(); foo.add_backend(s1, 1); foo.add_backend(s2, 1); } sub vcl_backend_fetch { set bereq.backend = foo.backend(); } } -start varnish v1 -cliok "backend.set_health s2 sick" varnish v1 -cliok "backend.list" client c1 { timeout 10 txreq -url "/foo" rxresp expect resp.status == 200 txreq -url "/bar" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r00310.vtc000066400000000000000000000002741457605730600210440ustar00rootroot00000000000000varnishtest "Test obj.http.x-cache in vcl_hit" varnish v1 -errvcl {Variable is read only.} { backend foo { .host = "${localhost}"; } sub vcl_hit { set obj.http.x-cache = "hit"; } } 
varnish-7.5.0/bin/varnishtest/tests/r00318.vtc000066400000000000000000000004321457605730600210500ustar00rootroot00000000000000varnishtest "ESI with no body in response" server s1 { rxreq txresp -status 302 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; set beresp.uncacheable = true; } } -start client c1 { txreq rxresp expect resp.status == 302 } -run varnish-7.5.0/bin/varnishtest/tests/r00325.vtc000066400000000000000000000010201457605730600210400ustar00rootroot00000000000000varnishtest "Check lack of response-string" server s1 { rxreq send "HTTP/1.0 200 \r\n" send "Connection: close\r\n" send "\r\n" send "\r\n" send "FOO\r\n" } -start varnish v1 -vcl+backend {} -start client c1 { txreq -url /bar rxresp expect resp.status == 200 expect resp.reason == OK } -run server s1 { rxreq send "HTTP/1.0 200\r\n" send "Connection: close\r\n" send "\r\n" send "\r\n" send "FOO\r\n" } -start client c1 { txreq -url /foo rxresp expect resp.status == 200 expect resp.reason == OK } -run varnish-7.5.0/bin/varnishtest/tests/r00326.vtc000066400000000000000000000005751457605730600210570ustar00rootroot00000000000000varnishtest "No zerolength verbatim before " server s1 { rxreq txresp -body {} rxreq txresp -body "
FOO
\n" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 30 } -run varnish-7.5.0/bin/varnishtest/tests/r00345.vtc000066400000000000000000000010421457605730600210460ustar00rootroot00000000000000varnishtest "#345, ESI waitinglist trouble" barrier b1 cond 2 server s1 { rxreq txresp -body {} rxreq barrier b1 sync delay 1 txresp -body {DATA} } -start varnish v1 -arg "-p debug=+workspace" -vcl+backend { sub vcl_backend_response { if (bereq.url == "/") { set beresp.do_esi = true; } } } -start client c1 { txreq rxresp expect resp.bodylen == 4 } -start client c2 { txreq barrier b1 sync rxresp expect resp.bodylen == 4 } -run client c1 { txreq rxresp expect resp.bodylen == 4 } -run varnish-7.5.0/bin/varnishtest/tests/r00386.vtc000066400000000000000000000011121457605730600210510ustar00rootroot00000000000000varnishtest "#386, failure to insert include" server s1 { rxreq expect req.url == "/body" txresp -hdr "Last-Modified: Tue, 25 Nov 2008 00:00:00 GMT" -body "BODY" rxreq expect req.url == "/" txresp -body {} } -start varnish v1 -arg "-p debug=+workspace" -vcl+backend { sub vcl_backend_response { if (bereq.url == "/") { set beresp.do_esi = true; } } } -start client c1 { txreq -url /body rxresp expect resp.bodylen == 4 txreq -url / -hdr "If-Modified-Since: Tue, 25 Nov 2008 00:00:00 GMT" rxresp expect resp.bodylen == 11 } -run varnish-7.5.0/bin/varnishtest/tests/r00387.vtc000066400000000000000000000006711457605730600210630ustar00rootroot00000000000000varnishtest "Regression test for #387: too long chunk header" server s1 { rxreq non_fatal send "HTTP/1.1 200 OK\r\n" send "Transfer-encoding: chunked\r\n" send "\r\n" send "004\r\n1234\r\n" send "1000000000000000000001\r\n@\r\n" send "00000000\r\n" send "\r\n" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } } -start client c1 { txreq rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r00400.vtc000066400000000000000000000004371457605730600210450ustar00rootroot00000000000000varnishtest "Regression test for ticket 409" server s1 { rxreq expect req.url == "/" send "HTTP/1.0 400 Not funny\r\n" send "\r\n" send "12345\r\n" } -start varnish v1 -vcl+backend { } -start client c1 { txreq rxresp expect resp.status == 400 expect resp.bodylen == 7 } -run varnish-7.5.0/bin/varnishtest/tests/r00409.vtc000066400000000000000000000011561457605730600210550ustar00rootroot00000000000000varnishtest "Regression test for ticket 409" varnish v1 -errvcl {Unknown token '!' when looking for REGEX} { backend be none; sub vcl_recv { if ( req.url ~ ! 
"\.(png|jpg|gif|js|css)$" ) { return (pass); } } } varnish v1 -errvcl {Expression has type STRING, expected REGEX} { backend be none; sub vcl_recv { set req.http.regex = "\.(png|jpg|gif|js|css)$"; if (req.url ~ req.http.regex) { return (pass); } } } varnish v1 -errvcl {Expression has type STRING, expected REGEX} { backend be none; sub vcl_recv { set req.http.regex = "\?.*"; set req.url = regsub(req.url, req.http.regex, ""); } } varnish-7.5.0/bin/varnishtest/tests/r00411.vtc000066400000000000000000000015561457605730600210520ustar00rootroot00000000000000varnishtest "Test restart in vcl_deliver" server s1 { rxreq expect req.url == "/foo" txresp -status 303 -hdr "Location: /bar" rxreq expect req.url == "/bar" txresp -status 200 -body "1" } -start varnish v1 -vcl+backend { sub vcl_hit { if (obj.http.X-Magic-Redirect == "1") { set req.url = obj.http.Location; return (restart); } } sub vcl_backend_response { if (beresp.status == 303) { set beresp.ttl = 3m; set beresp.http.X-Magic-Redirect = "1"; } } sub vcl_deliver { if (resp.http.X-Magic-Redirect == "1") { return (restart); } set resp.http.X-Restarts = req.restarts; } } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 1 expect resp.http.X-Restarts == "2" txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 1 expect resp.http.X-Restarts == "1" } -run varnish-7.5.0/bin/varnishtest/tests/r00412.vtc000066400000000000000000000011061457605730600210420ustar00rootroot00000000000000varnishtest "Regression test for ticket 412" server s1 { rxreq expect req.url == "/" txresp -status 303 -hdr "Location: /foo" accept rxreq expect req.url == "/foo" txresp -body "12345" } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (beresp.status == 303) { set beresp.ttl = 60 s; set beresp.http.X-Magic-Redirect = "1"; return (deliver); } } sub vcl_deliver { if (resp.http.X-Magic-Redirect == "1") { set req.url = resp.http.Location; return (restart); } } } -start client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r00416.vtc000066400000000000000000000054561457605730600210620ustar00rootroot00000000000000varnishtest "Regression test for #416: a surplus of HTTP headers" server s1 { rxreq txresp \ -hdr hdr00=00 \ -hdr hdr01=01 \ -hdr hdr02=02 \ -hdr hdr03=03 \ -hdr hdr04=04 \ -hdr hdr05=05 \ -hdr hdr06=06 \ -hdr hdr07=07 \ -hdr hdr08=08 \ -hdr hdr09=09 \ -hdr hdr10=10 \ -hdr hdr11=11 \ -hdr hdr12=12 \ -hdr hdr13=13 \ -hdr hdr14=14 \ -hdr hdr15=15 \ -hdr hdr16=16 \ -hdr hdr17=17 \ -hdr hdr18=18 \ -hdr hdr19=19 \ -hdr hdr20=20 \ -hdr hdr21=21 \ -hdr hdr22=22 \ -hdr hdr23=23 \ -hdr hdr24=24 \ -hdr hdr25=25 \ -hdr hdr26=26 \ -hdr hdr27=27 \ -hdr hdr28=28 \ -hdr hdr29=29 \ -hdr hdr30=30 \ -hdr hdr31=31 \ -hdr hdr32=32 \ -hdr hdr33=33 \ -hdr hdr34=34 \ -hdr hdr35=35 \ -hdr hdr36=36 \ -hdr hdr37=37 \ -hdr hdr38=38 \ -hdr hdr39=39 \ -hdr hdr40=40 \ -hdr hdr41=41 \ -hdr hdr42=42 \ -hdr hdr43=43 \ -hdr hdr44=44 \ -hdr hdr45=45 \ -hdr hdr46=46 \ -hdr hdr47=47 \ -hdr hdr48=48 \ -hdr hdr49=49 \ -hdr hdr50=50 \ -hdr hdr51=51 \ -hdr hdr52=52 \ -hdr hdr53=53 \ -hdr hdr54=54 \ -hdr hdr55=55 \ -hdr hdr56=56 \ -hdr hdr57=57 \ -hdr hdr58=58 \ -hdr hdr59=59 \ -hdr hdr60=60 \ -hdr hdr61=61 \ -hdr hdr62=62 \ -hdr hdr63=63 \ -hdr hdr64=64 \ -hdr hdr65=65 \ -hdr hdr66=66 \ -hdr hdr67=67 \ -hdr hdr68=68 \ -hdr hdr69=69 \ -body "foo" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } } -start client c1 { txreq \ -hdr hdr00=00 \ -hdr hdr01=01 \ 
-hdr hdr02=02 \ -hdr hdr03=03 \ -hdr hdr04=04 \ -hdr hdr05=05 \ -hdr hdr06=06 \ -hdr hdr07=07 \ -hdr hdr08=08 \ -hdr hdr09=09 \ -hdr hdr10=10 \ -hdr hdr11=11 \ -hdr hdr12=12 \ -hdr hdr13=13 \ -hdr hdr14=14 \ -hdr hdr15=15 \ -hdr hdr16=16 \ -hdr hdr17=17 \ -hdr hdr18=18 \ -hdr hdr19=19 \ -hdr hdr20=20 \ -hdr hdr21=21 \ -hdr hdr22=22 \ -hdr hdr23=23 \ -hdr hdr24=24 \ -hdr hdr25=25 \ -hdr hdr26=26 \ -hdr hdr27=27 \ -hdr hdr28=28 \ -hdr hdr29=29 \ -hdr hdr30=30 \ -hdr hdr31=31 \ -hdr hdr32=32 \ -hdr hdr33=33 \ -hdr hdr34=34 \ -hdr hdr35=35 \ -hdr hdr36=36 \ -hdr hdr37=37 \ -hdr hdr38=38 \ -hdr hdr39=39 \ -hdr hdr40=40 \ -hdr hdr41=41 \ -hdr hdr42=42 \ -hdr hdr43=43 \ -hdr hdr44=44 \ -hdr hdr45=45 \ -hdr hdr46=46 \ -hdr hdr47=47 \ -hdr hdr48=48 \ -hdr hdr49=49 \ -hdr hdr50=50 \ -hdr hdr51=51 \ -hdr hdr52=52 \ -hdr hdr53=53 \ -hdr hdr54=54 \ -hdr hdr55=55 \ -hdr hdr56=56 \ -hdr hdr57=57 \ -hdr hdr58=58 \ -hdr hdr59=59 \ -hdr hdr60=60 \ -hdr hdr61=61 \ -hdr hdr62=62 \ -hdr hdr63=63 \ -hdr hdr64=64 \ -hdr hdr65=65 \ -hdr hdr66=66 \ -hdr hdr67=67 \ -hdr hdr68=68 \ -hdr hdr69=69 rxresp expect resp.status == 400 } -run client c1 { txreq rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r00425.vtc000066400000000000000000000012061457605730600210470ustar00rootroot00000000000000varnishtest "check late pass stalling" server s1 { rxreq txresp \ -hdr "Set-Cookie: foo=bar" \ -hdr "Expires: Thu, 19 Nov 1981 08:52:00 GMT" \ -body "1111\n" rxreq txresp \ -hdr "Set-Cookie: foo=bar" \ -hdr "Expires: Thu, 19 Nov 1981 08:52:00 GMT" \ -body "22222n" rxreq txresp \ -hdr "Set-Cookie: foo=bar" \ -hdr "Expires: Thu, 19 Nov 1981 08:52:00 GMT" \ -body "33333n" } -start varnish v1 -vcl+backend { } -start client c1 { txreq rxresp txreq rxresp txreq rxresp } -run varnish v1 -expect cache_hitpass == 0 varnish v1 -expect cache_hitmiss == 2 varnish v1 -expect cache_miss == 3 varnish-7.5.0/bin/varnishtest/tests/r00427.vtc000066400000000000000000000011771457605730600210600ustar00rootroot00000000000000varnishtest "client close in ESI delivery" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq txresp -body { } rxreq expect req.url == "/foo" barrier b1 sync barrier b2 sync txresp -body "[foo]" rxreq expect req.url == "/bar" txresp -body "[bar]" rxreq expect req.url == "/barf" txresp -body "[barf]" } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (bereq.url == "/") { set beresp.do_esi = true; } } } -start client c1 { txreq barrier b1 sync } -run client c1 { barrier b2 sync txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r00433.vtc000066400000000000000000000012231457605730600210450ustar00rootroot00000000000000varnishtest "noidx" server s1 { rxreq expect req.url == "/foo" txresp -hdr "Connection: close" send { FOO{ }FOO The end. 
} } -start server s2 { rxreq expect req.url == "/bar" txresp -body "bar" } -start varnish v1 -vcl+backend { sub vcl_backend_fetch { if (bereq.url == "/foo") { set bereq.backend = s1; } else { set bereq.backend = s2; } } sub vcl_backend_response { set beresp.do_esi = true; } } -start varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "debug.fragfetch 32" client c1 { txreq -url /foo rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r00445.vtc000066400000000000000000000006331457605730600210540ustar00rootroot00000000000000varnishtest "zero length ESI include segmens with chunked encoding" server s1 { rxreq expect req.url == "/" txresp -body {} rxreq expect req.url == "/bar" txresp } -start varnish v1 -arg "-p feature=+esi_disable_xml_check" -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { txreq rxresp expect resp.bodylen == 10 } -run varnish-7.5.0/bin/varnishtest/tests/r00466.vtc000066400000000000000000000016271457605730600210630ustar00rootroot00000000000000varnishtest "Check Range forwarding to backend" server s1 { rxreq expect req.url == "/foo" expect req.http.range == txresp \ -hdr "Foobar: _barf_" \ -hdr "Connection: close" \ -body "012345\n" accept rxreq expect req.url == "/bar" expect req.http.range == "200-300" txresp \ -status 206 \ -hdr "Foobar: _barf_" \ -body "012345\n" } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url ~ "bar") { return(pass); } } } -start varnish v1 -cliok "param.set debug +syncvsl" client c1 { txreq -url "/foo" -hdr "Range: bytes=1-2" rxresp expect resp.status == 206 expect resp.http.Content-Length == 2 expect resp.http.X-Varnish == "1001" } -run varnish v1 -cliok "param.set http_range_support off" client c2 { # NB: Deliberately bogus Range header txreq -url "/bar" -hdr "Range: 200-300" rxresp expect resp.status == 206 expect resp.http.X-Varnish == "1004" } -run varnish-7.5.0/bin/varnishtest/tests/r00476.vtc000066400000000000000000000012141457605730600210540ustar00rootroot00000000000000varnishtest "zero length ESI include segments with chunked encoding" server s1 { rxreq expect req.url == "/" txresp -body {
\0c} rxreq expect req.url == "/bar" txresp rxreq expect req.url == "/comment" txresp -body {\0c} rxreq expect req.url == "/nullbefore" txresp -body {\0c} } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { txreq rxresp expect resp.bodylen == 8 txreq -url /comment rxresp expect resp.bodylen == 13 txreq -url /nullbefore rxresp expect resp.bodylen == 8 } -run varnish-7.5.0/bin/varnishtest/tests/r00494.vtc000066400000000000000000000011271457605730600210570ustar00rootroot00000000000000varnishtest "HTTP continuation lines" server s1 { rxreq txresp -hdr "Foo: bar,\n\tbarf: fail" -body "xxx" rxreq txresp -hdr "Foo: bar,\n \t\n\tbarf: fail" -body "xxx" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.http.bar = beresp.http.foo; unset beresp.http.foo; } } -start client c1 { txreq rxresp expect resp.http.bar == "bar, barf: fail" expect resp.http.barf == expect resp.http.foo == txreq -url /2 rxresp expect resp.http.bar == "bar, barf: fail" expect resp.http.barf == expect resp.http.foo == } -run varnish-7.5.0/bin/varnishtest/tests/r00495.vtc000066400000000000000000000012071457605730600210570ustar00rootroot00000000000000varnishtest "HTTP 1.0 backend not getting reused" server s1 { rxreq txresp -proto HTTP/1.0 -status 201 -hdr "Connection: keep-alive" -body foo rxreq txresp -proto HTTP/1.0 -status 202 -hdr "Connection: close" -body foo expect_close accept rxreq txresp -proto HTTP/1.0 -status 203 -body foo expect_close accept rxreq txresp -proto HTTP/1.0 -status 205 -body bar } -start varnish v1 -vcl+backend { sub vcl_recv { return (pass); } } -start client c1 { txreq rxresp expect resp.status == 201 txreq rxresp expect resp.status == 202 txreq rxresp expect resp.status == 203 txreq rxresp expect resp.status == 205 } -run varnish-7.5.0/bin/varnishtest/tests/r00498.vtc000066400000000000000000000005131457605730600210610ustar00rootroot00000000000000varnishtest "very very very long return header" server s1 { rxreq expect req.url == "/" txresp -noserver -hdr "Location: ${string,repeat,8136,1}" -nodate -body {foo} } -start varnish v1 -vcl+backend { } -start varnish v1 -cliok "param.set http_resp_hdr_len 32768" client c1 { txreq rxresp expect resp.bodylen == 3 } -run varnish-7.5.0/bin/varnishtest/tests/r00502.vtc000066400000000000000000000006771457605730600210560ustar00rootroot00000000000000varnishtest "multi element ban" server s1 { rxreq txresp -hdr "foo: bar1" -body "1" rxreq txresp -hdr "foo: bar2" -body "22" } -start varnish v1 -vcl+backend { import std; sub vcl_recv { std.ban("req.url == / && obj.http.foo ~ bar1"); } } -start client c1 { txreq rxresp expect resp.http.foo == "bar1" txreq rxresp expect resp.http.foo == "bar2" txreq rxresp expect resp.http.foo == "bar2" } -run varnish v1 -cliok ban.list varnish-7.5.0/bin/varnishtest/tests/r00506.vtc000066400000000000000000000003031457605730600210440ustar00rootroot00000000000000varnishtest "Illegal HTTP status from backend" server s1 { rxreq send "HTTP/1.1 1000\n\nFoo" } -start varnish v1 -vcl+backend { sub vcl_recv { } } -start client c1 { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r00524.vtc000066400000000000000000000010441457605730600210470ustar00rootroot00000000000000varnishtest "Regression test for 524: HTTP/1.0 and ESI" server s1 { rxreq expect req.url == "/" txresp -body { } rxreq txresp -body "" } -start varnish v1 -vcl+backend { sub vcl_recv { // return (pass); } sub vcl_backend_response { set beresp.do_esi = true; } } -cliok "param.set timeout_idle 60" 
-start client c1 { txreq -proto HTTP/1.0 -hdr "Connection: kEep-alive" rxresp expect resp.status == 200 expect resp.bodylen == 16 } -run varnish-7.5.0/bin/varnishtest/tests/r00545.vtc000066400000000000000000000003411457605730600210510ustar00rootroot00000000000000varnishtest "High-bit chars" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_deliver { set resp.http.foo = "æøå"; } } -start client c1 { txreq rxresp expect resp.http.foo == "æøå" } -run varnish-7.5.0/bin/varnishtest/tests/r00549.vtc000066400000000000000000000003171457605730600210600ustar00rootroot00000000000000varnishtest "Regression test for bad backend reply with ctrl char." server s1 { rxreq send "HTTP/1.1 200 OK\013\r\n\r\nTest" } -start varnish v1 -vcl+backend {} -start client c1 { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r00558.vtc000066400000000000000000000004031457605730600210540ustar00rootroot00000000000000varnishtest "error from vcl_recv{} has no numeric code" # XXX: V4 makes this an obsolete test ? server s1 { } -start varnish v1 -vcl+backend { sub vcl_recv { return (synth(501)); } } -start client c1 { txreq rxresp expect resp.status == 501 } -run varnish-7.5.0/bin/varnishtest/tests/r00561.vtc000066400000000000000000000004001457605730600210430ustar00rootroot00000000000000varnishtest "Junk request should not go to vcl_synth" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_synth { return (restart); } } -start client c1 { send "sljdslf\r\n\r\n" delay .1 } -run client c1 { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r00590.vtc000066400000000000000000000032511457605730600210540ustar00rootroot00000000000000varnishtest "Regression test for 590" server s1 { rxreq expect req.url == "/" txresp -body { } rxreq txresp -body "foo" } -start varnish v1 -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_response { if (bereq.url == "/") { set beresp.do_esi = true; } } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 140 } -run varnish-7.5.0/bin/varnishtest/tests/r00612.vtc000066400000000000000000000013251457605730600210470ustar00rootroot00000000000000varnishtest "Url workspace gets overwritten/reused" server s1 { rxreq expect req.url == "/" txresp -body { } rxreq expect req.url == "/1" txresp -body "11111" rxreq expect req.url == "/2" txresp -body "22222" rxreq expect req.url == "/3" txresp -body "33333" rxreq expect req.url == "/4" txresp -body "44444" rxreq expect req.url == "/5" txresp -body "55555" } -start varnish v1 -arg "-p feature=+esi_disable_xml_check" -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r00641.vtc000066400000000000000000000010641457605730600210510ustar00rootroot00000000000000varnishtest "Regression test for 524: HTTP/1.0 and ESI" server s1 { rxreq expect req.url == "/" txresp -proto HTTP/1.0 -body { <_esi:include src="/foo"/> } } -start varnish v1 -vcl+backend { sub vcl_recv { // return (pass); } sub vcl_backend_response { set beresp.do_esi = true; } } -cliok "param.set timeout_idle 60" -start client c1 { txreq -proto HTTP/1.1 rxresp # XXX this is the problem: expect resp.proto == HTTP/1.1 expect resp.status == 200 expect resp.bodylen == 37 } -run varnish-7.5.0/bin/varnishtest/tests/r00655.vtc000066400000000000000000000001721457605730600210550ustar00rootroot00000000000000varnishtest "Test nested /*...*/ comments " varnish v1 -errvcl {/* ... 
*/ comment contains /*} { /* foo /* bar */ } varnish-7.5.0/bin/varnishtest/tests/r00667.vtc000066400000000000000000000013761457605730600210670ustar00rootroot00000000000000varnishtest "things stuck on busy object" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq barrier b1 sync barrier b2 sync # There is a race in varnish between the first request releasing # the backend connection, and the second request trying to get it # which makes reuse of backend connection sometimes work and # sometimes not. Solve by never reusing the backend connection. txresp -hdr "Connection: close" -bodylen 2 accept rxreq txresp -bodylen 5 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 0s; } } -start client c1 { txreq rxresp expect resp.bodylen == 2 } -start client c2 { barrier b1 sync txreq barrier b2 sync rxresp expect resp.bodylen == 5 } -start client c1 -wait client c2 -wait varnish-7.5.0/bin/varnishtest/tests/r00679.vtc000066400000000000000000000003131457605730600210600ustar00rootroot00000000000000varnishtest "pass + HEAD" server s1 { rxreq expect req.method == "HEAD" txresp } -start varnish v1 -vcl+backend {} -start client c1 { txreq -req HEAD -hdr "Cookie: foo=bar" rxresp -no_obj } -run varnish-7.5.0/bin/varnishtest/tests/r00686.vtc000066400000000000000000000007171457605730600210660ustar00rootroot00000000000000varnishtest "Check that cache-control headers are collapsed" server s1 { rxreq txresp -hdr "Cache-Control: foo" -hdr "Cache-control: bar" -bodylen 4 } -start varnish v1 -vcl+backend { sub vcl_deliver { set resp.http.Foo = req.http.cache-control; } } -start client c1 { txreq -hdr "Cache-Control: froo" -hdr "Cache-control: boz" rxresp expect resp.http.foo == "froo, boz" expect resp.http.cache-control == "foo, bar" expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r00694.vtc000066400000000000000000000007241457605730600210630ustar00rootroot00000000000000varnishtest "Wrong calculation of last storage segment length" server s1 { rxreq txresp -hdr "Transfer-encoding: chunked" # This is chunksize (128k) + 2 to force to chunks to be allocated chunkedlen 131074 chunkedlen 0 } -start varnish v1 -vcl+backend "" -start client c1 { txreq -proto HTTP/1.0 rxresp # Chunked encoding streaming HTTP/1.0 client turns # into EOF body delivery. 
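# The HTTP/1.0 client cannot be sent a chunked response, so Varnish
# delivers the streamed body without a length and marks the end of it
# by closing the connection; hence the Connection: close expectation
# and the full-length body check below.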
expect resp.http.connection == close expect resp.bodylen == 131074 } -run varnish-7.5.0/bin/varnishtest/tests/r00700.vtc000066400000000000000000000003151457605730600210430ustar00rootroot00000000000000varnishtest "check TAB in 3 header field" server s1 { rxreq send "HTTP/1.1 666 foo\tbar\n\nFOO" } -start varnish v1 -vcl+backend {} -start client c1 { txreq rxresp expect resp.status == 666 } -run varnish-7.5.0/bin/varnishtest/tests/r00702.vtc000066400000000000000000000005271457605730600210520ustar00rootroot00000000000000varnishtest "Range bug" server s1 { rxreq txresp -bodylen 100 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } } -start varnish v1 -cliok "param.set http_range_support on" client c1 { txreq -hdr "Range: bytes=50-200" rxresp expect resp.status == 206 expect resp.bodylen == 50 } -run varnish-7.5.0/bin/varnishtest/tests/r00704.vtc000066400000000000000000000005241457605730600210510ustar00rootroot00000000000000varnishtest "Range bug" server s1 { rxreq txresp -bodylen 100 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } } -start varnish v1 -cliok "param.set http_range_support on" client c1 { txreq -hdr "Range: bytes=-20" rxresp expect resp.status == 206 expect resp.bodylen == 20 } -run varnish-7.5.0/bin/varnishtest/tests/r00722.vtc000066400000000000000000000013231457605730600210470ustar00rootroot00000000000000varnishtest "Director cleanup fails on vcl.discard" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import directors; backend b2 { .host = "${s1_addr}"; .port = "${s1_port}"; } backend b3 { .host = "${s1_addr}"; .port = "${s1_port}"; } sub vcl_init { new foo = directors.random(); foo.add_backend(s1, 1); foo.add_backend(b2, 1); foo.add_backend(b3, 1); } sub vcl_backend_fetch { set bereq.backend = foo.backend(); } } -start varnish v1 -cliok "vcl.list" varnish v1 -cliok "backend.list" varnish v1 -vcl+backend { } varnish v1 -cliok "vcl.list" varnish v1 -cliok "backend.list" varnish v1 -cliok "vcl.list" varnish v1 -cliok "vcl.discard vcl1" client c1 { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r00733.vtc000066400000000000000000000003441457605730600210530ustar00rootroot00000000000000varnishtest "HTTP/1.1 Backend sends no length hint" server s1 { rxreq send "HTTP/1.0 200 OK\n" send "\n" send "12345" } -start varnish v1 -vcl+backend {} -start client c1 { txreq rxresp expect resp.bodylen == 5 } -run varnish-7.5.0/bin/varnishtest/tests/r00742.vtc000066400000000000000000000005261457605730600210550ustar00rootroot00000000000000varnishtest "% escapes in VCL source and vcl.show" server s1 { rxreq txresp } -start varnish v1 -cliok "param.set vcc_feature +allow_inline_c" -vcl+backend { C{ #include }C sub vcl_recv { C{ printf("%s %s %s", "foo", "bar", "barf"); }C } } -start client c1 { txreq rxresp } -run varnish v1 -cliok "vcl.show vcl1" varnish-7.5.0/bin/varnishtest/tests/r00763.vtc000066400000000000000000000005701457605730600210570ustar00rootroot00000000000000varnishtest "Vary header with extra colon" server s1 { rxreq txresp -hdr "Vary:: foo" -hdr "Foo: bar" -bodylen 9 rxreq txresp -hdr "Vary:: foo" -hdr "Foo: bar" -bodylen 8 } -start varnish v1 -vcl+backend {} -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 9 txreq rxresp expect resp.status == 200 expect resp.bodylen == 8 } -run varnish-7.5.0/bin/varnishtest/tests/r00769.vtc000066400000000000000000000013451457605730600210660ustar00rootroot00000000000000varnishtest "Test that set status code is 
readable again for obj.status and beresp.status" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url ~ "^/test1") { return (synth(700)); } } sub vcl_backend_response { set beresp.status = 404; set beresp.http.X-status = beresp.status; return (deliver); } sub vcl_synth { if (resp.status == 700) { set resp.status=404; set resp.http.X-status = resp.status; return (deliver); } } } -start client c1 { txreq -url "/test1" rxresp expect resp.status == 404 expect resp.http.X-status == 404 } client c2 { txreq -url "/test2" rxresp expect resp.status == 404 expect resp.http.X-status == 404 } client c1 -run client c2 -run varnish-7.5.0/bin/varnishtest/tests/r00776.vtc000066400000000000000000000004651457605730600210660ustar00rootroot00000000000000varnishtest "Edge case of chunked encoding, trimming storage to length." server s1 { rxreq txresp -nolen -hdr "Transfer-encoding: chunked" chunkedlen 4096 chunkedlen 0 } -start varnish v1 \ -arg "-p fetch_chunksize=4k" \ -arg "-s default,1m" -vcl+backend { } -start client c1 { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r00781.vtc000066400000000000000000000007361457605730600210630ustar00rootroot00000000000000varnishtest "NULL assignment to line1 fields." server s1 { rxreq txresp -bodylen 10 } -start varnish v1 -vcl+backend { sub vcl_recv { set req.url = req.http.foo; } } -start client c1 { txreq rxresp expect resp.status == 503 expect resp.reason == "VCL failed" } -run varnish v1 -vcl+backend { sub vcl_backend_fetch { set bereq.url = bereq.http.foo; } } client c1 { txreq rxresp expect resp.status == 503 expect resp.reason == "Service Unavailable" } -run varnish-7.5.0/bin/varnishtest/tests/r00789.vtc000066400000000000000000000006431457605730600210700ustar00rootroot00000000000000varnishtest "pass should not filter Content-Range header" server s1 { rxreq expect req.http.range == "bytes=1-1" txresp -status 206 -hdr "Content-Range: bytes 1-1/20" -body "F" } -start varnish v1 -vcl+backend { sub vcl_recv {return(pass);} } -start client c1 { txreq -hdr "Range: bytes=1-1" rxresp expect resp.status == 206 expect resp.bodylen == 1 expect resp.http.content-range == "bytes 1-1/20" } -run varnish-7.5.0/bin/varnishtest/tests/r00795.vtc000066400000000000000000000014031457605730600210600ustar00rootroot00000000000000varnishtest "Content-Length in pass'ed 304 does not trigger body fetch" # XXX: Description doesn't make sense relative to actual goings on... 
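# What actually happens below: both URLs are primed into the cache,
# the test waits so a fresh ${date} is later than the cached objects,
# and both conditional requests must then be answered with bodyless
# 304 responses out of the cache.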
server s1 { rxreq txresp -hdr "Last-Modified: ${date}" -body "FOO" rxreq expect req.url == "/bar" txresp -body "FOOBAR" } -start varnish v1 -vcl+backend { } -start # First load the objects into cache client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 3 txreq -url "/bar" rxresp expect resp.status == 200 expect resp.bodylen == 6 } -run # Wait, so we know ${date} to be higher delay 1 client c1 { txreq -hdr "If-Modified-Since: ${date}" rxresp -no_obj expect resp.status == 304 txreq -url "/bar" -hdr "If-Modified-Since: ${date}" rxresp -no_obj expect resp.status == 304 } -run varnish-7.5.0/bin/varnishtest/tests/r00803.vtc000066400000000000000000000004521457605730600210510ustar00rootroot00000000000000varnishtest "304 response in pass mode" server s1 { rxreq txresp -status 304 \ -nolen \ -hdr "Date: Mon, 25 Oct 2010 06:34:06 GMT" delay 600 } -start varnish v1 -vcl+backend { sub vcl_recv { return (pass); } } -start client c1 { txreq rxresp -no_obj expect resp.status == 304 } -run varnish-7.5.0/bin/varnishtest/tests/r00806.vtc000066400000000000000000000007441457605730600210600ustar00rootroot00000000000000varnishtest "Content-Length in pass'ed 304 does not trigger body fetch" server s1 { rxreq txresp -status 304 \ -nolen \ -hdr "Date: Mon, 25 Oct 2010 06:34:06 GMT" \ -hdr "Connection: close" \ -hdr "Content-Length: 100" } -start varnish v1 -vcl+backend { sub vcl_recv { return(pass);} sub vcl_deliver { set resp.http.CL = resp.http.content-length; } } -start client c1 { txreq rxresphdrs expect resp.status == 304 expect resp.http.cl == 100 expect_close } -run varnish-7.5.0/bin/varnishtest/tests/r00832.vtc000066400000000000000000000001521457605730600210500ustar00rootroot00000000000000varnishtest "Regression #832 IPV6 parse bug" varnish v1 -vcl { backend default { .host = "[::]"; } } varnish-7.5.0/bin/varnishtest/tests/r00861.vtc000066400000000000000000000035111457605730600210540ustar00rootroot00000000000000varnishtest "Regression test for ESI/Gzip issues in #861" server s1 { rxreq expect req.url == "/1" txresp -body {

} rxreq expect req.url == "/foo" txresp -body rxreq expect req.url == "/bar" txresp -body rxreq expect req.url == "/barf" txresp -body {[{"program":true,"id":972389,"vendorId":"15451701","starttime":1297777500000,"endtime":1297783500000,"title":"Swimming Pool","oTitle":"true","genre":"0x10x0","timeshiftEnabled":true},{"program":true,"id":972391,"vendorId":"15451702","starttime":1297783500000,"endtime":1297785000000,"title":"Fashion -Trends","oTitle":null,"genre":"0x30x0","timeshiftEnabled":true},{"program":true,"id":972384,"vendorId":"15451703","starttime":1297785000000,"endtime":1297786500000,"title":"Fashion - mænd","oTitle":null,"genre":"0x30x0","timeshiftEnabled":true},{"program":true,"id":972388,"vendorId":"15451704","starttime":1297786500000,"endtime":1297789800000,"title":"The Day Before","oTitle":"true","genre":"0x30x0","timeshiftEnabled":true},{"program":true,"id":972393,"vendorId":"15451705","starttime":1297789800000,"endtime":1297793100000,"title":"Kessels øje","oTitle":null,"genre":"0x20x3","timeshiftEnabled":true}]} rxreq expect req.url == "/2" txresp -body { } } -start varnish v1 \ -vcl+backend { sub vcl_backend_response { if (bereq.url == "/1" || bereq.url == "/2") { set beresp.do_esi = true; set beresp.do_gzip = true; } } } -start client c1 { txreq -url "/1" rxresp expect resp.http.Content-Encoding == expect resp.bodylen == 22 txreq -url "/barf" -hdr "Accept-Encoding: gzip" rxresp expect resp.http.Content-Encoding == expect resp.bodylen == 909 txreq -url "/2" -hdr "Accept-Encoding: gzip" rxresp gunzip expect resp.bodylen == 910 } -run varnish-7.5.0/bin/varnishtest/tests/r00873.vtc000066400000000000000000000010431457605730600210550ustar00rootroot00000000000000varnishtest "Ticket #873" server s1 { rxreq txresp -nolen -hdr "Transfer-encoding: chunked" chunked {} chunked {} chunkedlen 0 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; set beresp.do_gzip = true; } } -start client c1 { txreq -hdr "Accept-Encoding: gzip" rxresp expect resp.http.content-encoding == gzip gunzip expect resp.status == 200 expect resp.bodylen == 75 } -run varnish v1 -expect esi_errors == 0 varnish-7.5.0/bin/varnishtest/tests/r00878.vtc000066400000000000000000000010251457605730600210620ustar00rootroot00000000000000varnishtest "Loading vmods in subsequent VCLs" server s1 { rxreq txresp -bodylen 4 } -start varnish v1 -vcl+backend { import debug; sub vcl_deliver { set resp.http.who = debug.author(phk); } } -start client c1 { txreq rxresp } -run varnish v1 -vcl+backend { import debug; sub vcl_deliver { set resp.http.who = debug.author(des); } } client c1 { txreq rxresp } -run varnish v1 -vcl+backend { import debug; sub vcl_deliver { set resp.http.who = debug.author(kristian); } } client c1 { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r00894.vtc000066400000000000000000000007231457605730600210640ustar00rootroot00000000000000varnishtest "Ticket #894" server s1 { rxreq txresp -body {} } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start logexpect l1 -v v1 -g raw { expect * * Fetch_Body expect 0 = ESI_xmlerror {^ERR after 5 ESI 1.0 has multiple src= attributes$} expect 0 = BackendClose } -start client c1 { txreq rxresp expect resp.bodylen == 10 } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r00896.vtc000066400000000000000000000010571457605730600210670ustar00rootroot00000000000000varnishtest "Ticket #896, strings over 256 bytes" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_recv { 
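# Ticket #896: a regex literal longer than 256 bytes has to compile
# and evaluate; the request's host never matches it, so the synth(500)
# path is not taken and the client expects a plain 200.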
if (req.http.host ~ "^(abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij|abcdefghij)") { return (synth(500,"not ok")); } } } -start client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r00902.vtc000066400000000000000000000010011457605730600210400ustar00rootroot00000000000000varnishtest "Ticket #902 http_CollectHdr() failure on consequitive headers" server s1 { rxreq txresp \ -hdr "Server: Microsoft-IIS/5.0" \ -hdr "Cache-Control: A" \ -hdr "Cache-Control: B" \ -hdr "Cache-Control: C" \ -hdr "Cache-Control: D" \ -hdr "Foo: bar" \ -bodylen 5 } -start varnish v1 -vcl+backend { } -start varnish v1 -cliok "param.set debug +req_state" client c1 { txreq -hdr "foo: /foo" rxresp expect resp.http.cache-control == "A, B, C, D" expect resp.http.foo == "bar" } -run varnish-7.5.0/bin/varnishtest/tests/r00911.vtc000066400000000000000000000005121457605730600210460ustar00rootroot00000000000000varnishtest "vcc_feature::err_unref should also cover subs" server s1 { rxreq expect req.url == "/bar" txresp -body "foobar" } -start varnish v1 -arg "-p vcc_feature=-err_unref" -vcl+backend { sub foobar { set req.http.foobar = "foobar"; } } -start client c1 { txreq -url /bar rxresp expect resp.bodylen == 6 } -run varnish-7.5.0/bin/varnishtest/tests/r00913.vtc000066400000000000000000000005711457605730600210550ustar00rootroot00000000000000varnishtest "test regsub(NULL)" server s1 { rxreq expect req.url == "/bar" txresp -body "foobar" } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (beresp.http.bar ~ "$") { set beresp.http.foo = regsub(beresp.http.bar, "$", "XXX"); } } } -start client c1 { txreq -url /bar rxresp expect resp.bodylen == 6 expect resp.http.foo == "XXX" } -run varnish-7.5.0/bin/varnishtest/tests/r00915.vtc000066400000000000000000000007401457605730600210550ustar00rootroot00000000000000varnishtest "error object allocation with persistent" feature persistent_storage server s1 { rxreq txresp } -start shell "rm -f ${tmpdir}/_.per" varnish v1 \ -arg "-pfeature=+wait_silo" \ -arg "-sdeprecated_persistent,${tmpdir}/_.per,5m" \ -vcl+backend { sub vcl_backend_response { set beresp.uncacheable = false; set beresp.ttl = 0s; set beresp.status = 751; return (deliver); } } -start client c1 { txreq -url "/" rxresp expect resp.status == 751 } -run varnish-7.5.0/bin/varnishtest/tests/r00916.vtc000066400000000000000000000004501457605730600210540ustar00rootroot00000000000000varnishtest "VCC reference bug" server s1 { rxreq txresp -body "FOO" } -start varnish v1 -errvcl {Undefined backend s-1, first reference:} { backend b { .host = "${localhost}"; } sub s1 { } sub vcl_backend_response { if (bereq.backend == s-1){ set bereq.backend = s-1; } } } varnish-7.5.0/bin/varnishtest/tests/r00917.vtc000066400000000000000000000005321457605730600210560ustar00rootroot00000000000000varnishtest "test here documents for bans" server s1 { rxreq expect req.url == "/bar" txresp -body "foobar" } -start varnish v1 -vcl+backend { } -start client c1 { txreq -url /bar rxresp expect resp.bodylen == 6 } -run varnish v1 -cliok {ban req.url ~ << foo \.bar foo } varnish v1 -cliok ban.list varnish v1 -expect bans_added == 2 varnish-7.5.0/bin/varnishtest/tests/r00921.vtc000066400000000000000000000006171457605730600210550ustar00rootroot00000000000000varnishtest "VCC type issue in 
regsub arg 1" server s1 { rxreq expect req.http.foo == "${localhost}" expect req.http.bar == "${localhost}" txresp } -start varnish v1 -vcl+backend { sub vcl_recv { set req.http.bar = regsub(req.url, ".*", client.ip); set req.http.foo = regsub(client.ip, "#.*", client.ip); } } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r00940.vtc000066400000000000000000000035351457605730600210600ustar00rootroot00000000000000varnishtest "GZIP, ESI and etags" server s1 { rxreq expect req.url == /1 expect req.http.accept-encoding == "gzip" txresp -hdr {ETag: "foo"} -gziplen 41 rxreq expect req.url == /2 txresp -hdr {ETag: "foo2"} -bodylen 42 rxreq expect req.url == /3 txresp -hdr {ETag: "foo3"} -body {

foo

} rxreq expect req.url == /4 txresp -hdr {ETag: "foo4"} -gzipbody { foo } } -start varnish v1 -syntax 4.0 -vcl+backend { sub vcl_backend_response { if (bereq.url == "/2") { set beresp.do_gzip = true; } if (bereq.url == "/3" || bereq.url == "/4") { set beresp.do_esi = true; } } sub vcl_deliver { if (req.http.foo == "noesi") { set req.esi = false; } } } -start client c1 { # Straight through gzip, strong etag survives txreq -url /1 -hdr "Accept-Encoding: gzip" rxresp expect resp.http.etag == {"foo"} gunzip expect resp.bodylen == 41 delay .2 # gzip in, gunzip out, weak etag txreq -url /1 rxresp expect resp.http.etag == {W/"foo"} expect resp.bodylen == 41 delay .2 # Gzip on input, weak etag txreq -url /2 -hdr "Accept-Encoding: gzip" rxresp expect resp.http.etag == {W/"foo2"} gunzip expect resp.bodylen == 42 delay .2 # Gzip on input, gunzip on output, weak etag txreq -url /2 rxresp expect resp.http.etag == {W/"foo2"} expect resp.bodylen == 42 delay .2 # ESI expansion, weak etag txreq -url /3 rxresp expect resp.http.etag == {W/"foo3"} expect resp.bodylen == 8 delay .2 # ESI parse, but no expansion, strong etag txreq -url /3 -hdr "foo: noesi" rxresp expect resp.http.etag == {"foo3"} expect resp.bodylen == 38 delay .2 # ESI parse, no expansion, but re-gzipping, weak etag txreq -url /4 -hdr "foo: noesi" -hdr "Accept-Encoding: gzip" rxresp expect resp.http.etag == {W/"foo4"} gunzip expect resp.bodylen == 40 } -run varnish-7.5.0/bin/varnishtest/tests/r00941.vtc000066400000000000000000000005541457605730600210570ustar00rootroot00000000000000varnishtest "beresp.ttl set in vcl takes effect" server s1 { rxreq txresp -hdr "Cache-control: max-age=1" -body "FOO" rxreq txresp -body "FOOBAR" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 1000s; } } -start client c1 { txreq rxresp expect resp.bodylen == 3 delay 2 txreq rxresp expect resp.bodylen == 3 } -run varnish-7.5.0/bin/varnishtest/tests/r00942.vtc000066400000000000000000000023371457605730600210610ustar00rootroot00000000000000varnishtest "#942 junk after gzip from backend" server s1 { rxreq txresp -nolen \ -hdr "Content-Encoding: gzip" \ -hdr "Connection: close" \ -hdr "Transfer-Encoding: Chunked" send "14\r\n" # An empty gzip file: sendhex "1f8b" sendhex "08" sendhex "00" sendhex "00000000" sendhex "00" sendhex "03" sendhex "0300" sendhex "00000000" sendhex "00000000" send "\r\n" chunked "FOOBAR" non_fatal chunkedlen 0 } -start varnish v1 \ -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; if (bereq.http.foo == "foo") { set beresp.do_gunzip = true; } } } varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -start client c1 { txreq -url /1 rxresp expect resp.status == 503 } -run server s1 -wait { fatal rxreq txresp -nolen \ -hdr "Content-Encoding: gzip" \ -hdr "Connection: close" \ -hdr "Transfer-Encoding: Chunked" send "14\r\n" # An empty gzip file: sendhex "1f8b" sendhex "08" sendhex "00" sendhex "00000000" sendhex "00" sendhex "03" sendhex "0300" sendhex "00000000" sendhex "00000000" send "\r\n" chunked "FOOBAR" non_fatal chunkedlen 0 } -start client c1 { txreq -url /2 -hdr "Foo: foo" rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r00956.vtc000066400000000000000000000021131457605730600210560ustar00rootroot00000000000000varnishtest "obj.ttl relative/absolute" server s1 { rxreq txresp -hdr "Cache-Control: max-age=23" -hdr "Age: 4" -bodylen 40 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.http.fooA = beresp.ttl; set beresp.ttl = 10s; set 
beresp.http.fooB = beresp.ttl; set beresp.http.ageA = beresp.age; } sub vcl_hit { set req.http.foo = obj.ttl; set req.http.ageB = obj.age; } sub vcl_deliver { set resp.http.foo = req.http.foo; set resp.http.ageB = req.http.ageB; } } -start client c1 { txreq rxresp expect resp.bodylen == 40 expect resp.http.fooA == 19.000 expect resp.http.fooB == 10.000 expect resp.http.ageA == 4.000 delay 2 txreq rxresp expect resp.bodylen == 40 expect resp.http.fooA == 19.000 expect resp.http.fooB == 10.000 expect resp.http.foo <= 8.000 expect resp.http.ageA == 4.000 expect resp.http.ageB >= 6.000 delay 2 txreq rxresp expect resp.bodylen == 40 expect resp.http.fooA == 19.000 expect resp.http.fooB == 10.000 expect resp.http.foo <= 6.000 expect resp.http.ageA == 4.000 expect resp.http.ageB >= 8.000 } -run varnish-7.5.0/bin/varnishtest/tests/r00961.vtc000066400000000000000000000014301457605730600210530ustar00rootroot00000000000000varnishtest "Test XML 1.0 entity references" server s1 { rxreq expect req.url == "/" txresp -body { } rxreq expect req.url == "/t&t" txresp -body "1" rxreq expect req.url == "/tt" txresp -body "333" rxreq expect req.url == {/t't} txresp -body "4444" rxreq expect req.url == {/t"t} txresp -body "55555" } -start varnish v1 -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_response { if (bereq.url == "/") { set beresp.do_esi = true; } } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 32 } -run varnish-7.5.0/bin/varnishtest/tests/r00962.vtc000066400000000000000000000024341457605730600210610ustar00rootroot00000000000000varnishtest "Test address remapping" feature persistent_storage # VM-remapping is too random on OSX feature cmd {test $(uname) != "Darwin"} # Same on some hardened Linux feature cmd "test ! -c /dev/grsec" server s1 { rxreq txresp } -start shell "rm -f ${tmpdir}/_.per?" 
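# Two persistent silos back the cache below; v1 is stopped and restarted,
# and then v2 is started with the silo arguments in the opposite order.
# The object cached under X-Varnish 1001 is expected to survive both,
# which is what exercises address remapping of the persistent storage.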
varnish v1 \ -arg "-pfeature=+wait_silo" \ -arg "-sdeprecated_persistent,${tmpdir}/_.per1,5m" \ -arg "-sdeprecated_persistent,${tmpdir}/_.per2,5m" \ -syntax 4.0 \ -vcl+backend { sub vcl_backend_response { set beresp.storage = storage.s0; } } -start varnish v1 -stop varnish v1 -start client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" } -run varnish v1 -cliok "storage.list" varnish v1 -cliok "debug.persistent s0 dump" varnish v1 -cliok "debug.persistent s0 sync" varnish v1 -stop server s1 { rxreq txresp -status 400 -reason "Persistent Object Not Found" } -start varnish v2 \ -arg "-pfeature=+wait_silo" \ -arg "-sdeprecated_persistent,${tmpdir}/_.per2,5m" \ -arg "-sdeprecated_persistent,${tmpdir}/_.per1,5m" \ -vcl+backend { } -start client c1 -connect ${v2_sock} { txreq -url "/" rxresp expect resp.reason != "Persistent Object Not Found" expect resp.status == 200 expect resp.http.X-Varnish == "1001 1002" } -run # shell "rm -f /tmp/__v1/_.per" varnish-7.5.0/bin/varnishtest/tests/r00963.vtc000066400000000000000000000011111457605730600210510ustar00rootroot00000000000000varnishtest "Test hsh_rush" barrier b1 cond 5 server s1 { rxreq barrier b1 sync txresp -bodylen 10 } -start varnish v1 -vcl+backend { } -start varnish v1 -cliok "param.set rush_exponent 2" client c1 { txreq barrier b1 sync rxresp expect resp.bodylen == 10 } -start client c2 { txreq barrier b1 sync rxresp expect resp.bodylen == 10 } -start client c3 { txreq barrier b1 sync rxresp expect resp.bodylen == 10 } -start client c4 { txreq barrier b1 sync rxresp expect resp.bodylen == 10 } -start client c1 -wait client c2 -wait client c3 -wait client c4 -wait varnish-7.5.0/bin/varnishtest/tests/r00965.vtc000066400000000000000000000017051457605730600210640ustar00rootroot00000000000000varnishtest "restart in vcl_miss #965" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.http.X-Banned == "check") { unset req.http.X-Banned; } elseif (req.restarts == 0) { set req.http.X-Banned = "check"; if (req.http.x-pass) { return (pass); } else { return (hash); } } } sub vcl_hash { ## Check if they have a ban in the cache, or if they are going to be banned in cache. 
if (req.http.X-Banned) { hash_data(client.ip); return (lookup); } } sub vcl_synth { if (resp.status == 988) { return (restart); } } sub vcl_miss { if (req.http.X-Banned == "check") { return (synth(988,"restarting")); } } sub vcl_pass { if (req.http.X-Banned == "check") { return (synth(988,"restarting")); } } } -start client c1 { txreq rxresp expect resp.status == 200 txreq rxresp expect resp.status == 200 txreq -hdr "X-Pass: 1" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r00972.vtc000066400000000000000000000005461457605730600210640ustar00rootroot00000000000000varnishtest "Test conditional delivery and do_stream" server s1 { rxreq txresp -hdr {ETag: "foo"} -body "11111\n" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = true; } } -start client c1 { txreq -hdr {If-None-Match: "foo"} rxresp -no_obj expect resp.status == 304 expect resp.http.etag == {"foo"} } -run varnish-7.5.0/bin/varnishtest/tests/r00979.vtc000066400000000000000000000010241457605730600210630ustar00rootroot00000000000000varnishtest "r00979.vtc Test restart when do_stream in vcl_deliver" server s1 { rxreq txresp -hdr "Connection: close" -gzipbody "1" expect_close accept rxreq txresp -body "11" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = true; set beresp.uncacheable = true; } sub vcl_deliver { if (req.restarts == 0) { return (restart); } } } -start varnish v1 -cliok "param.set debug +syncvsl" client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 2 } -run varnish-7.5.0/bin/varnishtest/tests/r00980.vtc000066400000000000000000000011601457605730600210540ustar00rootroot00000000000000varnishtest "r00980 test gzip on fetch with content_length and do_stream" server s1 { rxreq expect req.url == "/foobar" expect req.http.accept-encoding == "gzip" txresp -bodylen 43 } -start varnish v1 -cliok "param.set http_gzip_support true" -vcl+backend { sub vcl_backend_response { set beresp.do_gzip = true; set beresp.do_stream = true; } } -start client c1 { txreq -url /foobar -hdr "Accept-Encoding: gzip" rxresp expect resp.http.content-encoding == "gzip" gunzip expect resp.bodylen == 43 txreq -url /foobar rxresp expect resp.http.content-encoding == expect resp.bodylen == 43 } -run varnish-7.5.0/bin/varnishtest/tests/r00984.vtc000066400000000000000000000032421457605730600210630ustar00rootroot00000000000000varnishtest "Status other than 200,203,300,301,302,307,410 and 404 should not be cached" server s1 { rxreq txresp -status 500 rxreq txresp -status 200 -body "11" rxreq txresp -status 200 -body "ban" rxreq txresp -status 503 rxreq txresp -status 200 -body "11" rxreq txresp -status 200 -body "ban" rxreq txresp -status 502 rxreq txresp -status 200 -body "11" rxreq txresp -status 200 -body "ban" rxreq txresp -status 405 rxreq txresp -status 200 -body "11" rxreq txresp -status 200 -body "ban" rxreq txresp -status 200 -body "11" } -start varnish v1 -arg "-t 300" -vcl+backend { sub vcl_recv { if (req.url == "/ban") { ban("req.url ~ /"); } } } -start client c1 { txreq -url "/" rxresp expect resp.status == 500 txreq -url "/" rxresp expect resp.status == 200 expect resp.bodylen == 2 txreq -url "/ban" rxresp expect resp.status == 200 expect resp.bodylen == 3 txreq -url "/" rxresp expect resp.status == 503 txreq -url "/" rxresp expect resp.status == 200 expect resp.bodylen == 2 txreq -url "/ban" rxresp expect resp.status == 200 expect resp.bodylen == 3 txreq -url "/" rxresp expect resp.status == 502 txreq -url "/" rxresp expect 
resp.status == 200 expect resp.bodylen == 2 txreq -url "/ban" rxresp expect resp.status == 200 expect resp.bodylen == 3 txreq -url "/" rxresp expect resp.status == 405 txreq -url "/" rxresp expect resp.status == 200 expect resp.bodylen == 2 txreq -url "/ban" rxresp expect resp.status == 200 expect resp.bodylen == 3 txreq -url "/" rxresp expect resp.status == 200 expect resp.bodylen == 2 txreq -url "/" rxresp expect resp.status == 200 expect resp.bodylen == 2 } -run varnish-7.5.0/bin/varnishtest/tests/r01002.vtc000066400000000000000000000002761457605730600210450ustar00rootroot00000000000000varnishtest "Real relational comparisons" varnish v1 -vcl { import std; backend foo { .host = "${localhost}"; } sub vcl_recv { if (std.random(0,5) < 1.0) { return (pipe); } } } varnish-7.5.0/bin/varnishtest/tests/r01014.vtc000066400000000000000000000004671457605730600210520ustar00rootroot00000000000000varnishtest "bug 1014, Invalid C-L header with gzip" server s1 { rxreq txresp -nolen -hdr "Content-Encoding: gzip" -hdr "Content-Length:" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } } -start client c1 { txreq rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r01029.vtc000066400000000000000000000011111457605730600210430ustar00rootroot00000000000000varnishtest "#1029" server s1 { rxreq expect req.url == "/bar" txresp -gzipbody {[bar]} rxreq expect req.url == "/foo" txresp -body {

FOOBARF

} } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (bereq.url == "/foo") { set beresp.do_esi = true; set beresp.ttl = 0s; } else { set beresp.ttl = 10m; } } } -start client c1 { txreq -url "/bar" -hdr "Accept-Encoding: gzip,foo" rxresp gunzip expect resp.bodylen == 5 txreq -url "/foo" -hdr "Accept-Encoding: gzip,foo" rxresp expect resp.bodylen == 21 } -run varnish-7.5.0/bin/varnishtest/tests/r01030.vtc000066400000000000000000000023071457605730600210430ustar00rootroot00000000000000varnishtest "Test ban_lurker_sleep vs failed ban lurker" # The idea here is that the ban lurker should always wait 1 second when it # can't proceed, as per documentation and original intent. The # ban_lurker_sleep should not affect sleep-times when the lurker fails. server s1 { rxreq txresp -status 200 rxreq txresp -status 200 } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.method == "BAN") { ban("obj.http.url ~ /"); return (synth(201,"banned")); } return (hash); } sub vcl_backend_response { set beresp.http.url = bereq.url; } } -start varnish v1 -cliok "param.set ban_lurker_age 0" varnish v1 -expect bans_tests_tested == 0 delay 0.01 client c1 { txreq -req GET rxresp expect resp.status == 200 txreq -req BAN rxresp expect resp.status == 201 } -run #delay 0.1 #varnish v1 -expect bans_lurker_tests_tested == 0 #delay 1.5 varnish v1 -expect bans_lurker_tests_tested >= 1 varnish v1 -cliok "param.set ban_lurker_sleep 5.01" client c2 { txreq -req GET rxresp expect resp.status == 200 txreq -req BAN rxresp expect resp.status == 201 } -run #delay 0.1 #varnish v1 -expect bans_lurker_tests_tested == 1 #delay 1.1 varnish v1 -expect bans_lurker_tests_tested == 2 varnish-7.5.0/bin/varnishtest/tests/r01036.vtc000066400000000000000000000006161457605730600210520ustar00rootroot00000000000000varnishtest "Test case for #1036" server s1 { rxreq # Send a bodylen of 1,5M, which will exceed cache space of 1M non_fatal txresp -bodylen 1572864 } -start varnish v1 -arg "-sdefault,1M" -arg "-pgzip_level=0" -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; set beresp.do_gzip = true; } } -start client c1 { txreq rxresp expect resp.status == "503" } -run varnish-7.5.0/bin/varnishtest/tests/r01037.vtc000066400000000000000000000010531457605730600210470ustar00rootroot00000000000000varnishtest "Test case for #1037" server s1 { rxreq # Send a bodylen of 1,5M, which will exceed cache space of 1M non_fatal txresp -nolen -hdr "Transfer-encoding: chunked" chunked {} chunkedlen 500000 chunked {} chunkedlen 500000 chunked {} chunkedlen 500000 chunkedlen 0 } -start varnish v1 -arg "-sdefault,1M" -arg "-pgzip_level=0" -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; set beresp.do_gzip = true; } } -start client c1 { txreq rxresp expect resp.status == "503" } -run varnish-7.5.0/bin/varnishtest/tests/r01038.vtc000066400000000000000000000025271457605730600210570ustar00rootroot00000000000000varnishtest "ticket 1038 regression test" server s1 { rxreq txresp -nolen -hdr "Transfer-encoding: chunked" chunked {} chunked {} chunked {} chunked {} chunked {} chunked {} chunked {} chunked {} chunked {} chunked {} chunkedlen 0 rxreq expect req.url == "/xxx0.htm" txresp -body "foo0" rxreq expect req.url == "/xxx1.htm" txresp -body "foo1" rxreq expect req.url == "/xxx2.htm" txresp -body "foo2" rxreq expect req.url == "/xxx3.htm" txresp -body "foo3" rxreq expect req.url == "/xxx4.htm" txresp -body "foo4" rxreq expect req.url == "/xxx5.htm" txresp -body "foo5" rxreq expect req.url == "/xxx6.htm" txresp -body 
"foo6" rxreq expect req.url == "/xxx7.htm" txresp -body "foo7" rxreq expect req.url == "/xxx8.htm" txresp -body "foo8" } -start varnish v1 -arg "-p workspace_backend=10k -p vsl_buffer=4k" -vcl+backend { sub vcl_backend_response { if (bereq.url == "/") { set beresp.do_esi = true; } } } -start client c1 { txreq rxresp expect resp.bodylen == 42 txreq rxresp expect resp.bodylen == 42 } -run varnish v1 -expect losthdr == 0 varnish-7.5.0/bin/varnishtest/tests/r01068.vtc000066400000000000000000000005561457605730600210620ustar00rootroot00000000000000varnishtest "Bug 1068 restart on hit in vcl_deliver causes segfault" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_deliver { if (req.http.x-restart && req.restarts == 0) { return (restart); } } } -start client c1 { txreq rxresp expect resp.status == 200 txreq -hdr "x-restart: true" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r01073.vtc000066400000000000000000000016721457605730600210560ustar00rootroot00000000000000varnishtest "Test that hash_always_miss also implies hash_ignore_busy. Ticket #1073." barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq barrier b1 sync barrier b2 sync delay 1 txresp -hdr "Server: 1" } -start server s2 { rxreq barrier b2 sync txresp -hdr "Server: 2" } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.http.x-hash-always-miss == "1") { set req.hash_always_miss = true; } } sub vcl_backend_fetch { if (bereq.http.x-client == "1") { set bereq.backend = s1; } if (bereq.http.x-client == "2") { set bereq.backend = s2; } } } -start client c1 { txreq -url "/" -hdr "x-client: 1" rxresp expect resp.status == 200 expect resp.http.Server == "1" } -start client c2 { barrier b1 sync txreq -url "/" -hdr "x-client: 2" -hdr "x-hash-always-miss: 1" txreq -url "/" -hdr "x-client: 2" rxresp expect resp.status == 200 expect resp.http.Server == "2" } -start client c1 -wait client c2 -wait varnish-7.5.0/bin/varnishtest/tests/r01086.vtc000066400000000000000000000024761457605730600210650ustar00rootroot00000000000000varnishtest "#1086 junk after gzip from backend and streaming enabled" barrier b1 cond 2 server s1 { # This one will be streamed rxreq txresp -nolen \ -hdr "Content-Encoding: gzip" \ -hdr "Transfer-Encoding: Chunked" \ -hdr "Set-Cookie: FOO" send "14\r\n" # An empty gzip file: sendhex "1f8b" sendhex "08" sendhex "00" sendhex "00000000" sendhex "00" sendhex "03" sendhex "0300" sendhex "00000000" sendhex "00000000" send "\r\n" barrier b1 sync chunked "FOOBAR" non_fatal chunkedlen 0 } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (beresp.http.set-cookie == "BAR") { set beresp.do_stream = false; } } } -start client c1 { txreq -hdr "Cookie: FOO" rxresphdrs expect resp.status == 200 barrier b1 sync expect_close } -run delay .1 server s1 -wait { fatal # This one will not be streamed rxreq txresp -nolen \ -hdr "Content-Encoding: gzip" \ -hdr "Transfer-Encoding: Chunked" \ -hdr "Set-Cookie: BAR" send "14\r\n" # An empty gzip file: sendhex "1f8b" sendhex "08" sendhex "00" sendhex "00000000" sendhex "00" sendhex "03" sendhex "0300" sendhex "00000000" sendhex "00000000" send "\r\n" delay .2 chunked "FOOBAR" non_fatal chunkedlen 0 } -start client c1 { txreq -hdr "Cookie: BAR" rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r01092.vtc000066400000000000000000000012641457605730600210540ustar00rootroot00000000000000varnishtest "Test case for #1092 - esi:remove and comments" server s1 { rxreq txresp -body { Keep-1 Remove-1 Remove-4444 Keep-4444 } } 
-start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start logexpect l1 -v v1 -g raw { expect * * Fetch_Body expect 0 * ESI_xmlerror {^ERR after 66 ESI 1.0 Nested After include } rxreq expect req.url == "/foo" txresp -body { Before include After include } rxreq expect req.url == "/body" expect req.http.host == "bozz" txresp -body BAR } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { txreq rxresp expect resp.bodylen == 49 } -run varnish v1 -cliok "param.set feature +esi_ignore_https" client c1 { txreq -url /foo rxresp expect resp.bodylen == 52 } -run varnish-7.5.0/bin/varnishtest/tests/r01337.vtc000066400000000000000000000023351457605730600210560ustar00rootroot00000000000000varnishtest "Bogus backend status" server s1 { rxreq expect req.url == /small txresp -status 99 accept rxreq expect req.url == /low txresp -status 099 accept rxreq expect req.url == /high txresp -status 1000 accept rxreq expect req.url == /X txresp -status X99 accept rxreq expect req.url == /Y txresp -status 9X9 accept rxreq expect req.url == /Z txresp -status 99X accept rxreq expect req.url == /x txresp -status %99 accept rxreq expect req.url == /y txresp -status 9%9 accept rxreq expect req.url == /z txresp -status 99% } -start varnish v1 -vcl+backend {} -start client c1 { txreq -url /small rxresp expect resp.status == 503 } -run client c1 { txreq -url /low rxresp expect resp.status == 503 } -run client c1 { txreq -url /high rxresp expect resp.status == 503 } -run client c1 { txreq -url /X rxresp expect resp.status == 503 } -run client c1 { txreq -url /Y rxresp expect resp.status == 503 } -run client c1 { txreq -url /Z rxresp expect resp.status == 503 } -run client c1 { txreq -url /x rxresp expect resp.status == 503 } -run client c1 { txreq -url /y rxresp expect resp.status == 503 } -run client c1 { txreq -url /z rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r01349.vtc000066400000000000000000000020551457605730600210600ustar00rootroot00000000000000varnishtest "Exact matching for varnishadm backend.set_health" server s1 -repeat 2 { rxreq txresp -hdr "Backend: b1" } -start server s2 -repeat 2 { rxreq txresp -hdr "Backend: b" } -start varnish v1 -vcl { backend b1 { .host = "${s1_addr}"; .port = "${s1_port}"; } backend b { .host = "${s2_addr}"; .port = "${s2_port}"; } sub vcl_recv { return(pass); } sub vcl_backend_fetch { if (bereq.http.backend == "b1") { set bereq.backend = b1; } else { set bereq.backend = b; } } } -start varnish v1 -cliok "backend.list b" client c1 { txreq -hdr "Backend: b1" rxresp expect resp.status == 200 expect resp.http.backend == "b1" txreq -hdr "Backend: b" rxresp expect resp.status == 200 expect resp.http.backend == "b" } -run varnish v1 -cliok "backend.set_health b sick" client c1 { txreq -hdr "Backend: b1" rxresp expect resp.status == 200 expect resp.http.backend == "b1" txreq -hdr "Backend: b" rxresp expect resp.status == 503 } -run varnish v1 -clierr 106 "backend.set_health b(1.2.3.4:) healthy" varnish-7.5.0/bin/varnishtest/tests/r01350.vtc000066400000000000000000000010051457605730600210420ustar00rootroot00000000000000varnishtest "IMS + Vary panic" server s1 { rxreq txresp -hdr "Vary: Accept-Encoding" -hdr "Last-Modified: Wed, 10 May 2006 07:22:58 GMT" -body "IMS" rxreq txresp -status 304 -body "" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 2s; set beresp.grace = 1m; set beresp.keep = 1m; return(deliver); } } -start client c1 { txreq 
rxresp expect resp.status == 200 delay 3 txreq rxresp expect resp.status == 200 delay 1 txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r01355.vtc000066400000000000000000000026721457605730600210620ustar00rootroot00000000000000varnishtest "Test ESI ignoring BOMs" server s1 { rxreq expect req.url == /1 txresp -body "\xeb\xbb\xbf blabla" rxreq expect req.url == /2 txresp -body "\xeb\xbb\xbf blabla" rxreq expect req.url == /3 txresp -body "\xeb\xbb\xbf\xeb\xbb\xbf blabla" rxreq expect req.url == /4 txresp -body "\xeb\xbc blabla" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start logexpect l1 -v v1 -g raw { expect * * Fetch_Body expect 0 = ESI_xmlerror {^No ESI processing, first char not '<' but BOM. .See feature esi_remove_bom.$} expect 0 = BackendClose # XXX another logexpect weirdness - why can't we catch the second occurrence? # expect * * Fetch_Body # expect 0 = ESI_xmlerror {^No ESI processing, first char not '<' but BOM. .See feature esi_remove_bom.$} # expect 0 = BackendClose } -start client c1 { # No ESI processing txreq -url /1 rxresp expect resp.bodylen == 47 } -run logexpect l1 -wait varnish v1 -cliok "param.set feature +esi_remove_bom" client c1 { # BOM removed, ESI processing txreq -url /2 rxresp expect resp.bodylen == 13 } -run client c1 { # BOMs removed, ESI processing txreq -url /3 rxresp expect resp.bodylen == 13 } -run client c1 { # Not a BOM, no ESI processing txreq -url /4 rxresp expect resp.bodylen == 46 } -run varnish-7.5.0/bin/varnishtest/tests/r01356.vtc000066400000000000000000000010611457605730600210520ustar00rootroot00000000000000varnishtest "#1356, req.body failure" server s1 { rxreqhdrs expect_close } -start varnish v1 -vcl+backend { } -start client c1 { txreq -req POST -nolen -hdr "Transfer-Encoding: carrier-pigeon" rxresp expect resp.status == 400 } -run client c1 { txreq -req POST -nolen -hdr "Content-Length: carrier-pigeon" rxresp expect resp.status == 400 } -run client c1 { txreq -req POST -nolen -hdr "Content-Length: 56" } -run # Check that varnishd still runs server s1 { rxreq txresp } -start client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r01367.vtc000066400000000000000000000005001457605730600210510ustar00rootroot00000000000000varnishtest "blank GET" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_synth { return (restart); } } -start client c1 { send "GET \nHost: example.com\n\n" rxresp expect resp.status == 400 } -run client c1 { txreq -hdr "Expect: Santa-Claus" rxresp expect resp.status == 417 } -run varnish-7.5.0/bin/varnishtest/tests/r01391.vtc000066400000000000000000000007641457605730600210620ustar00rootroot00000000000000varnishtest "client abandoning hit-for-miss" barrier b1 cond 2 server s1 { non_fatal rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" -hdr "Set-Cookie: foo=bar" chunked "foo" barrier b1 sync chunked "bar" delay .1 chunkedlen 64 delay .1 chunkedlen 64 chunkedlen 0 } -start varnish v1 -vcl+backend { } -start client c1 { txreq rxresphdrs rxchunk barrier b1 sync } -run delay 2 server s1 { rxreq txresp } -start client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r01395.vtc000066400000000000000000000004241457605730600210570ustar00rootroot00000000000000varnishtest "Test vcl_backend_error is called if vcl_backend_fetch fails" varnish v1 -vcl { backend default { .host = "${bad_backend}"; } sub vcl_backend_error { set beresp.status = 299; } } -start 
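# ${bad_backend} is unreachable, so the fetch fails and vcl_backend_error
# runs; the client below should therefore see the 299 status set there.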
client c1 { txreq -url "/" rxresp expect resp.status == 299 } -run varnish-7.5.0/bin/varnishtest/tests/r01398.vtc000066400000000000000000000006001457605730600210560ustar00rootroot00000000000000varnishtest "ticket 1398" varnish v1 -syntax 4.0 -vcl { backend foo { .host = "${bad_backend}"; } sub vcl_backend_response { set beresp.http.X-BE-Name = beresp.backend.name; set beresp.http.X-BE-IP = beresp.backend.ip; } } -start client c1 { txreq rxresp expect resp.status == 503 expect resp.http.X-BE-Name == "" expect resp.http.X-BE-IP == "" } -run varnish-7.5.0/bin/varnishtest/tests/r01399.vtc000066400000000000000000000017511457605730600210670ustar00rootroot00000000000000varnishtest "1399 race issue with bg-fetches" barrier b1 cond 2 server s1 { rxreq expect req.http.Is-bg == "false" txresp -bodylen 1 barrier b1 sync # Delay here, to stall the bgfetch for a while, to give the req time to finish # delivery and cleanup up struct req delay 2 # Shut the connection to force the bgfetch to retry close accept # And see if it has all its marbles still... rxreq expect req.url == "/" expect req.http.Is-bg == "true" txresp -bodylen 2 } -start varnish v1 -vcl+backend { sub vcl_backend_fetch { set bereq.http.Is-bg = bereq.is_bgfetch; } sub vcl_backend_response { set beresp.do_stream = false; set beresp.ttl = 2s; set beresp.grace = 1800s; } } -start client c1 { txreq rxresp expect resp.http.content-length == 1 } -run # 3 is longer than the ttl, in order to kick off the bgfetch delay 3 client c1 { txreq rxresp expect resp.http.content-length == 1 barrier b1 sync } -run # Wait for the server to not explode server s1 -wait varnish-7.5.0/bin/varnishtest/tests/r01401.vtc000066400000000000000000000015711457605730600210470ustar00rootroot00000000000000varnishtest "too many retries" server s1 { rxreq expect req.url == /1 txresp -hdr "foo: bar" -bodylen 5 accept rxreq expect req.url == /1 txresp -hdr "foo: foof" -hdr "Connection: close" -bodylen 7 accept rxreq expect req.url == /2 txresp -hdr "foo: bar" -bodylen 10 accept rxreq expect req.url == /2 txresp -hdr "foo: bar" -bodylen 11 accept rxreq expect req.url == /2 txresp -hdr "foo: bar" -bodylen 12 accept rxreq expect req.url == /2 txresp -hdr "foo: bar" -bodylen 13 accept rxreq expect req.url == /2 txresp -hdr "foo: bar" -bodylen 4 } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (beresp.http.foo == "bar") { return (retry); } } } -start client c1 { txreq -url /1 rxresp expect resp.http.foo == foof expect resp.bodylen == 7 } -run delay .1 client c1 { txreq -url /2 rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r01404.vtc000066400000000000000000000004641457605730600210520ustar00rootroot00000000000000varnishtest "Test that 304 does not send Content-Length" server s1 { rxreq txresp -hdr {ETag: "foo"} -body "11111\n" } -start varnish v1 -vcl+backend { } -start client c1 { txreq -hdr {If-None-Match: "foo"} rxresp -no_obj expect resp.status == 304 expect resp.http.Content-Length == } -run varnish-7.5.0/bin/varnishtest/tests/r01406.vtc000066400000000000000000000011101457605730600210410ustar00rootroot00000000000000varnishtest "#1406 empty header" server s1 { rxreq txresp } -start varnish v1 -arg "-p vcc_feature=+allow_inline_c" -vcl+backend { import std ; C{ static const struct gethdr_s VGC_HDR_REQ_foo = { HDR_REQ, "\020X-CUSTOM-HEADER:"}; }C sub vcl_recv { C{ VRT_SetHdr( ctx, &VGC_HDR_REQ_foo, 0, &(struct strands){.n = 0} ); }C } sub vcl_deliver { if (req.http.X-CUSTOM-HEADER) { set resp.http.foo = "yes"; } else { set resp.http.foo 
= "no"; } } } -start client c1 { txreq rxresp expect resp.http.foo == yes } -run varnish-7.5.0/bin/varnishtest/tests/r01417.vtc000066400000000000000000000003451457605730600210540ustar00rootroot00000000000000varnishtest "vcl_backend_response abandon" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_backend_response { return (abandon); } } -start client c1 { txreq rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r01419.vtc000066400000000000000000000014341457605730600210560ustar00rootroot00000000000000varnishtest "Make sure banlurker skips busy objects" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq send "HTTP/1.0 200 OK\r\n" barrier b1 sync send "Foobar: blaf\r\n" send "Content-Length: 10\r\n" send "\r\n\r\n" send "abcde" barrier b2 sync send "abcdefghij" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } } -start varnish v1 -cliok {param.set debug +lurker} varnish v1 -cliok {param.set ban_lurker_age 1} varnish v1 -cliok {ban.list} client c1 { txreq rxresp expect resp.status == 200 expect resp.http.foobar == blaf } -start barrier b1 sync varnish v1 -cliok {ban.list} varnish v1 -cliok {ban obj.http.goo == bar} varnish v1 -cliok {ban.list} delay 2 varnish v1 -cliok {ban.list} barrier b2 sync client c1 -wait varnish-7.5.0/bin/varnishtest/tests/r01441.vtc000066400000000000000000000013751457605730600210550ustar00rootroot00000000000000varnishtest "Session grouping on ESI" server s1 { rxreq expect req.url == "/" txresp -body "" rxreq expect req.url == "/include" txresp -body "included" } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (bereq.url == "/") { set beresp.do_esi = true; } } } -start -cliok "param.set debug +syncvsl" logexpect l1 -v v1 -g session { expect 0 1000 Begin {sess 0 HTTP/1} expect * = End expect 0 1001 Begin {req 1000 rxreq} expect * = End expect 0 1002 Begin {bereq 1001 fetch} expect * = End expect 0 1003 Begin {req 1001 esi} expect * = End expect 0 1004 Begin {bereq 1003 fetch} expect * = End } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r01442.vtc000066400000000000000000000011501457605730600210450ustar00rootroot00000000000000varnishtest "ESI + IMS" server s1 { rxreq txresp -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -body {bla bla bla} rxreq expect req.http.if-modified-since == "Thu, 26 Jun 2008 12:00:01 GMT" txresp -status "304" -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" -nolen } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; set beresp.grace = 0s; set beresp.keep = 60s; set beresp.ttl = 1s; } } -start client c1 { txreq rxresp expect resp.bodylen == 13 delay 3 txreq rxresp expect resp.bodylen == 13 } -run varnish-7.5.0/bin/varnishtest/tests/r01468.vtc000066400000000000000000000016011457605730600210560ustar00rootroot00000000000000varnishtest "#1468 - freeing failed obj" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq expect req.url == "/1" txresp -nolen -hdr "Transfer-Encoding: chunked" chunked {} barrier b1 sync close accept rxreq expect req.url == "/2" txresp -nolen -hdr "Transfer-Encoding: chunked" chunked {} barrier b2 sync } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/2") { return (pass); } } sub vcl_backend_response { set beresp.ttl = 5m; } } -start # Test the regular cached content case client c1 { txreq -url "/1" rxresphdrs expect resp.status == 200 rxchunk barrier b1 sync expect_close } -run 
# Test the pass from vcl_recv case client c1 { txreq -url "/2" rxresphdrs expect resp.status == 200 rxchunk barrier b2 sync expect_close } -run # Delay to allow expiry thread to do it's job delay 1 varnish v1 -expect n_object == 0 varnish-7.5.0/bin/varnishtest/tests/r01478.vtc000066400000000000000000000017311457605730600210630ustar00rootroot00000000000000varnishtest "panic due to hash_ignore_busy" barrier b1 cond 2 barrier b2 cond 2 barrier b3 cond 2 barrier b4 cond 2 server s1 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 delay .5 barrier b1 sync delay .5 chunkedlen 10 delay .5 barrier b2 sync delay .5 chunkedlen 10 delay .5 barrier b3 sync delay .5 chunkedlen 10 delay .5 barrier b4 sync delay .5 chunkedlen 10 delay .5 chunkedlen 0 } -start varnish v1 -vcl+backend { sub vcl_recv { set req.hash_ignore_busy = true; return(hash); } sub vcl_backend_response { # we assume streaming for all objects as default: set beresp.do_stream = true; set beresp.ttl = 2s; } } -start client c1 { txreq -hdr "client: c1" rxresp } -start barrier b1 sync client c2 { txreq -hdr "client: c2" barrier b2 sync rxresp } -start barrier b3 sync client c3 { txreq -hdr "client: c3" barrier b4 sync rxresp } -start client c1 -wait client c2 -wait client c3 -wait varnish-7.5.0/bin/varnishtest/tests/r01485.vtc000066400000000000000000000010761457605730600210630ustar00rootroot00000000000000varnishtest "#1485: Wrong response reason phrase" server s1 { rxreq txresp -hdr {Etag: "foo"} -body "1" rxreq expect req.http.If-None-Match == {"foo"} txresp -status 304 -reason "Not Modified" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 1ms; set beresp.grace = 0s; set beresp.keep = 1h; } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 1 expect resp.reason == "OK" delay 0.1 txreq rxresp expect resp.status == 200 expect resp.bodylen == 1 expect resp.reason == "OK" } -run varnish-7.5.0/bin/varnishtest/tests/r01490.vtc000066400000000000000000000015601457605730600210550ustar00rootroot00000000000000varnishtest "#1490 - thread destruction" server s1 { } -start varnish v1 \ -arg "-p debug=+syncvsl" \ -arg "-p vsl_mask=+WorkThread" \ -arg "-p thread_pool_min=5" \ -arg "-p thread_pool_max=6" \ -arg "-p thread_pools=1" \ -arg "-p thread_pool_timeout=10" \ -vcl+backend {} varnish v1 -start # we might have over-bred delay 11 varnish v1 -expect threads == 5 logexpect l1 -v v1 -g raw { expect * 0 WorkThread {^\S+ start$} expect * 0 WorkThread {^\S+ end$} } -start varnish v1 -cliok "param.set thread_pool_min 6" # Have to wait longer than thread_pool_timeout delay 11 varnish v1 -expect threads == 6 varnish v1 -cliok "param.set thread_pool_min 5" varnish v1 -cliok "param.set thread_pool_max 5" # Have to wait longer than thread_pool_timeout delay 11 varnish v1 -expect threads == 5 # Use logexpect to see that the thread actually exited logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r01493.vtc000066400000000000000000000005001457605730600210510ustar00rootroot00000000000000varnishtest "restart in vcl_purge" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.method == "PURGE") { return (purge); } } sub vcl_purge { set req.method = "GET"; return (restart); } } -start client c1 { txreq -req PURGE rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r01494.vtc000066400000000000000000000006511457605730600210610ustar00rootroot00000000000000varnishtest "Test retry in be_resp w/conn: close" server s1 { rxreq txresp 
-hdr "Connection: close" -bodylen 3 expect_close accept rxreq txresp -hdr "Connection: close" -bodylen 5 expect_close } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (bereq.retries == 0) { return(retry); } } } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.bodylen == 5 } -run varnish-7.5.0/bin/varnishtest/tests/r01498.vtc000066400000000000000000000004741457605730600210700ustar00rootroot00000000000000varnishtest "backend name VCC crash" varnish v1 -vcl { vcl 4.0; import directors; backend s1 { .host = "${localhost}"; .port = "80"; } sub vcl_init { new static = directors.random(); static.add_backend(s1, 100.0); } sub vcl_backend_fetch { set bereq.backend = static.backend(); } } varnish-7.5.0/bin/varnishtest/tests/r01499.vtc000066400000000000000000000010311457605730600210570ustar00rootroot00000000000000varnishtest "#1499 - objcore ref leak on IMS update" server s1 { rxreq txresp -hdr {Etag: "foo"} -body {1} rxreq expect req.http.if-none-match == {"foo"} txresp -hdr "X-Resp: 2" -body {12} } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 0.001s; set beresp.grace = 0s; set beresp.keep = 2s; } } -start client c1 { txreq rxresp expect resp.bodylen == 1 delay 0.5 txreq rxresp expect resp.http.x-resp == "2" expect resp.bodylen == 2 } -run delay 3 varnish v1 -expect n_object == 0 varnish-7.5.0/bin/varnishtest/tests/r01501.vtc000066400000000000000000000004001457605730600210360ustar00rootroot00000000000000varnishtest "director fails to pick backend" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import vtc; sub vcl_recv { set req.backend_hint = vtc.no_backend(); } } -start client c1 { txreq rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r01504.vtc000066400000000000000000000006101457605730600210440ustar00rootroot00000000000000varnishtest "unreferenced or null acls" varnish v1 -arg "-p vcc_feature=-err_unref" -vcl { import vtc; backend s1 { .host = "${bad_backend}"; } acl foo { "${localhost}"; } acl bar { "${localhost}"; } sub vcl_recv { if (vtc.no_ip() ~ bar) { return (synth(200)); } } } -start client c1 { txreq rxresp expect resp.status == 503 expect resp.body ~ "VCL failed" } -run varnish-7.5.0/bin/varnishtest/tests/r01506.vtc000066400000000000000000000030411457605730600210470ustar00rootroot00000000000000varnishtest "range requests on streamed response" barrier b1 cond 2 -cyclic server s0 { rxreq txresp -nolen \ -hdr "Transfer-Encoding: chunked" \ -hdr "Connection: close" send "11\r\n0_23456789abcdef\n" send "11\r\n1_23456789abcdef\n" send "11\r\n2_23456789abcdef\n" send "11\r\n3_23456789abcdef\n" barrier b1 sync send "11\r\n4_23456789abcdef\n" send "11\r\n5_23456789abcdef\n" send "11\r\n6_23456789abcdef\n" send "11\r\n7_23456789abcdef\n" chunkedlen 0 } -dispatch varnish v1 -vcl+backend {} -start varnish v1 -cliok "param.set debug +syncvsl" client c1 { txreq -url /1 -hdr "Range: bytes=17-101" rxresphdrs expect resp.status == 206 barrier b1 sync rxrespbody expect resp.bodylen == 85 # We cannot do tail-ranges when streaming txreq -url /2 -hdr "Range: bytes=-10" rxresphdrs expect resp.status == 200 expect resp.http.Transfer-Encoding == chunked barrier b1 sync rxrespbody expect resp.bodylen == 136 # We cannot do open-ranges when streaming txreq -url /3 -hdr "Range: bytes=17-" rxresphdrs expect resp.status == 200 expect resp.http.Transfer-Encoding == chunked barrier b1 sync rxrespbody expect resp.bodylen == 136 # Handles out of bounds range txreq -url /4 -hdr "Range: bytes=102-200" 
rxresphdrs expect resp.status == 206 barrier b1 sync rxrespbody expect resp.bodylen == 34 # Keeps working after short response txreq -url /5 -hdr "Range: bytes=17-101" rxresphdrs expect resp.status == 206 barrier b1 sync rxrespbody expect resp.bodylen == 85 } -run varnish v1 -expect sc_range_short == 0 varnish-7.5.0/bin/varnishtest/tests/r01510.vtc000066400000000000000000000005441457605730600210470ustar00rootroot00000000000000varnishtest "Duplicate object names" varnish v1 -errvcl {Instance 'first' redefined.} { import debug; sub vcl_init { new first = debug.obj("FOO"); new first = debug.obj("BAH"); } } varnish v1 -errvcl {Name 'first' already defined.} { import debug; backend first { .host = "${localhost}"; } sub vcl_init { new first = debug.obj("FOO"); } } varnish-7.5.0/bin/varnishtest/tests/r01512.vtc000066400000000000000000000015731457605730600210540ustar00rootroot00000000000000varnishtest "Regression test for #1512" # First test bereq changes across v_b_r and v_b_f server s1 { rxreq txresp -status 700 } -start varnish v1 -vcl+backend { sub vcl_backend_fetch { if (bereq.http.x-abandon == "1") { return (abandon); } } sub vcl_backend_response { if (beresp.status == 700) { set bereq.http.x-abandon = "1"; return (retry); } } sub vcl_synth { set resp.status = 701; } } -start client c1 { txreq rxresp expect resp.status == 701 } -run # Then across v_b_e and v_b_f varnish v1 -vcl { backend bad { .host = "${bad_backend}"; } sub vcl_backend_fetch { set bereq.backend = bad; if (bereq.http.x-abandon == "2") { return (abandon); } } sub vcl_backend_error { set bereq.http.x-abandon = "2"; return (retry); } sub vcl_synth { set resp.status = 702; } } client c1 { txreq rxresp expect resp.status == 702 } -run varnish-7.5.0/bin/varnishtest/tests/r01518.vtc000066400000000000000000000010351457605730600210530ustar00rootroot00000000000000varnishtest "304 body handling with ESI" server s1 { rxreq txresp -hdr "Last-Modified: Wed, 04 Jun 2014 08:48:52 GMT" \ -body {} rxreq expect req.url == "/bar" txresp -body {} } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { txreq rxresp expect resp.bodylen == 12 expect resp.status == 200 txreq -hdr "If-Modified-Since: Wed, 04 Jun 2014 08:48:52 GMT" rxresp expect resp.status == 304 expect resp.bodylen == 0 } -run varnish-7.5.0/bin/varnishtest/tests/r01524.vtc000066400000000000000000000004761457605730600210600ustar00rootroot00000000000000varnishtest "pipe of chunked request" server s1 { rxreq expect req.bodylen == 3 txresp -body "ABCD" } -start varnish v1 -vcl+backend { } -start client c1 { txreq -req MAGIC -nolen -hdr "Transfer-encoding: chunked" chunked {BLA} chunkedlen 0 rxresp expect resp.status == 200 expect resp.bodylen == 4 } -run varnish-7.5.0/bin/varnishtest/tests/r01532.vtc000066400000000000000000000005061457605730600210510ustar00rootroot00000000000000varnishtest "Incorrect representation when using reals - #1532" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import std; sub vcl_deliver { set resp.http.x-foo = std.real2time(1140618699.00, now); } } -start client c1 { txreq rxresp expect resp.http.x-foo == "Wed, 22 Feb 2006 14:31:39 GMT" } -run varnish-7.5.0/bin/varnishtest/tests/r01557.vtc000066400000000000000000000004611457605730600210600ustar00rootroot00000000000000varnishtest "Test case for #1557" server s1 { rxreq expect req.url == "/?foobar=2" txresp } -start varnish v1 -vcl+backend { sub vcl_recv { set req.url = regsuball(req.url, "(?<=[&\?])(foo|bar)=[^&]+(?:&|$)", ""); } } -start 
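# The look-behind keeps the leading "?" or "&" in place while each matched
# parameter (and its trailing "&", if any) is stripped, so the request below
# should reach the backend rewritten from "/?foo=0&bar=1&foobar=2" to
# "/?foobar=2".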
client c1 { txreq -url "/?foo=0&bar=1&foobar=2" rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r01562.vtc000066400000000000000000000015021457605730600210510ustar00rootroot00000000000000varnishtest "retrying a short client body read should not panic varnish" server s1 { non_fatal rxreq txresp -status 200 -hdr "Foo: BAR" -body "1234" } -start server s2 { non_fatal rxreq txresp -status 200 -hdr "Foo: Foo" -body "56" } -start varnish v1 -cliok "param.set vcc_feature +allow_inline_c" -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_fetch { if (bereq.retries >= 1) { set bereq.backend = s2; } else { set bereq.backend = s1; } } sub vcl_backend_error { return (retry); } } -start varnish v1 -cliok "param.set debug +syncvsl" client c1 { txreq -req "POST" -nolen -hdr "Content-Length: 10000" -bodylen 9999 } -run delay .4 server s1 { rxreq txresp -status 200 -bodylen 11 } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 11 } -run varnish-7.5.0/bin/varnishtest/tests/r01566.vtc000066400000000000000000000002641457605730600210610ustar00rootroot00000000000000varnishtest "escape issue in regexp" varnish v1 -vcl { backend b1 { .host = "${localhost}"; } sub vcl_recv { set req.url = regsuball(req.url, "\??(p|pi)=.*?(&|$)", ""); } } varnish-7.5.0/bin/varnishtest/tests/r01569.vtc000066400000000000000000000004131457605730600210600ustar00rootroot00000000000000varnishtest "symbol lookup order issue" varnish v1 -errvcl {Name 'debug' already defined.} { vcl 4.0; import debug; backend debug { .host = "${localhost}"; .port = "80"; } sub debug { set req.backend_hint = debug; } sub vcl_recv { call debug; } } varnish-7.5.0/bin/varnishtest/tests/r01575.vtc000066400000000000000000000016711457605730600210640ustar00rootroot00000000000000varnishtest "#1575 - random director exhaust backend list" # Add 5 backends to a random director, with the 5th having very low weight. # Mark the first 4 sick, and make sure that the 5th will be selected. server s1 { rxreq txresp } -start server s2 { rxreq txresp } -start server s3 { rxreq txresp } -start server s4 { rxreq txresp } -start server s5 { rxreq txresp } -start varnish v1 -vcl+backend { import directors; sub vcl_init { new rd = directors.random(); rd.add_backend(s1, 10000); rd.add_backend(s2, 10000); rd.add_backend(s3, 10000); rd.add_backend(s4, 10000); rd.add_backend(s5, 1); } sub vcl_backend_fetch { set bereq.backend = rd.backend(); } } -start varnish v1 -cliok "backend.set_health s1 sick" varnish v1 -cliok "backend.set_health s2 sick" varnish v1 -cliok "backend.set_health s3 sick" varnish v1 -cliok "backend.set_health s4 sick" client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r01576.vtc000066400000000000000000000027151457605730600210650ustar00rootroot00000000000000varnishtest "Test recursive regexp's fail before consuming all the stack" # Use 64bit defaults also on 32bit varnish v1 -cliok "param.set workspace_client 64k" varnish v1 -cliok "param.set http_req_size 32k" # If you want to play around, uncomment the next lines and adjust # the length of the ABAB strings below to suit your needs. # Better yet: Rewrite your regexps to avoid this madness. 
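# The "^/(A{1,2}B)+$" pattern used in vcl_synth backtracks heavily on long
# AB...AB inputs, so the second, 8192-repetition request is expected to hit
# one of the configured match/depth/JIT limits instead of exhausting the
# worker stack.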
# Tweaks you may add to this test: # varnish v1 -cliok "param.set thread_pool_stack 48k" # varnish v1 -cliok "param.set pcre2_match_limit 100" # varnish v1 -cliok "param.set pcre2_depth_limit 10000" # varnish v1 -cliok "param.set pcre2_jit_compilation off" # Approximate formula for FreeBSD/amd64: # pcre2_depth_limit = thread_pool_stack * 2 - 9 varnish v1 -vcl+backend { backend proforma none; sub vcl_recv { return (synth(200)); } sub vcl_synth { # shamelessly copied from "bugzilla77 at gmail dot com" # https://bugs.php.net/bug.php?id=70110 if (req.url ~ "^/(A{1,2}B)+$") { set resp.http.found = "1"; } } } -start # This should succeed with default params and JIT/no-JIT client c1 { txreq -url "/ABAABABAABABABAB" rxresp expect resp.status == 200 expect resp.http.found == 1 } -run logexpect l1 -v v1 { expect * * VCL_Error "(match|depth|recursion|JIT stack) limit" } -start # This should fail with default params and JIT/no-JIT client c1 { txreq -url "/${string,repeat,8192,AB}" rxresp expect resp.status == 500 expect_close } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r01577.vtc000066400000000000000000000014061457605730600210620ustar00rootroot00000000000000varnishtest "#1577: reqbody and synth from recv" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/1") { return (synth(200, "OK")); } } sub vcl_synth { set resp.http.x-url = req.url; } sub vcl_deliver { set resp.http.x-url = req.url; } } -start client c1 { # Send a body that happens to be a valid HTTP request # This one is answered by synth() txreq -url "/1" -body "GET /foo HTTP/1.1\r\n\r\n" rxresp expect resp.status == 200 expect resp.http.x-url == "/1" # Make sure that a second request on the same connection goes through # and that the body of the previous one isn't interpreted as the # next request txreq -url "/2" rxresp expect resp.status == 200 expect resp.http.x-url == "/2" } -run varnish-7.5.0/bin/varnishtest/tests/r01578.vtc000066400000000000000000000013651457605730600210670ustar00rootroot00000000000000varnishtest "max-age and age" server s1 { rxreq txresp -hdr "Cache-Control: max-age=23" -hdr "Age: 4" -bodylen 40 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.http.x-ttl = beresp.ttl; } sub vcl_hit { set req.http.x-remaining-ttl = obj.ttl; } sub vcl_deliver { set resp.http.x-remaining-ttl = req.http.x-remaining-ttl; } } -start client c1 { txreq rxresp expect resp.bodylen == 40 expect resp.http.x-ttl == 19.000 expect resp.http.Age == 4 delay 2 txreq rxresp expect resp.bodylen == 40 expect resp.http.x-ttl == 19.000 expect resp.http.x-remaining-ttl <= 17.000 delay 2 txreq rxresp expect resp.bodylen == 40 expect resp.http.x-ttl == 19.000 expect resp.http.x-remaining-ttl <= 15.000 } -run varnish-7.5.0/bin/varnishtest/tests/r01581.vtc000066400000000000000000000005031457605730600210520ustar00rootroot00000000000000varnishtest "Duplcate SLT_Begin" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { } -start logexpect l1 -v v1 -g request { expect 0 1001 Begin expect * = ReqStart expect * = End } -start client c1 { delay 0.1 send "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n" rxresp } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r01598.vtc000066400000000000000000000007321457605730600210660ustar00rootroot00000000000000varnishtest "#1598 - Missing ':' in server response headers" server s1 { rxreq txresp -hdr "ETag: \"tag\"" -hdr "foo" accept rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 1s; set 
beresp.grace = 0s; set beresp.keep = 60s; } } -start varnish v1 -cliok "param.set debug +syncvsl" client c1 { txreq rxresp expect resp.status == 503 } -run delay .1 client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r01602.vtc000066400000000000000000000012071457605730600210460ustar00rootroot00000000000000varnishtest "Test case for #1602" server s1 { rxreq expect req.url == "/bar" txresp -gzipbody {} rxreq expect req.url == "/foo" txresp -hdr "Content-Encoding: gzip" rxreq expect req.url == "/baz" txresp -gzipbody {} rxreq expect req.url == "/quux" txresp -status 204 -hdr "Content-Encoding: gzip" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { txreq -url /bar -hdr "Accept-Encoding: gzip" rxresp expect resp.status == 200 txreq -url /baz -hdr "Accept-Encoding: gzip" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r01608.vtc000066400000000000000000000004431457605730600210550ustar00rootroot00000000000000varnishtest "Increment counter if http_req_size is exhausted" server s1 { } -start varnish v1 -arg "-p http_req_size=1024" -vcl+backend { } -start client c1 { non_fatal send "GET /" send_n 2048 "A" send " HTTP/1.1\r\n\r\n" expect_close } -run varnish v1 -expect sc_rx_overflow == 1 varnish-7.5.0/bin/varnishtest/tests/r01612.vtc000066400000000000000000000016331457605730600210520ustar00rootroot00000000000000varnishtest "Missing Content-Length/T-E on passed empty chunked responses." server s1 { # Empty c-l response is OK. rxreq expect req.url == "/0" txresp # Nonzero chunked response is OK. rxreq expect req.url == "/1" send "HTTP/1.1 200 OK\n" send "Transfer-encoding: chunked\n" send "\n" chunkedlen 20 chunkedlen 0 # Empty chunked response is not. rxreq expect req.url == "/2" send "HTTP/1.1 200 OK\n" send "Transfer-encoding: chunked\n" send "\n" chunkedlen 0 } -start varnish v1 -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_response { set beresp.do_stream = false; } } -start client c1 { txreq -url "/0" rxresp expect resp.bodylen == 0 expect resp.http.Content-Length == "0" txreq -url "/1" rxresp expect resp.bodylen == 20 expect resp.http.Content-Length == "20" txreq -url "/2" rxresp expect resp.bodylen == 0 expect resp.http.Content-Length == "0" } -run varnish-7.5.0/bin/varnishtest/tests/r01613.vtc000066400000000000000000000007461457605730600210570ustar00rootroot00000000000000varnishtest "Extra Connection header erroneously inserted." 
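# v1 fetches from v2, which sets "Connection: close" in vcl_deliver; after
# std.collect() on the Connection header in v1, the value must still be a
# single "close", i.e. no extra Connection header was inserted on the way.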
server s1 { rxreq txresp } -start varnish v2 -vcl+backend { sub vcl_deliver { set resp.http.Connection = "close"; } } -start varnish v1 -vcl { import std; backend b { .host = "${v2_addr}"; .port = "${v2_port}"; } sub vcl_backend_response { std.collect(beresp.http.Connection); set beresp.http.foo = beresp.http.Connection; } } -start client c1 { txreq rxresp expect resp.http.foo == "close" } -run varnish-7.5.0/bin/varnishtest/tests/r01624.vtc000066400000000000000000000016321457605730600210540ustar00rootroot00000000000000varnishtest "broken gunzip delivery" server s1 { rxreq txresp -nolen \ -hdr "Content-Encoding: gzip" \ -hdr "Transfer-Encoding: Chunked" send "164\r\n" sendhex "1f 8b 08 00 f3 7e 60 54 00 03 9d 94 d1 6e 82 30" sendhex "14 86 ef fb 14 ff 23 70 0e 28 7a 69 b2 78 61 e2" sendhex "76 c1 92 5d a3 69 e6 12 27 a6 b3 4b f6 f6 93 a3" sendhex "24 a5 2d 16 e9 0d f0 51 4e db bf 1f 05 52 8d 33" sendhex "2a 54 7b b3 a9 4f b6 36 7f ce ab b5 de 99 3e da" sendhex "d6 66 7f e8 7d be 3a 9b af 63 8f a8 6d 23 d7 39" sendhex "28 bf 56 07 97 dd 9b 1c 94 81 4a 70 11 21 39 09" # Truncated delay .2 accept rxreq txresp -bodylen 7 } -start varnish v1 -vcl+backend {} -start client c1 { txreq rxresphdrs expect resp.status == 200 non_fatal rxrespbody } -run delay .2 client c1 { # Test varnishd is still running txreq -url /2 rxresp expect resp.bodylen == 7 } -run varnish v1 -expect fetch_failed == 1 varnish-7.5.0/bin/varnishtest/tests/r01627.vtc000066400000000000000000000005701457605730600210570ustar00rootroot00000000000000varnishtest "#1627, wrong CL for gzipped+streamed content with HTTP/1.0 client" server s1 { rxreq txresp -body "Testing" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = true; set beresp.do_gzip = true; } } -start client c1 { txreq -proto "HTTP/1.0" -hdr "Accept-Encoding: gzip" rxresp gunzip expect resp.bodylen == 7 } -run varnish-7.5.0/bin/varnishtest/tests/r01637.vtc000066400000000000000000000024031457605730600210550ustar00rootroot00000000000000varnishtest "do_esi + do_gzip + out of storage in VFP: #1637" # see also r03502 for failure case in vbf_beresp2obj() server s1 { # First consume (almost) all of the storage rxreq expect req.url == /url1 txresp -bodylen 1040000 rxreq expect req.url == / txresp -bodylen 9000 } -start varnish v1 -arg "-sdefault,1M" -arg "-p nuke_limit=0 -p gzip_level=0" \ -vcl+backend { sub vcl_backend_response { set beresp.http.free = storage.s0.free_space; if (bereq.url == "/") { set beresp.do_esi = true; set beresp.do_gzip = true; } } } -start logexpect l1 -v v1 -g vxid -q "vxid == 1004" { expect 26 1004 VCL_call {^BACKEND_RESPONSE} expect 0 = BerespHeader {^free:} expect 0 = VCL_return {^deliver} expect 0 = Timestamp {^Process} expect 0 = Filters {^ esi_gzip} expect 0 = BerespUnset {^Content-Length:} expect 0 = BerespHeader {^Content-Encoding: gzip} expect 0 = BerespHeader {^Vary: Accept-Encoding} expect 0 = Storage { s0$} expect 0 = Fetch_Body expect 0 = FetchError {^Could not get storage} expect 0 = Gzip expect 0 = BackendClose } -start client c1 { txreq -url /url1 rxresp expect resp.status == 200 txreq rxresp expect resp.status == 503 } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r01638.vtc000066400000000000000000000014351457605730600210620ustar00rootroot00000000000000varnishtest "Test retry of straight insufficient bytes pass-fetch : do_stream = false" server s1 { rxreq txresp -hdr "Content-Length: 10000" -nolen -bodylen 5000 } server s2 { rxreq txresp -bodylen 5 } server s1 -start server s2 -start varnish 
v1 -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_fetch { set bereq.between_bytes_timeout = 10s; if (bereq.retries == 0) { set bereq.backend = s1; } else { set bereq.backend = s2; } } sub vcl_backend_response { set beresp.do_stream = false; } sub vcl_backend_error { if (bereq.retries == 0) { return (retry); } else { return (deliver); } } } -start client c1 { timeout 10 txreq -url "/" rxresp expect resp.status == 200 expect resp.bodylen == 5 } -run server s1 -wait server s2 -wait varnish-7.5.0/bin/varnishtest/tests/r01641.vtc000066400000000000000000000014761457605730600210610ustar00rootroot00000000000000varnishtest "Test retry of straight insufficient bytes pass-fetch : do_esi = true" server s1 { rxreq txresp -hdr "Content-Length: 10000" -nolen -bodylen 5000 } server s2 { rxreq txresp -bodylen 5 } server s1 -start server s2 -start varnish v1 -arg "-p feature=+esi_disable_xml_check" -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_fetch { set bereq.between_bytes_timeout = 10s; if (bereq.retries == 0) { set bereq.backend = s1; } else { set bereq.backend = s2; } } sub vcl_backend_response { set beresp.do_esi = true; } sub vcl_backend_error { if (bereq.retries == 0) { return (retry); } else { return (deliver); } } } -start client c1 { timeout 10 txreq -url "/" rxresp expect resp.status == 200 expect resp.bodylen == 5 } -run server s1 -wait server s2 -wait varnish-7.5.0/bin/varnishtest/tests/r01644.vtc000066400000000000000000000006701457605730600210570ustar00rootroot00000000000000varnishtest "test access to cache_param from vmod" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import debug; sub vcl_deliver { set resp.http.foo = debug.vre_limit(); } } -start varnish v1 -cliok "param.set pcre2_match_limit 100" client c1 { txreq rxresp expect resp.http.foo == 100 } -run varnish v1 -cliok "param.set pcre2_match_limit 200" client c1 { txreq rxresp expect resp.http.foo == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r01648.vtc000066400000000000000000000032061457605730600210610ustar00rootroot00000000000000varnishtest "#1648 - corrupt object in cache through IMS update" barrier b1 cond 3 # This server sends a broken response body server s1 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" -hdr "Etag: \"foo\"" -hdr "Server: s1" barrier b1 sync delay 1 chunked "abc" } -start # This server validates the streaming response from s1 as it hasn't failed yet server s2 { rxreq expect req.http.If-None-Match == "\"foo\"" barrier b1 sync txresp -status 304 -nolen -hdr "Server: s2" } -start # This server sends the proper response body server s3 { rxreq txresp -hdr "Server: s3" -body "abcdef" } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.http.client == "c1") { set req.backend_hint = s1; } else if (req.http.client == "c2") { set req.backend_hint = s2; } else if (req.http.client == "c3") { set req.backend_hint = s3; } } sub vcl_backend_response { if (bereq.http.client == "c1") { set beresp.ttl = 0.1s; set beresp.grace = 0s; set beresp.keep = 60s; } } } -start varnish v1 -cliok "param.set debug +syncvsl" # This client gets a streaming failing result from s1 client c1 { txreq -hdr "Client: c1" rxresphdrs expect resp.status == 200 expect resp.http.transfer-encoding == "chunked" } -start delay 1 # This client gets a streaming failing result from s1 through # IMS update by s2 client c2 { txreq -hdr "Client: c2" barrier b1 sync rxresphdrs expect resp.status == 503 } -run delay 1 # This client should get a fresh fetch from s3 client c3 { txreq -hdr "Client: c3" 
rxresp expect resp.status == 200 expect resp.body == "abcdef" } -run client c1 -wait varnish v1 -expect fetch_failed == 2 varnish-7.5.0/bin/varnishtest/tests/r01650.vtc000066400000000000000000000004471457605730600210560ustar00rootroot00000000000000varnishtest "xff handling discards multiple headers" server s1 { rxreq expect req.http.X-Forwarded-For == "1.2.3.4, 5.6.7.8, ${localhost}" txresp } -start varnish v1 -vcl+backend { } -start client c1 { txreq -hdr "X-Forwarded-For: 1.2.3.4" -hdr "X-Forwarded-For: 5.6.7.8" rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r01660.vtc000066400000000000000000000005271457605730600210560ustar00rootroot00000000000000varnishtest "#1660: range and synth" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_recv { return (synth(200, "OK")); } } -start client c1 { txreq -hdr "Range: 0-1" rxresp expect resp.status == 416 txreq -hdr "Range: bytes=0-1" rxresp expect resp.status == 206 expect resp.http.content-length == 2 } -run varnish-7.5.0/bin/varnishtest/tests/r01662.vtc000066400000000000000000000006751457605730600210640ustar00rootroot00000000000000varnishtest "Unhide http_ForceField() calls in VSL" server s1 { rxreq expect req.method == "GET" expect req.proto == "HTTP/1.1" txresp } -start varnish v1 -vcl+backend { } -start logexpect l1 -v v1 -g request { expect * 1001 ReqMethod "HEAD" expect * = ReqProtocol "HTTP/1.0" expect * 1002 BereqMethod "GET" expect 1 = BereqProtocol "HTTP/1.1" } -start client c1 { txreq -req HEAD -proto HTTP/1.0 rxresp } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r01665.vtc000066400000000000000000000003701457605730600210570ustar00rootroot00000000000000varnishtest "Ticket 1665 regression test: wrong behavior of timeout_req" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { } -start client c1 { delay 1 send "GET " delay 1.8 send "/bar\n\n " delay 0.1 send "GET" rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r01672.vtc000066400000000000000000000007511457605730600210600ustar00rootroot00000000000000varnishtest "#1672: Bogus 304 backend reply" # First serve a non-200 status object to the cache, # then revalidate it unconditionally server s1 { rxreq txresp -status 404 rxreq txresp -status 304 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 0.1s; set beresp.grace = 0s; set beresp.keep = 10s; } } -start client c1 { txreq rxresp expect resp.status == 404 } -run delay 0.2 client c1 { txreq rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r01684.vtc000066400000000000000000000014171457605730600210630ustar00rootroot00000000000000varnishtest "Regression test for #1684" server s1 { rxreq txresp -hdr "foo: 1" accept rxreq txresp -hdr "foo: 2" accept rxreq txresp -hdr "foo: 3" } -start varnish v1 -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_response { set beresp.http.bar = bereq.retries; if (beresp.http.foo != bereq.http.stop) { return (retry); } } } -start # check log for the aborted POST logexpect l1 -v v1 -g request { expect * * Begin "^bereq .* retry" expect * = BereqHeader "^X-Varnish:" expect * = BereqUnset "^X-Varnish:" expect * = BereqHeader "^X-Varnish:" } -start varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set max_retries 2" client c1 { txreq -hdr "stop: 3" rxresp expect resp.http.foo == 3 } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r01688.vtc000066400000000000000000000020621457605730600210640ustar00rootroot00000000000000varnishtest "ESI-included, compressed synthetic responses" 
server s1 { rxreq expect req.url == "/bar" txresp -gzipbody {} rxreq expect req.url == "/baz" txresp -gzipbody {} } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/foo" || req.url == "/quux") { return(synth(998, "included synthetic response")); } } sub vcl_synth { if (resp.status == 998) { set resp.status = 200; set resp.body = "this is the body of an " + "included synthetic response"; return(deliver); } } sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { txreq -url /bar rxresp expect resp.status == 200 delay .1 expect resp.body == "this is the body of an included synthetic response" txreq -url /baz -hdr "Accept-Encoding: gzip" timeout 2 rxresp expect resp.status == 200 expect resp.http.Content-Encoding == "gzip" gunzip expect resp.body == "this is the body of an included synthetic response" } -run varnish-7.5.0/bin/varnishtest/tests/r01691.vtc000066400000000000000000000004431457605730600210570ustar00rootroot00000000000000varnishtest "Test bogus Content-Length header" server s1 { rxreq txresp -nolen -hdr "Content-Length: bogus" } -start varnish v1 -vcl+backend { } -start logexpect l1 -v v1 { expect * 1002 Error "Body cannot be fetched" } -start client c1 { txreq rxresp } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r01693.vtc000066400000000000000000000007031457605730600210600ustar00rootroot00000000000000varnishtest "Check hash is logged when the Hash bit is set" server s1 { rxreq txresp } -start varnish v1 -arg "-p vsl_mask=+Hash" -vcl+backend { sub vcl_hash { hash_data("1" + req.http.foo + "3"); } } -start logexpect l1 -v v1 { expect * 1001 Hash "1" expect 0 1001 Hash "bar" expect 0 1001 Hash "3" expect 0 1001 Hash "/" expect 0 1001 Hash "${localhost}" } -start client c1 { txreq -hdr "foo: bar" rxresp } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r01729.vtc000066400000000000000000000015351457605730600210640ustar00rootroot00000000000000varnishtest "C-L/T-E:chunked conflict" server s1 { non_fatal rxreq expect req.bodylen == 20 send "HTTP/1.1 200 OK\r\n" send "Content-Length: 31\r\n" send "Transfer-Encoding: chunked\r\n" send "\r\n" send "14\r\n" send "0123456789" send "0123456789" send "0\r\n" send "\r\n" } -start varnish v1 -vcl+backend { } -start client c1 { non_fatal send "PUT /1 HTTP/1.1\r\n" send "Host: foo\r\n" send "Content-Length: 31\r\n" send "Transfer-Encoding: chunked\r\n" send "\r\n" send "14\r\n" send "0123456789" send "0123456789" send "0\r\n" send "\r\n" rxresp expect resp.status == 400 } -run client c1 { fatal send "PUT /2 HTTP/1.1\r\n" send "Host: foo\r\n" send "Transfer-Encoding: chunked\r\n" send "\r\n" send "14\r\n" send "0123456789" send "0123456789" send "0\r\n" send "\r\n" rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r01730.vtc000066400000000000000000000004071457605730600210510ustar00rootroot00000000000000varnishtest "Test connection error on pipe" varnish v1 -vcl { backend default { .host = "${bad_backend}"; } sub vcl_recv { return (pipe); } } -start client c1 { txreq expect_close } -run varnish v1 -expect sc_tx_error == 1 varnish-7.5.0/bin/varnishtest/tests/r01732.vtc000066400000000000000000000012141457605730600210500ustar00rootroot00000000000000varnishtest "range related panic" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 10 chunkedlen 10 barrier b1 sync chunkedlen 10 chunkedlen 10 chunkedlen 10 chunkedlen 0 delay .1 barrier b2 sync } -start varnish v1 -vcl+backend { } -start client c1 { txreq 
	-hdr "Range: bytes=0-100"
	rxresphdrs
	expect resp.status == 206
	expect resp.http.Content-Range == "bytes 0-100/*"
} -run

delay .1
barrier b1 sync
barrier b2 sync
delay .4

client c1 {
	txreq -hdr "Range: bytes=0-100"
	rxresp
	expect resp.status == 206
	expect resp.http.Content-Range == "bytes 0-49/50"
} -run

varnish-7.5.0/bin/varnishtest/tests/r01737.vtc

varnishtest "#1737 - ESI sublevel session close"

barrier b1 cond 2

# Build an ESI request tree that fails on flush before include at two different
# levels. Synchronize a client close after the response headers have been
# received by the client. This produces write errors for the body parts in all
# fragments.

server s1 {
	rxreq
	txresp -body {}
	barrier b1 sync

	rxreq
	delay 1
	txresp -body {22}

	rxreq
	txresp -body {1}

	rxreq
	expect req.url == "/check"
	rxresp
} -start

# give enough stack to 32bit systems
varnish v1 -cliok "param.set thread_pool_stack 80k"
varnish v1 -cliok "param.set feature +esi_disable_xml_check"

varnish v1 -vcl+backend {
	sub vcl_backend_response {
		set beresp.do_esi = true;
	}
} -start

# Send request, read response headers then close connection
client c1 {
	txreq
	rxresp -no_obj
	barrier b1 sync
} -run

delay 3

# Check that Varnish is alive
client c1 {
	txreq
	rxresp
} -run

varnish-7.5.0/bin/varnishtest/tests/r01739.vtc

varnishtest "Check workspace overflow in fetch processor"

server s1 {
	rxreq
	txresp -bodylen 1024
} -start

varnish v1 -vcl+backend {
	import vtc;

	sub vcl_backend_response {
		set beresp.do_gzip = true;
		vtc.workspace_alloc(backend, vtc.workspace_free(backend) - 16);
	}
} -start

logexpect l1 -v v1 -g raw {
	expect * 1002 FetchError {^Workspace overflow}
	expect * = Error {^out of workspace [(]bo[)]}
} -start

client c1 {
	txreq
	rxresp
	expect resp.status == 503
} -run

logexpect l1 -wait

varnish v1 -expect ws_backend_overflow == 1

varnish-7.5.0/bin/varnishtest/tests/r01746.vtc

varnishtest "ESI include is 204"

server s1 {
	rxreq
	txresp -gzipbody {
} rxreq expect req.url == /b txresp -status 204 -nolen \ -hdr "Content-encoding: gzip" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { txreq rxresp expect resp.body == "" } -run varnish v1 -expect esi_errors == 0 varnish-7.5.0/bin/varnishtest/tests/r01755.vtc000066400000000000000000000006301457605730600210560ustar00rootroot00000000000000varnishtest "The (struct backend).n_conn counter is never decremented" server s1 { rxreq txresp rxreq txresp } -start varnish v1 -vcl { backend s1 { .host = "${s1_addr}"; .port = "${s1_port}"; .max_connections = 1; } } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 txreq -url "/bar" rxresp expect resp.status == 200 } -run varnish v1 -expect backend_busy == 0 varnish-7.5.0/bin/varnishtest/tests/r01761.vtc000066400000000000000000000010151457605730600210510ustar00rootroot00000000000000varnishtest "204 response with body" server s1 { rxreq txresp -status 204 -body "HiHo" } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (beresp.http.foo == "bar") { set beresp.status = 204; } } } -start client c1 { txreq rxresp expect resp.status == 503 } -run server s1 { rxreq txresp -hdr "Foo: bar" -body "HiHo" } -start client c1 { txreq -url /2 rxresp expect resp.status == 204 expect resp.http.content-length == "" expect resp.http.transfer-encoding == "" } -run varnish-7.5.0/bin/varnishtest/tests/r01762.vtc000066400000000000000000000041121457605730600210530ustar00rootroot00000000000000varnishtest "test vsl api handling of incomplete vtxes combined with bad vxids" feature cmd false server s1 { rxreq txresp accept rxreq delay 5 txresp } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (bereq.url == "/retry" && bereq.retries == 0) { return (retry); } } sub vcl_deliver { if (req.url == "/restart" && req.restarts == 0) { return (restart); } } } -start # xid spill into client marker emitting a bad SLT_Link for bereq retry # triggering vtx_force by way of a timeout # # VSLb(bo->vsl, SLT_Link, "bereq %u retry", wid); varnish v1 -cliok "param.set debug +syncvsl" # vxid wrap at 1<<30 varnish v1 -cliok "debug.xid 1073741823" logexpect l1 -v v1 -g request -T 2 { expect 0 1 Begin "req 107374182" expect * = ReqStart expect 0 = ReqMethod GET expect 0 = ReqURL / expect 0 = ReqProtocol HTTP/1.1 expect * = ReqHeader "Foo: bar" expect * = Link "bereq 2 fetch" expect * = VSL "timeout" expect * = End "synth" expect 0 2 Begin "bereq 1" expect * 2 Link "bereq 3 retry" expect * = End expect 0 3 Begin "bereq 2 retry" expect * = End } -start client c1 { txreq -url "/retry" -hdr "Foo: bar" rxresp expect resp.status == 200 } -run logexpect l1 -wait ################################################################################ # case xid spill into client marker emitting a bad SLT_Link for restart # # VSLb(req->vsl, SLT_Link, "req %u restart", wid); server s1 { rxreq txresp } -start varnish v1 -cliok "param.set debug +syncvsl" # vxid wrap at 1<<30 varnish v1 -cliok "debug.xid 1073741823" logexpect l1 -v v1 -g request { expect 0 1 Begin "req 1073741823" expect * = ReqStart expect 0 = ReqMethod GET expect 0 = ReqURL / expect 0 = ReqProtocol HTTP/1.1 expect * = ReqHeader "Foo: bar" expect * = Link "bereq 2 fetch" expect * = Link "req 3 restart" expect * = End expect 0 2 Begin "bereq 1" expect * = End expect 0 3 Begin "req 1 restart" expect * = End } -start client c1 { txreq -url "/restart" -hdr "Foo: bar" rxresp expect resp.status == 200 } -run logexpect l1 -wait 
varnish-7.5.0/bin/varnishtest/tests/r01764.vtc000066400000000000000000000015411457605730600210600ustar00rootroot00000000000000varnishtest "Test nuke_limit" server s1 { # First consume (almost) all of the storage rxreq expect req.url == /url1 txresp -bodylen 200000 rxreq expect req.url == /url2 txresp -bodylen 200000 rxreq expect req.url == /url3 txresp -bodylen 200000 rxreq expect req.url == /url4 txresp -bodylen 200000 non_fatal rxreq expect req.url == /url5 txresp -bodylen 1000000 } -start varnish v1 -arg "-sdefault,1M" -arg "-p nuke_limit=1" -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } } -start client c1 { txreq -url /url1 rxresp expect resp.status == 200 txreq -url /url2 rxresp expect resp.status == 200 txreq -url /url3 rxresp expect resp.status == 200 txreq -url /url4 rxresp expect resp.status == 200 txreq -url /url5 rxresp expect resp.status == 503 } -run varnish v1 -expect n_lru_limited == 1 varnish-7.5.0/bin/varnishtest/tests/r01765.vtc000066400000000000000000000005401457605730600210570ustar00rootroot00000000000000varnishtest "subset headers in packed format" server s1 { rxreq txresp -hdr "Foo:bar" rxreq txresp -hdr "Foo:bax" } -start varnish v1 -vcl+backend { sub vcl_hit { if (obj.http.foo == "bar") { return (pass); } } } -start client c1 { txreq rxresp expect resp.http.foo == bar txreq rxresp expect resp.http.foo == bax } -run delay 1 varnish-7.5.0/bin/varnishtest/tests/r01768.vtc000066400000000000000000000005441457605730600210660ustar00rootroot00000000000000varnishtest "http header collision -/_" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_deliver { if (req.http.foo_bar == req.http.foo-bar) { set resp.http.foo = "xxx"; } else { set resp.http.foo = "yyy"; } } } -start client c1 { txreq -hdr "foo_bar: 1" -hdr "foo-bar: 2" rxresp expect resp.http.foo == yyy } -run varnish-7.5.0/bin/varnishtest/tests/r01772.vtc000066400000000000000000000006061457605730600210600ustar00rootroot00000000000000varnishtest "#1772: Honor first_byte_timeout on a recycled connection" server s1 { rxreq expect req.url == "/first" txresp rxreq expect req.url == "/second" delay 2 txresp } -start varnish v1 -arg "-p first_byte_timeout=1" -vcl+backend {} -start client c1 { txreq -url "/first" rxresp expect resp.status == 200 txreq -url "/second" rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r01775.vtc000066400000000000000000000003141457605730600210570ustar00rootroot00000000000000varnishtest "Test loading a VCL in cold state" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { } -start varnish v1 -cliok {vcl.inline vcl2 "vcl 4.0; backend b { .host = \":80\";}" cold} varnish-7.5.0/bin/varnishtest/tests/r01777.vtc000066400000000000000000000006501457605730600210640ustar00rootroot00000000000000varnishtest "range asked longer than object" server s1 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" delay .5 chunkedlen 64 chunkedlen 64 chunkedlen 0 } -start varnish v1 -vcl+backend { } -start client c1 { txreq -hdr "Range: bytes=0-129" rxresp expect resp.status == 206 expect resp.http.Content-Range == "bytes 0-129/*" expect resp.http.Content-Length == expect resp.bodylen == 128 } -run varnish-7.5.0/bin/varnishtest/tests/r01781.vtc000066400000000000000000000013231457605730600210550ustar00rootroot00000000000000varnishtest "#1781 gzip checksum with multilevel esi" server s1 { rxreq txresp -body {Baz} rxreq expect req.url == /1 txresp -body {Bar} rxreq expect req.url == /2 txresp -body {Foo} } -start # give enough stack to 32bit 
systems varnish v1 -cliok "param.set thread_pool_stack 80k" varnish v1 -cliok "param.set feature +esi_disable_xml_check" varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_gzip = true; set beresp.do_esi = true; } } -start client c1 { txreq rxresp expect resp.body == "FooBarBaz" txreq -hdr "Accept-Encoding: gzip" rxresp expect resp.http.content-encoding == "gzip" gunzip expect resp.body == "FooBarBaz" } -run varnish-7.5.0/bin/varnishtest/tests/r01783.vtc000066400000000000000000000005301457605730600210560ustar00rootroot00000000000000varnishtest "POST with no body" server s1 { rxreq txresp -hdr "foo: 1" rxreq txresp -nolen -hdr "foo: 2" -hdr "Transfer-Encoding: chunked" chunkedlen 0 } -start varnish v1 -vcl+backend {} -start client c1 { txreq -req "POST" -nolen rxresp expect resp.http.foo == 1 txreq -req "POST" -nolen rxresp expect resp.http.foo == 2 } -run varnish-7.5.0/bin/varnishtest/tests/r01801.vtc000066400000000000000000000026451457605730600210560ustar00rootroot00000000000000varnishtest "Test parsing IP constants" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import std; sub vcl_deliver { set resp.http.foo1 = std.ip("..", "1.2.3.4"); set resp.http.foo2 = std.ip("..", "1.2.3.4:8000"); set resp.http.foo3 = std.ip("..", "1.2.3.4 8000"); set resp.http.foo4 = std.ip("..", "::1"); set resp.http.foo5 = std.ip("..", "[::1]"); set resp.http.foo6 = std.ip("..", "[::1]:8000"); set resp.http.bar1 = std.port("1.2.3.4"); set resp.http.bar2 = std.port("1.2.3.4:8000"); set resp.http.bar3 = std.port("1.2.3.4 8000"); set resp.http.bar4 = std.port("::1"); set resp.http.bar5 = std.port("[::1]"); set resp.http.bar6 = std.port("[::1]:8000"); } } -start client c1 { txreq rxresp expect resp.http.foo1 == "1.2.3.4" expect resp.http.foo2 == "1.2.3.4" expect resp.http.foo3 == "1.2.3.4" expect resp.http.foo4 == "::1" expect resp.http.foo5 == "::1" expect resp.http.foo6 == "::1" expect resp.http.bar1 == "80" expect resp.http.bar2 == "8000" expect resp.http.bar3 == "8000" expect resp.http.bar4 == "80" expect resp.http.bar5 == "80" expect resp.http.bar6 == "8000" } -run varnish v1 -errvcl "could not be resolved to an IP address" { import std; sub vcl_deliver { set resp.http.foo = std.ip("..", "::1::2"); } } varnish v1 -errvcl "could not be resolved to an IP address" { import std; sub vcl_deliver { set resp.http.foo = std.ip("..", "1.2.3.4::80"); } } varnish-7.5.0/bin/varnishtest/tests/r01804.vtc000066400000000000000000000015661457605730600210620ustar00rootroot00000000000000varnishtest "#1804: varnishapi transaction grouping fails for PROXY" server s1 { rxreq txresp } -start varnish v1 -arg "-a foo=${listen_addr},PROXY" varnish v1 -arg "-p thread_pools=1" varnish v1 -vcl+backend "" -start logexpect l1 -v v1 -g session { expect * 1000 Begin {^sess .* PROXY$} expect 0 = SessOpen {^.* foo .*} expect * = Proxy {^1 } expect * 1001 Begin {^req} } -start client c1 { send "PROXY TCP4 1.2.3.4 5.6.7.8 1234 5678\r\n" txreq rxresp } -run logexpect l1 -wait logexpect l2 -v v1 -g session { expect * 1003 Begin {^sess .* PROXY$} expect 0 = SessOpen {^.* foo .*} expect * = Proxy {^2 } expect * 1004 Begin {^req} } -start client c2 { # good IPv4 sendhex "0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a" sendhex "21 11 00 0c" sendhex "01 02 03 04" sendhex "05 06 07 08" sendhex "09 0a" sendhex "0b 0c" txreq rxresp } -run logexpect l2 -wait varnish-7.5.0/bin/varnishtest/tests/r01806.vtc000066400000000000000000000010341457605730600210520ustar00rootroot00000000000000varnishtest "#1806: return pipe w/ STOLEN connection" server 
s1 { rxreq expect req.url == "/pass-me" txresp -body "I was PASSed\n" rxreq expect req.url == "/pipe-me" txresp -body "I was PIPEd\n" } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/pipe-me") { return (pipe); } return (pass); } } -start varnish v1 -cliok "param.set debug +syncvsl" client c1 { txreq -url /pass-me rxresp expect resp.status == 200 txreq -req POST -url /pipe-me -body "asdf" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r01807.vtc000066400000000000000000000014221457605730600210540ustar00rootroot00000000000000varnishtest "Decreasing http_max_hdr" server s1 { rxreq txresp \ -hdr "h00: 00" \ -hdr "h01: 01" \ -hdr "h02: 02" \ -hdr "h03: 03" \ -hdr "h04: 04" \ -hdr "h05: 05" \ -hdr "h06: 06" \ -hdr "h07: 07" \ -hdr "h08: 08" \ -hdr "h09: 09" \ -hdr "h10: 10" \ -hdr "h11: 11" \ -hdr "h12: 12" \ -hdr "h13: 13" \ -hdr "h14: 14" \ -hdr "h15: 15" \ -hdr "h16: 16" \ -hdr "h17: 17" \ -hdr "h18: 18" \ -hdr "h19: 19" \ -hdr "h20: 20" \ -hdr "h21: 21" \ -hdr "h22: 22" \ -hdr "h23: 23" \ -hdr "h24: 24" } -start varnish v1 -vcl+backend { } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.http.h24 == 24 } -run varnish v1 -cliok {param.set http_max_hdr 32} client c1 { txreq rxresp expect resp.status == 500 } -run varnish-7.5.0/bin/varnishtest/tests/r01810.vtc000066400000000000000000000005621457605730600210520ustar00rootroot00000000000000varnishtest "POST HTTP/1.0 response" server s1 { non_fatal rxreq txresp -proto HTTP/1.1 -nolen -hdr "Connection: close" send "Hello World\n" delay .4 } -start varnish v1 -syntax 4.0 -vcl+backend { sub vcl_backend_fetch { set bereq.proto = "HTTP/1.0"; } } -start client c1 { txreq -req POST -hdr "Content-Length: 0" rxresp expect resp.bodylen == 12 } -run varnish-7.5.0/bin/varnishtest/tests/r01818.vtc000066400000000000000000000023411457605730600210570ustar00rootroot00000000000000varnishtest "#1818: verify that grace works for hit_for_pass objects" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq expect req.http.a == "1" txresp rxreq expect req.http.b == "1" barrier b2 sync barrier b1 sync txresp } -start server s2 { rxreq expect req.http.c == "1" barrier b1 sync txresp } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.http.c) { set req.backend_hint = s2; } } sub vcl_miss { set req.http.miss = "1"; } sub vcl_pass { set req.http.pass = "1"; } sub vcl_deliver { if (req.http.miss) { set resp.http.miss = req.http.miss; } if (req.http.pass) { set resp.http.pass = req.http.pass; } } sub vcl_backend_response { set beresp.ttl = 0.1s; set beresp.grace = 1m; set beresp.uncacheable = true; } } -start varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set debug +waitinglist" varnish v1 -cliok "param.show debug" client c1 { # This is a plain miss txreq -hdr "a: 1" rxresp expect resp.http.miss == "1" delay .2 # This should also miss, because the HFP is expired txreq -hdr "b: 1" rxresp expect resp.http.miss == "1" } -start client c2 { barrier b2 sync txreq -hdr "c: 1" rxresp expect resp.http.miss == "1" } -run varnish-7.5.0/bin/varnishtest/tests/r01821.vtc000066400000000000000000000015551457605730600210570ustar00rootroot00000000000000varnishtest "Slim down hit-for-miss / hit-for-miss objects" # see also #2768 server s1 { rxreq expect req.url == "/hfm" txresp -hdr "HFM: True" -bodylen 65530 rxreq expect req.url == "/hfp" txresp -hdr "HFP: True" -bodylen 65550 } -start varnish v1 -arg "-s Transient=default" -vcl+backend { sub vcl_backend_response { if (bereq.url == "/hfm") { set 
beresp.uncacheable = true; } else if (bereq.url == "/hfp") { return (pass(1m)); } } } -start logexpect l1 -v v1 -g raw { expect * * Storage "Transient" expect * * Storage "Transient" } -start client c1 { txreq -url "/hfm" rxresp expect resp.status == 200 expect resp.http.hfm == True txreq -url "/hfp" rxresp expect resp.status == 200 expect resp.http.hfp == True } -run logexpect l1 -wait varnish v1 -expect SM?.Transient.c_bytes > 131072 varnish v1 -expect SM?.Transient.g_bytes < 2000 varnish-7.5.0/bin/varnishtest/tests/r01826.vtc000066400000000000000000000005331457605730600210570ustar00rootroot00000000000000varnishtest "Check we ignore a zero C-L with a 204" server s1 { rxreq txresp -status 204 -bodylen 5 expect_close accept rxreq txresp -status 204 } -start varnish v1 -vcl+backend { } -start client c1 { txreq rxresp expect resp.status == 503 txreq rxresp expect resp.status == 204 expect resp.http.content-length == } -run varnish-7.5.0/bin/varnishtest/tests/r01834.vtc000066400000000000000000000035441457605730600210630ustar00rootroot00000000000000varnishtest "#1834 - Buffer overflow in backend workspace" # This test case checks fetch side for buffer overflow when there is little # workspace left. If failing it would be because we tripped the canary # at the end of the workspace. server s1 -repeat 64 { rxreq txresp } -start varnish v1 -vcl+backend { import std; import vtc; sub vcl_recv { return (pass); } sub vcl_backend_fetch { vtc.workspace_alloc(backend, -1 * std.integer(bereq.http.WS, 256)); } } -start client c1 { # Start with enough workspace to receive a good result txreq -hdr "WS: 320" rxresp expect resp.status == 200 # Continue with decreasing workspaces by decrements of 8 (sizeof void*) txreq -hdr "WS: 312" rxresp txreq -hdr "WS: 304" rxresp txreq -hdr "WS: 296" rxresp txreq -hdr "WS: 288" rxresp txreq -hdr "WS: 280" rxresp txreq -hdr "WS: 272" rxresp txreq -hdr "WS: 264" rxresp txreq -hdr "WS: 256" rxresp txreq -hdr "WS: 248" rxresp txreq -hdr "WS: 240" rxresp txreq -hdr "WS: 232" rxresp txreq -hdr "WS: 224" rxresp txreq -hdr "WS: 216" rxresp txreq -hdr "WS: 208" rxresp txreq -hdr "WS: 200" rxresp txreq -hdr "WS: 192" rxresp txreq -hdr "WS: 184" rxresp txreq -hdr "WS: 176" rxresp txreq -hdr "WS: 168" rxresp txreq -hdr "WS: 160" rxresp txreq -hdr "WS: 152" rxresp txreq -hdr "WS: 144" rxresp txreq -hdr "WS: 136" rxresp txreq -hdr "WS: 128" rxresp txreq -hdr "WS: 120" rxresp txreq -hdr "WS: 112" rxresp txreq -hdr "WS: 104" rxresp txreq -hdr "WS: 096" rxresp txreq -hdr "WS: 088" rxresp txreq -hdr "WS: 080" rxresp txreq -hdr "WS: 072" rxresp txreq -hdr "WS: 064" rxresp txreq -hdr "WS: 056" rxresp txreq -hdr "WS: 048" rxresp txreq -hdr "WS: 040" rxresp txreq -hdr "WS: 032" rxresp txreq -hdr "WS: 024" rxresp txreq -hdr "WS: 016" rxresp txreq -hdr "WS: 008" rxresp txreq -hdr "WS: 000" rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r01837.vtc000066400000000000000000000002761457605730600210650ustar00rootroot00000000000000varnishtest "Test VCC errors out if probe is used before it is defined" varnish v1 -errvcl "Symbol not found: 'p'" { backend b { .host = "${localhost}"; .probe = p; } probe p { } } varnish-7.5.0/bin/varnishtest/tests/r01838.vtc000066400000000000000000000010631457605730600210610ustar00rootroot00000000000000varnishtest "Uncompressed synthetic responses as esi includes" server s1 { rxreq txresp -body {} } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/foo") { return (synth(998)); } } sub vcl_synth { if (resp.status == 998) { set resp.status = 200; set 
resp.body = "synthetic body"; return (deliver); } } sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.body == "synthetic body" } -run varnish-7.5.0/bin/varnishtest/tests/r01843.vtc000066400000000000000000000004721457605730600210600ustar00rootroot00000000000000varnishtest "HTTP/1.0 POST and PUT need a valid Content-Length" server s1 { } -start varnish v1 -vcl+backend {} -start client c1 { txreq -proto HTTP/1.0 -req "POST" -nolen rxresp expect resp.status == 400 } -run client c1 { txreq -proto HTTP/1.0 -req "PUT" -nolen rxresp expect resp.status == 400 } -run varnish-7.5.0/bin/varnishtest/tests/r01847.vtc000066400000000000000000000016151457605730600210640ustar00rootroot00000000000000varnishtest "Test https_scheme parameter" server s1 { rxreq expect req.url == /bar txresp rxreq expect req.url == https://www.example.com/bar txresp } -start varnish v1 -vcl+backend { sub vcl_deliver { set resp.http.rxhost = req.http.host; set resp.http.rxurl = req.url; } } -start client c1 { txreq -url http://www.example.com/bar rxresp expect resp.http.rxhost == www.example.com expect resp.http.rxurl == /bar txreq -url https://www.example.com/bar rxresp expect resp.http.rxhost == "${localhost}" expect resp.http.rxurl == https://www.example.com/bar } -run varnish v1 -cliok "param.set feature +https_scheme" client c1 { txreq -url http://www.example.com/bar rxresp expect resp.http.rxhost == www.example.com expect resp.http.rxurl == /bar txreq -url https://www.example.com/bar rxresp expect resp.http.rxhost == www.example.com expect resp.http.rxurl == /bar } -run varnish-7.5.0/bin/varnishtest/tests/r01856.vtc000066400000000000000000000005401457605730600210600ustar00rootroot00000000000000varnishtest "setting empty strings in line1 fields" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_recv { set req.url = ""; } } -start logexpect l1 -v v1 -g raw { expect * 1001 VCL_Error "Setting req.url to empty string" } -start client c1 { txreq rxresp expect resp.reason == "VCL failed" } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r01857.vtc000066400000000000000000000012111457605730600210550ustar00rootroot00000000000000varnishtest "Check session herding" server s1 { rxreq txresp -hdr "Connection: close" -body "012345\n" } -start varnish v1 -vcl+backend { } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1001" delay 1 txreq -url "/" rxresp expect resp.status == 200 expect resp.http.X-Varnish == "1003 1002" } -run # Give varnish a chance to update stats delay .1 varnish v1 -expect sess_herd >= 1 varnish v1 -expect sess_conn == 1 varnish v1 -expect cache_hit == 1 varnish v1 -expect cache_miss == 1 varnish v1 -expect client_req == 2 varnish v1 -expect s_sess == 1 varnish v1 -expect s_fetch == 1 varnish-7.5.0/bin/varnishtest/tests/r01858.vtc000066400000000000000000000013451457605730600210660ustar00rootroot00000000000000varnishtest "Test a hit-for-miss does not issue an IMS request" server s1 { rxreq txresp \ -hdr {Etag: "foo"} \ -body "foo" rxreq expect req.http.if-none-match == txresp \ -hdr {Etag: "bar"} \ -body "bar" } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (beresp.status == 200) { set beresp.ttl = 1s; set beresp.uncacheable = true; return (deliver); } } } -start # Tests logging hit-for-miss on an expired object logexpect l1 -v v1 -g vxid { expect * 1003 HitMiss "^1002 -" } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.body 
== "foo" delay 1.5 txreq rxresp expect resp.status == 200 expect resp.body == "bar" } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r01878.vtc000066400000000000000000000013431457605730600210660ustar00rootroot00000000000000varnishtest "ESI delivering a gzip'd object works when parent is not gzip'd" server s1 { rxreq txresp -hdr "id: /" -body {<1>} rxreq expect req.url == "/foo" expect req.http.accept-encoding == "gzip" txresp -hdr "id: foo" -gzipbody {<2>} rxreq expect req.url == "/bar" txresp -hdr "id: bar" -body "<3>bar" } -start # give enough stack to 32bit systems varnish v1 -cliok "param.set thread_pool_stack 80k" varnish v1 -vcl+backend { sub vcl_backend_response { if (bereq.url != "/bar") { set beresp.do_esi = true; } return (deliver); } } -start client c1 { txreq rxresp expect resp.bodylen == 24 expect resp.body == {<1><2><3>bar} } -run varnish-7.5.0/bin/varnishtest/tests/r01879.vtc000066400000000000000000000016321457605730600210700ustar00rootroot00000000000000varnishtest "r01879: Check duplicate headers handling on IMS header merge" server s1 { rxreq txresp -hdr {etag: "foo"} -hdr "foo: a" -hdr "foo: b" -body "bdy" rxreq expect req.http.if-none-match == {"foo"} txresp -status 304 -hdr {etag: "foo"} -hdr "foo: c" -hdr "foo: d" rxreq txresp -hdr {etag: "bar"} -hdr "foo: a" -hdr "foo: b" -body "bdy" rxreq expect req.http.if-none-match == {"bar"} txresp -status 304 -hdr {etag: "bar"} } -start varnish v1 -vcl+backend { import std; sub vcl_backend_response { set beresp.ttl = 0.001s; set beresp.grace = 0.1s; set beresp.keep = 9999s; } sub vcl_deliver { std.collect(resp.http.foo); } } -start client c1 { txreq rxresp expect resp.http.foo == "a, b" delay .5 txreq rxresp expect resp.http.foo == "c, d" delay .5 } -run client c2 { txreq rxresp expect resp.http.foo == "a, b" delay .5 txreq rxresp expect resp.http.foo == "a, b" } -run varnish-7.5.0/bin/varnishtest/tests/r01881.vtc000066400000000000000000000005621457605730600210620ustar00rootroot00000000000000varnishtest "Test cached request bodies can be piped" server s1 { rxreq expect req.bodylen == 6 expect req.http.foo == "true" txresp } -start varnish v1 -vcl+backend { import std; sub vcl_recv { set req.http.foo = std.cache_req_body(10KB); return (pipe); } } -start client c1 { txreq -method POST -body "foobar" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r01890.vtc000066400000000000000000000003561457605730600210630ustar00rootroot00000000000000varnishtest "Test return(synth) from vcl_pipe" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_pipe { return (synth(401)); } } -start client c1 { txreq -req PROPFIND rxresp expect resp.status == 401 } -run varnish-7.5.0/bin/varnishtest/tests/r01914.vtc000066400000000000000000000017661457605730600210660ustar00rootroot00000000000000varnishtest "Set the storage backend for the request body" server s1 { rxreq expect req.bodylen == 100 txresp rxreq expect req.bodylen == 100 txresp } -start varnish v1 \ -arg "-s default,1MB" \ -arg "-s default,1MB" \ -arg "-s Transient=default" \ -syntax 4.0 \ -vcl+backend { import std; sub vcl_recv { if (req.url == "/1") { set req.storage = storage.s1; } std.cache_req_body(1KB); } sub vcl_backend_fetch { return (fetch); } sub vcl_backend_response { set beresp.storage = storage.s0; } } # -cli because accept_filter may not be supported varnish v1 -cli "param.set accept_filter off" varnish v1 -start client c1 { txreq -url /0 -bodylen 100 rxresp } -run varnish v1 -expect SM?.s0.c_bytes > 0 varnish v1 -expect 
SM?.s1.c_bytes == 0 varnish v1 -expect SM?.Transient.c_bytes > 0 client c1 { txreq -url /1 -bodylen 100 rxresp } -run varnish v1 -expect SM?.s0.c_bytes > 0 varnish v1 -expect SM?.s1.c_bytes > 0 varnish v1 -expect SM?.Transient.c_bytes > 0 varnish-7.5.0/bin/varnishtest/tests/r01918.vtc000066400000000000000000000003711457605730600210610ustar00rootroot00000000000000varnishtest "Check EOF responses work with HTTP/1.1" server s1 { rxreq txresp -nolen -bodylen 10 close accept } -start varnish v1 -vcl+backend { } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 10 } -run varnish-7.5.0/bin/varnishtest/tests/r01924.vtc000066400000000000000000000011271457605730600210560ustar00rootroot00000000000000varnishtest "Test std.{log,syslog} from vcl_{init,fini}" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import std; sub vcl_init { std.log("init" + " one " + "two"); std.syslog(8 + 7, "init"); } sub vcl_fini { std.log("fini" + " one " + "two"); std.syslog(8 + 7, "fini"); } } -start logexpect l1 -v v1 -g raw -d 1 { expect 0 0 CLI {^Rd vcl.load} expect 0 = VCL_Log {^init one two} expect * 0 CLI {^Rd vcl.discard} expect 0 = VCL_Log {^fini one two} } -start varnish v1 -vcl+backend { } varnish v1 -cliok "vcl.discard vcl1" logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r01927.vtc000066400000000000000000000023011457605730600210540ustar00rootroot00000000000000varnishtest "Test requests other than GET are cacheable" barrier b1 cond 2 server s1 { rxreq expect req.method == "POST" expect req.body == "foo" txresp -body bar rxreq expect req.method == "POST" expect req.body == "foo" txresp -body baz barrier b1 sync } -start varnish v1 -vcl+backend { sub vcl_recv { # We ignore the actual body for this test. set req.http.method = req.method; set req.http.hit = "No"; return (hash); } sub vcl_hit { set req.http.hit = "Yes"; } sub vcl_deliver { set resp.http.hit = req.http.hit; } sub vcl_backend_fetch { set bereq.method = bereq.http.method; } sub vcl_backend_response { set beresp.ttl = 0.5s; } } -start client c1 { txreq -req "POST" -body "foo" rxresp expect resp.body == "bar" expect resp.http.hit == "No" txreq -req "POST" -body "foo" rxresp expect resp.body == "bar" expect resp.http.hit == "Yes" # Wait until between ttl&grace delay 1.0 # Trigger bg fetch txreq -req "POST" -body "foo" rxresp expect resp.body == "bar" expect resp.http.hit == "Yes" barrier b1 sync delay 0.1 # Get new object, from cache txreq -req "POST" -body "foo" rxresp expect resp.body == "baz" expect resp.http.hit == "Yes" } -run varnish-7.5.0/bin/varnishtest/tests/r01941.vtc000066400000000000000000000020741457605730600210570ustar00rootroot00000000000000varnishtest "ESI memory should only be fully allocated" # In this test case, the cache is almost filled. Then we force the # allocation of ESI memory that does not fit in the cache, is bigger # than fetch_chunksize, but where half the amount can be allocated by # the stevedore. 
server s1 { rxreq expect req.url == /big txresp -bodylen 1037480 expect_close accept rxreq txresp -nolen -hdr "Content-Length: 6545" -body "" loop 92 { send {} } send "" expect_close accept rxreq expect req.url == /long1234567890123456789012345678901234567890.txt txresp -body {fo} } -start varnish v1 -arg "-pfetch_chunksize=4k" \ -arg "-p feature=+esi_disable_xml_check" \ -arg "-sdefault,1m" -vcl+backend { sub vcl_backend_fetch { set bereq.http.connection = "close"; } sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { txreq -url /big rxresp expect resp.bodylen == 1037480 txreq rxresp } -run varnish v1 -expect SM?.s0.c_fail == 1 varnish-7.5.0/bin/varnishtest/tests/r01953.vtc000066400000000000000000000012041457605730600210540ustar00rootroot00000000000000varnishtest "#1953: ved_stripgzip and failed object" server s1 { rxreq txresp -gzipbody {<1>} rxreq txresp -nolen -hdr "Content-Encoding: gzip" -hdr "Content-Length: 42" # No body sent, will cause fetch error # The delay allows time for the ESI deliver thread to get to the point # where it is waiting for this body delay 1 } -start varnish v1 -vcl+backend { sub vcl_backend_response { if (bereq.url == "/") { set beresp.do_esi = true; } } } -start client c1 { txreq -hdr "Accept-Encoding: gzip" rxresp expect resp.http.Content-Encoding == "gzip" gunzip expect resp.body == "<1>" } -run varnish-7.5.0/bin/varnishtest/tests/r01955.vtc000066400000000000000000000006151457605730600210630ustar00rootroot00000000000000varnishtest "Correct handling multiple Age headers in pass mode" # This affects Accept-Range as well but it's not possible to test # for that. server s1 { rxreq txresp -hdr "Cache-Control: max-age=1" -hdr "Age: 2" } -start varnish v1 -vcl+backend { import std; sub vcl_deliver { std.collect(resp.http.age); } } -start client c1 { txreq rxresp expect resp.http.age == "2" } -run varnish-7.5.0/bin/varnishtest/tests/r01956.vtc000066400000000000000000000017551457605730600210720ustar00rootroot00000000000000varnishtest "#1956: graced hit-for-miss, backend ims and obj flags inheritance" server s1 { rxreq txresp -hdr {ETag: "foo"} -body "asdf" rxreq expect req.http.if-none-match == {"foo"} txresp -status 304 -hdr {ETag: "bar"} rxreq expect req.http.if-none-match == "" txresp -status 200 -hdr {ETag: "baz"} -body "asdf" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 0.1s; set beresp.keep = 60s; if (beresp.http.etag == {""bar""}) { set beresp.grace = 60s; set beresp.uncacheable = true; } else { set beresp.grace = 0s; } return (deliver); } } -start client c1 { timeout 5 txreq -hdr "cnt: 1" rxresp expect resp.http.etag == {"foo"} expect resp.body == "asdf" delay 0.2 txreq -hdr "cnt: 2" rxresp expect resp.status == 200 expect resp.http.etag == {"bar"} expect resp.body == "asdf" delay 0.2 txreq -hdr "cnt: 3" rxresp expect resp.status == 200 expect resp.http.etag == {"baz"} expect resp.body == "asdf" } -run varnish-7.5.0/bin/varnishtest/tests/r01990.vtc000066400000000000000000000020431457605730600210570ustar00rootroot00000000000000varnishtest "workspace overflow with failed backend fetch" varnish v1 -vcl { import vtc; backend default { .host = "${bad_backend}"; } sub vcl_backend_fetch { # avoid LostHeader b Host: %s set bereq.http.Host = "${localhost}"; vtc.workspace_alloc(backend, vtc.workspace_free(backend)); } sub vcl_backend_error { # At this point we got no workspace to stringify # the non-standard status 299, but we will leave # the const "Unknown HTTP Status" reason set beresp.status = 299; } } 
-start logexpect l1 -v v1 -g raw { expect * 1002 FetchError {^out of workspace} expect * = BerespStatus {^503} expect * = BerespReason {^Backend fetch failed} expect * = Error {^out of workspace [(]bo[)]} expect * = LostHeader {^Date:} expect * = Error {^out of workspace [(]bo[)]} expect * = LostHeader {^299} } -start client c1 { txreq -url "/" rxresp expect resp.status == 503 expect resp.reason == "Unknown HTTP Status" } -run logexpect l1 -wait varnish v1 -expect ws_backend_overflow == 1 varnish-7.5.0/bin/varnishtest/tests/r02035.vtc000066400000000000000000000014551457605730600210540ustar00rootroot00000000000000varnishtest "Streaming range early finish" barrier b1 cond 2 server s1 { non_fatal rxreq txresp -nolen -hdr "Content-Length: 11" # Delay to get around Nagle. Without the delay the body bytes # would be in the same packet as the headers, and would end # up as pipelined data. Pipelined data wouldn't create a streaming # data event, breaking the test case. delay 1 send_n 10 "A" # Sema r1 halts the backend thread until client c1 is finished. barrier b1 sync send "B" } -start varnish v1 -vcl+backend { } -start client c1 { txreq -hdr "Range: bytes=0-4" rxresp expect resp.status == 206 expect resp.bodylen == 5 # The client thread should be ready to accept another request: txreq -hdr "Range: bytes=5-9" rxresp expect resp.status == 206 expect resp.bodylen == 5 barrier b1 sync } -run varnish-7.5.0/bin/varnishtest/tests/r02036.vtc000066400000000000000000000007011457605730600210460ustar00rootroot00000000000000varnishtest "NULL directors in vcl_init" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start varnish v1 -errvcl "None backend cannot be added" { import directors; probe cacheprobe { .initial = 0; } backend b01 { .host = "127.0.0.2"; .probe = cacheprobe; } sub vcl_init { new hash = directors.hash(); hash.add_backend(b01,1.0); new rr = directors.round_robin(); rr.add_backend(hash.backend("text")); } } varnish-7.5.0/bin/varnishtest/tests/r02042.vtc000066400000000000000000000141241457605730600210470ustar00rootroot00000000000000varnishtest "Hit-for-pass, pass, uncacheable, and conditional requests" # hit-for-pass: if the conditions apply for sending a client response # with status 304, then do so, even if the hfp TTL had just expired, # so that the backend fetch was a miss, hence not conditional, and # obtained beresp.status 200. 
server s1 { rxreq expect req.url == "/etag" txresp -hdr "ETag: foo" -bodylen 7 rxreq expect req.url == "/etag" expect req.http.If-None-Match == "foo" txresp -status 304 -hdr "ETag: foo" rxreq expect req.url == "/etag" expect req.http.If-None-Match == txresp -hdr "ETag: foo" -bodylen 7 close accept rxreq expect req.url == "/lm" txresp -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -bodylen 8 rxreq expect req.url == "/lm" expect req.http.If-Modified-Since == "Thu, 26 Jun 2008 12:00:01 GMT" txresp -status 304 -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" rxreq expect req.url == "/lm" expect req.http.If-Modified-Since == txresp -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -bodylen 8 } -start varnish v1 -vcl+backend { sub vcl_miss { set req.http.X-Cache = "MISS"; } sub vcl_pass { set req.http.X-Cache = "PASS"; } sub vcl_backend_response { return (pass(1s)); } sub vcl_deliver { set resp.http.X-Cache = req.http.X-Cache; } } -start client c1 { txreq -url "/etag" rxresp expect resp.status == 200 expect resp.bodylen == 7 expect resp.http.ETag == "foo" expect resp.http.X-Cache == "MISS" txreq -url "/etag" -hdr "If-None-Match: foo" rxresp expect resp.status == 304 expect resp.bodylen == 0 expect resp.http.ETag == "foo" expect resp.http.X-Cache == "PASS" delay 1.1 txreq -url "/etag" -hdr "If-None-Match: foo" rxresp expect resp.status == 304 expect resp.bodylen == 0 expect resp.http.ETag == "foo" expect resp.http.X-Cache == "MISS" txreq -url "/lm" rxresp expect resp.status == 200 expect resp.bodylen == 8 expect resp.http.Last-Modified == "Thu, 26 Jun 2008 12:00:01 GMT" expect resp.http.X-Cache == "MISS" txreq -url "/lm" \ -hdr "If-Modified-Since: Thu, 26 Jun 2008 12:00:01 GMT" rxresp expect resp.status == 304 expect resp.bodylen == 0 expect resp.http.Last-Modified == "Thu, 26 Jun 2008 12:00:01 GMT" expect resp.http.X-Cache == "PASS" delay 1.1 txreq -url "/lm" \ -hdr "If-Modified-Since: Thu, 26 Jun 2008 12:00:01 GMT" rxresp expect resp.status == 304 expect resp.bodylen == 0 expect resp.http.Last-Modified == "Thu, 26 Jun 2008 12:00:01 GMT" expect resp.http.X-Cache == "MISS" } -run varnish v1 -vsl_catchup # The next two tests are adapted from the former r01206.vtc, reflecting # changes for the recv->pass and beresp.uncacheable (hit-for-miss) cases # after rolling back the fix for #1206, and verifying that they are # consistent with the behavior just tested for hit-for-pass. # recv->pass: requests are always passed, but client response status # is 304 if req.http.{INM,IMS} match resp.http.{ETag,Last-Modified}. 
server s1 { rxreq expect req.url == "/etag2" txresp -hdr "ETag: foo" -bodylen 7 rxreq expect req.url == "/etag2" expect req.http.If-None-Match == "foo" txresp -hdr "ETag: foo" -bodylen 7 close accept rxreq expect req.url == "/lm2" txresp -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -bodylen 8 rxreq expect req.url == "/lm2" expect req.http.If-Modified-Since == "Thu, 26 Jun 2008 12:00:01 GMT" txresp -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -bodylen 8 } -start varnish v1 -vcl+backend { sub vcl_recv { return(pass); } sub vcl_miss { set req.http.X-Cache = "MISS"; } sub vcl_pass { set req.http.X-Cache = "PASS"; } sub vcl_deliver { set resp.http.X-Cache = req.http.X-Cache; } } client c1 { txreq -url "/etag2" rxresp expect resp.status == 200 expect resp.bodylen == 7 expect resp.http.ETag == "foo" expect resp.http.X-Cache == "PASS" txreq -url "/etag2" -hdr "If-None-Match: foo" rxresp expect resp.status == 304 expect resp.bodylen == 0 expect resp.http.ETag == "foo" expect resp.http.X-Cache == "PASS" txreq -url "/lm2" rxresp expect resp.status == 200 expect resp.bodylen == 8 expect resp.http.Last-Modified == "Thu, 26 Jun 2008 12:00:01 GMT" expect resp.http.X-Cache == "PASS" txreq -url "/lm2" \ -hdr "If-Modified-Since: Thu, 26 Jun 2008 12:00:01 GMT" rxresp expect resp.status == 304 expect resp.bodylen == 0 expect resp.http.Last-Modified == "Thu, 26 Jun 2008 12:00:01 GMT" expect resp.http.X-Cache == "PASS" } -run # beresp.uncacheable: requests are always misses, but 304 client # response status depends on req.http.{INM,IMS} and # resp.http.{ETag,Last-Modified}. server s1 { rxreq expect req.url == "/etag3" txresp -hdr "ETag: foo" -bodylen 7 rxreq expect req.url == "/etag3" expect req.http.If-None-Match == txresp -hdr "ETag: foo" -bodylen 7 close accept rxreq expect req.url == "/lm3" txresp -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -bodylen 8 rxreq expect req.url == "/lm3" expect req.http.If-Modified-Since == txresp -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -bodylen 8 } -start varnish v1 -vcl+backend { sub vcl_miss { set req.http.X-Cache = "MISS"; } sub vcl_pass { set req.http.X-Cache = "PASS"; } sub vcl_backend_response { set beresp.uncacheable = true; } sub vcl_deliver { set resp.http.X-Cache = req.http.X-Cache; } } client c1 { txreq -url "/etag3" rxresp expect resp.status == 200 expect resp.bodylen == 7 expect resp.http.ETag == "foo" expect resp.http.X-Cache == "MISS" txreq -url "/etag3" -hdr "If-None-Match: foo" rxresp expect resp.status == 304 expect resp.bodylen == 0 expect resp.http.ETag == "foo" expect resp.http.X-Cache == "MISS" txreq -url "/lm3" rxresp expect resp.status == 200 expect resp.bodylen == 8 expect resp.http.Last-Modified == "Thu, 26 Jun 2008 12:00:01 GMT" expect resp.http.X-Cache == "MISS" txreq -url "/lm3" \ -hdr "If-Modified-Since: Thu, 26 Jun 2008 12:00:01 GMT" rxresp expect resp.status == 304 expect resp.bodylen == 0 expect resp.http.Last-Modified == "Thu, 26 Jun 2008 12:00:01 GMT" expect resp.http.X-Cache == "MISS" } -run varnish-7.5.0/bin/varnishtest/tests/r02069.vtc000066400000000000000000000007741457605730600210660ustar00rootroot00000000000000varnishtest "Probe response without a reason" server s1 -repeat 20 { rxreq send "HTTP/1.1 200\r\n\r\n" } -start varnish v1 -vcl { import std; backend s1 { .host = "${s1_addr}"; .port = "${s1_port}"; .probe = { .initial = 0; .window = 5; .threshold = 5; .interval = 100ms; } } sub vcl_recv { if (std.healthy(req.backend_hint)) { return (synth(200)); } else { return (synth(500)); } } } -start 
delay 1 client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r02084.vtc000066400000000000000000000010641457605730600210540ustar00rootroot00000000000000varnishtest "REAL - INT types" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import std; sub vcl_deliver { set resp.http.add = 10.0 + std.integer(req.http.foo, 0); set resp.http.sub = 10.0 - std.integer(req.http.foo, 0); set resp.http.mul = 10.0 * std.integer(req.http.foo, 0); set resp.http.div = 10.0 / std.integer(req.http.foo, 0); } } -start client c1 { txreq -hdr "foo: 3" rxresp expect resp.http.add == 13.000 expect resp.http.sub == 7.000 expect resp.http.mul == 30.000 expect resp.http.div == 3.333 } -run varnish-7.5.0/bin/varnishtest/tests/r02105.vtc000066400000000000000000000010331457605730600210420ustar00rootroot00000000000000varnishtest "Always consume the request body in bgfetch" server s1 { rxreq expect req.bodylen == 0 txresp rxreq expect req.bodylen == 0 txresp } -start # -cli because accept_filter may not be supported varnish v1 -cli "param.set accept_filter off" varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 0.5s; } } -start client c1 { txreq -bodylen 10 rxresp expect resp.status == 200 delay 1 txreq -bodylen 10 rxresp expect resp.status == 200 txreq -bodylen 10 rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r02135.vtc000066400000000000000000000007421457605730600210530ustar00rootroot00000000000000varnishtest "#2135: fetch retry logic" barrier b1 cond 2 server s1 { rxreq txresp rxreq expect req.url == "/foo" close accept rxreq expect req.url == "/foo" barrier b1 sync close expect_close } -start varnish v1 -vcl+backend { sub vcl_backend_error { set beresp.http.url = bereq.url; } } -start client c1 { txreq rxresp expect resp.status == "200" txreq -url "/foo" barrier b1 sync rxresp expect resp.status == "503" expect resp.http.url == "/foo" } -run varnish-7.5.0/bin/varnishtest/tests/r02142.vtc000066400000000000000000000005411457605730600210460ustar00rootroot00000000000000varnishtest "Compare IP addresses" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_deliver { set resp.http.exact-match = (client.ip == remote.ip); set resp.http.loose-match = (client.ip == server.ip); } } -start client c1 { txreq rxresp expect resp.http.exact-match == true expect resp.http.loose-match == true } -run varnish-7.5.0/bin/varnishtest/tests/r02148.vtc000066400000000000000000000005741457605730600210620ustar00rootroot00000000000000varnishtest "Whitespace after colon is optional" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start client c1 { txreq -hdr "Host:qux" -hdr "Authorization:Basic Zm9vOmJhcg==" rxresp } -run delay 1 shell -expect "foo qux" \ {varnishncsa -d -n ${v1_name} -F "%u %{Host}i"} shell -expect "GET http://qux/ HTTP/1.1" \ {varnishncsa -d -n ${v1_name} -F "%r"} varnish-7.5.0/bin/varnishtest/tests/r02157.vtc000066400000000000000000000012751457605730600210610ustar00rootroot00000000000000varnishtest "Long vcl/backend names" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start # 64 chars vlc name, 64 chars backend name varnish v1 -cliok { vcl.inline abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789 "vcl 4.0; backend abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789 {.host = \"${localhost}\"; .port = \"${s1_port}\";} " } # 127 chars vlc name, 1 char backend name varnish v1 -cliok { vcl.inline 
abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef012345678 "vcl 4.0; backend a {.host = \"${localhost}\"; .port = \"${s1_port}\";} " } client c1 { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r02175.vtc000066400000000000000000000011361457605730600210550ustar00rootroot00000000000000varnishtest "leak backend" server s1 { rxreq txresp -hdr "Leak: no" } -start server s2 { rxreq txresp -hdr "Leak: yes" } -start varnish v1 -vcl { backend s1 { .host="${s1_addr}"; .port="${s1_port}"; } sub vcl_deliver { set resp.http.Label = "yes"; } } varnish v1 -cli "vcl.label lbl vcl1" varnish v1 -vcl { backend s2 { .host="${s2_addr}"; .port="${s2_port}"; } sub vcl_recv { set req.backend_hint = s2; return (vcl(lbl)); } sub vcl_deliver { set resp.http.Label = "no"; } } -start client c1 { txreq rxresp expect resp.http.Label == yes expect resp.http.Leak == no } -run varnish-7.5.0/bin/varnishtest/tests/r02177.vtc000066400000000000000000000020151457605730600210540ustar00rootroot00000000000000varnishtest "restart then switch to label" server s1 { rxreq txresp } -start server s2 { rxreq txresp -status 404 } -start varnish v1 -vcl { backend s1 { .host="${s1_addr}"; .port="${s1_port}"; } } varnish v1 -cli "vcl.label lbl1 vcl1" varnish v1 -vcl { backend s1 { .host="${s1_addr}"; .port="${s1_port}"; } sub vcl_recv { return (vcl(lbl1)); } } varnish v1 -cli "vcl.label lbl2 vcl2" varnish v1 -vcl { backend s2 { .host="${s2_addr}"; .port="${s2_port}"; } sub vcl_recv { if (req.restarts > 0) { return (vcl(lbl1)); } if (req.http.restart) { return (vcl(lbl2)); } } sub vcl_miss { return (restart); } } varnish v1 -cliok "vcl.list" varnish v1 -start varnish v1 -cliok "vcl.list" logexpect l1 -v v1 -g raw { expect * * VCL_Error "Illegal return.vcl.: Not after restarts" expect * * VCL_Error "Illegal return.vcl.: Only from active VCL" } -start client c1 { txreq rxresp expect resp.status == 503 txreq -hdr "restart: yes" rxresp expect resp.status == 503 } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r02219.vtc000066400000000000000000000014111457605730600210500ustar00rootroot00000000000000varnishtest "#2219: PROXY tight workspace handling" server s1 { rxreq txresp rxreq txresp rxreq txresp } -start varnish v1 -arg "-p workspace_client=9k" \ -arg "-p vsl_buffer=4k" \ -proto PROXY \ -vcl+backend { import std; import vtc; sub vcl_recv { std.log(vtc.workspace_free(client)); return (pass); } } -start client c1 { send "PROXY TCP4 127.0.0.1 127.0.0.1 1111 2222\r\nGET /${string,repeat,736,A} HTTP/1.1\r\n\r\n" rxresp } -run client c2 { # UNSPEC proto sendhex { 0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a 21 00 00 00 47 45 54 20 2f ${string,repeat,732,"42 "} 20 48 54 54 50 2f 31 2e 31 0d 0a 0d 0a } rxresp } -run client c3 { send "PROXY TCP4 127.0.0.1 127.0.0.1 1111 2222\r\nGET /${string,repeat,740,C} HTTP/1.1\r\n\r\n" rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r02233.vtc000066400000000000000000000011151457605730600210450ustar00rootroot00000000000000varnishtest "Fail earlier if we cannot fit the query string" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import std; import vtc; sub vcl_recv { vtc.workspace_alloc(client, -92); if (std.querysort(req.url) == req.url) { std.log("querysort failed"); } } } -start logexpect l1 -v v1 { expect * * VCL_Log "querysort failed" } -start client c1 { txreq -url /?a=2&a=1&a=1&a=1&a=1&a=1&a=1&a=1&a=1&a=1&a=1&a=1&a=1 rxresp expect resp.status == 500 } -run logexpect l1 -wait varnish v1 -expect client_resp_500 == 1 
varnish v1 -expect ws_client_overflow == 1 varnish-7.5.0/bin/varnishtest/tests/r02258.vtc000066400000000000000000000016141457605730600210600ustar00rootroot00000000000000varnishtest "Streaming range premature finish" server s1 { rxreq txresp -nolen -hdr "Content-length: 9" delay 1 send "BLA" delay .4 send "BLA" delay .3 send "BL" delay .3 } -start varnish v1 -vcl+backend { } -start logexpect l1 -v v1 -g session { expect 0 1000 Begin sess expect * = SessClose RANGE_SHORT } -start client c1 { txreq -hdr "range: bytes=0-16" rxresp -no_obj expect resp.status == 206 expect resp.http.content-length == 9 recv 8 expect_close } -run varnish v1 -expect MAIN.sc_range_short == 1 logexpect l1 -wait delay .3 varnish v1 -cliok "param.set feature +http2" server s1 -start client c2 { stream 1 { txreq -hdr "range" "bytes=0-16" rxhdrs expect resp.status == 206 expect resp.http.content-length == 9 rxdata -some 3 expect resp.body == BLABLABL rxrst expect rst.err == INTERNAL_ERROR } -run } -run varnish v1 -expect MAIN.sc_range_short == 1 varnish-7.5.0/bin/varnishtest/tests/r02262.vtc000066400000000000000000000002601457605730600210470ustar00rootroot00000000000000varnishtest "varnishtest -p" shell { cat >_.vtc <= 1 varnish v1 -expect busy_wakeup >= 1 varnish-7.5.0/bin/varnishtest/tests/r02270.vtc000066400000000000000000000006531457605730600210540ustar00rootroot00000000000000varnishtest "Test that never used VCLs go cold automatically" server s1 { rxreq txresp } -start varnish v1 -arg "-p vcl_cooldown=1" -vcl+backend { } -start varnish v1 -expect VBE.vcl1.s1.happy >= 0 varnish v1 -cliok {vcl.inline vcl2 "vcl 4.0; backend s1 {.host = \"${s1_addr}\"; .port = \"${s1_port}\"; }"} varnish v1 -expect VBE.vcl2.s1.happy >= 0 delay 5 varnish v1 -vsc VBE.* varnish v1 -expect !VBE.vcl2.s1.happy varnish-7.5.0/bin/varnishtest/tests/r02275.vtc000066400000000000000000000027011457605730600210550ustar00rootroot00000000000000varnishtest "Chunked with no space for iov's on workspace" # For this test to work with iovecs on the thread workspace (where # they belong!), we would need to circumvent the very sensible vrt # check that vcl make no permanent reservations on the thread # workspace (see vcl_call_method()). # # One possible way would be to push a VDP just for this. # # For now, we consider this issue low priority because with v1l living # on the thread workspace, the case for hitting #2275 again is as # exotic as can be. # # The values used in this test have been carefully worked out and # tested with v1l on the session workspace, so they should work 1:1 on # the thread workspace. 
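# (editor's note, not in the original file) The "feature cmd false" directive
# that follows can never succeed, so this test is skipped unconditionally,
# consistent with the low-priority assessment in the comment above.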
feature cmd false server s1 -repeat 2 { rxreq txresp -nolen -hdr "Transfer-encoding: chunked" delay 1 chunkedlen 10 delay .1 chunkedlen 10 delay .1 chunkedlen 0 } -start varnish v1 -vcl+backend { import vtc; sub vcl_deliver { # struct v1l is 13 - 15 pointer-sizes, # an iovec should be two pointer-sizes # # we want to hit the case where we got just not # enough space for three iovecs if (req.url == "/1") { vtc.workspace_alloc(thread, -1 * vtc.typesize("p") * (13 + 4)); } else { vtc.workspace_alloc(thread, -1 * vtc.typesize("p") * (15 + 6)); } set resp.http.foo = vtc.workspace_free(thread); } } -start client c1 { txreq -url /1 rxresp expect resp.status == 500 } -run client c1 { txreq -url /2 rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r02291.vtc000066400000000000000000000012531457605730600210540ustar00rootroot00000000000000varnishtest "Collect cookie headers in HTTP/2" server s1 -repeat 2 { rxreq txresp } -start varnish v1 -vcl+backend { } -start varnish v1 -cliok "param.set feature +http2" logexpect l1 -v v1 { expect * * BereqProtocol HTTP/1.1 expect * = BereqHeader "Cookie: user=alice" expect * = BereqHeader "Cookie: peer=bob" expect * * BereqProtocol HTTP/2.0 expect * = BereqHeader "cookie: user=alice; peer=bob" } -start client c1 { txreq -hdr "Cookie: user=alice" -hdr "Cookie: peer=bob" rxresp expect resp.status == 200 } -run client c2 { stream 1 { txreq -hdr cookie user=alice -hdr cookie peer=bob rxresp expect resp.status == 200 } -run } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r02295.vtc.disabled000066400000000000000000000015771457605730600226370ustar00rootroot00000000000000varnishtest "Test cooled dynamic backend clean up" # This test is disabled because it needs a timeout of 80 seconds (-t80 to # varnishtest). This is because the cooled backend timeout in varnish core # is hard coded to 60 seconds. 
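# (editor's note, not in the original file) s1 below holds its response for 70
# seconds, longer than the hard-coded 60 second cooldown mentioned above; the
# dynamic backend therefore has to survive being cooled while the fetch to it
# is still in flight, which is what this disabled test exercises.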
server s1 { rxreq delay 70 txresp } -start varnish v1 -arg "-p cli_timeout=2 -p first_byte_timeout=80" -vcl { import debug; backend dummy { .host = "${bad_backend}"; } sub vcl_init { new s1 = debug.dyn("${s1_addr}", "${s1_port}"); } sub vcl_recv { if (req.url == "/refresh") { s1.refresh("${s1_addr}", "${s1_port}"); return (synth(200, "OK")); } } sub vcl_backend_fetch { set bereq.backend = s1.backend(); } } -start client c1 { timeout 120 txreq rxresp expect resp.status == 200 } -start delay 1 client c2 { txreq -url /refresh rxresp expect resp.status == 200 } -run delay 61 varnish v1 -cliok "ping" client c1 -wait varnish-7.5.0/bin/varnishtest/tests/r02300.vtc000066400000000000000000000007761457605730600210540ustar00rootroot00000000000000varnishtest "ESI Cookie" server s1 { fatal rxreq expect req.http.cookie == "Foo=Bar; B=C" txresp -body "" rxreq expect req.http.cookie == "Foo=Bar; B=C" txresp } -start varnish v1 -cliok {param.set feature +http2} varnish v1 -cliok {param.set debug +syncvsl} varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; } } -start client c1 { stream 1 { txreq -url "/" -hdr cookie "Foo=Bar" -hdr cookie "B=C" rxresp } -start } -run varnish-7.5.0/bin/varnishtest/tests/r02305.vtc000066400000000000000000000022221457605730600210450ustar00rootroot00000000000000varnishtest "#2305: h/2 reembark with a request body" barrier b1 cond 2 barrier b2 cond 2 barrier b3 cond 2 server s1 { rxreq expect req.url == "/" barrier b1 sync barrier b2 sync txresp } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set debug +waitinglist" varnish v1 -vcl+backend { sub vcl_recv { return (hash); } } -start client c1 { stream 1 { txreq rxresp expect resp.status == 200 barrier b3 sync } -start stream 3 { barrier b1 sync txreq -req POST -body "foo" barrier b2 sync barrier b3 sync rxresp expect resp.status == 200 } -run } -run client c2 { stream 1 { txreq -hdr "content-length" "23" } -run stream 0 { rxgoaway expect goaway.err == PROTOCOL_ERROR expect goaway.laststream == 1 } -run } -run # Allow content-length: 0 client c2 { stream 1 { txreq -hdr "content-length" "0" rxresp expect resp.status == 200 } -run } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 varnish-7.5.0/bin/varnishtest/tests/r02310.vtc000066400000000000000000000013111457605730600210370ustar00rootroot00000000000000varnishtest "#2310: Panic on premature hangup after Upgrade: h2c" barrier b1 cond 2 server s1 { rxreq barrier b1 sync txresp } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok "param.set feature +http2" client c1 { send "GET / HTTP/1.1\r\n" send "Host: foo\r\n" send "Connection: Upgrade, HTTP2-Settings\r\n" send "Upgrade: h2c\r\n" send "HTTP2-Settings: AAMAAABkAARAAAAA\r\n" send "\r\n" rxresp expect resp.status == 101 expect resp.http.upgrade == h2c expect resp.http.connection == Upgrade txpri stream 0 { rxsettings txsettings txsettings -ack rxsettings expect settings.ack == true } -run } -run barrier b1 sync varnish v1 -expect client_req != 0 varnish v1 -wait varnish-7.5.0/bin/varnishtest/tests/r02319.vtc000066400000000000000000000011301457605730600210470ustar00rootroot00000000000000varnishtest "Test that oc->oa_present is cleared when we create a new object" server s1 { rxreq txresp -hdr "Vary: buzz" -nolen -hdr "Content-Length: 10240" \ -body 1024 } -start varnish v1 
-vcl+backend { sub vcl_backend_fetch { set bereq.between_bytes_timeout = 0.001s; } sub vcl_backend_response { set beresp.do_stream = false; } sub vcl_backend_error { set beresp.status = 200; set beresp.ttl = 1h; return (deliver); } } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 } -run client c2 { txreq -url "/" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r02321.vtc000066400000000000000000000011601457605730600210430ustar00rootroot00000000000000varnishtest "Storage name collisions" # intentional collision shell -err -expect "Error: (-s main=default,100m) 'main' is already defined" { exec varnishd -a :0 -f '' -n ${tmpdir} \ -s main=default,10m -s main=default,100m } # pseudo-accidental collision shell -err -expect "Error: (-s s0=default,100m) 's0' is already defined" { exec varnishd -a :0 -f '' -n ${tmpdir} \ -s default,10m -s s0=default,100m } # Transient collision shell -err -expect "Error: (-s Transient=default,100m) 'Transient' is already defined" { exec varnishd -a :0 -f '' -n ${tmpdir} \ -s Transient=default,10m -s Transient=default,100m } varnish-7.5.0/bin/varnishtest/tests/r02325.vtc000066400000000000000000000004121457605730600210460ustar00rootroot00000000000000varnishtest "validate storage identifiers" shell -err -expect {Error: invalid -s name "///"=[...]} { varnishd -a :0 -n ${tmpdir} -F -f '' -s ///=default } shell -err -expect {Error: Empty named -s argument "foo="} { varnishd -a :0 -n ${tmpdir} -F -f '' -s foo= } varnish-7.5.0/bin/varnishtest/tests/r02339.vtc000066400000000000000000000037161457605730600210650ustar00rootroot00000000000000varnishtest "VMOD purge" server s1 -repeat 12 { rxreq txresp } -start varnish v1 -cliok "param.set thread_pools 1" varnish v1 -cliok "param.set vsl_mask +ExpKill" varnish v1 -vcl+backend { import purge; sub vcl_miss { if (req.url == "miss") { purge.hard(); } } sub vcl_hit { if (req.url == "hit") { purge.hard(); } } } -start varnish v1 -cliok "param.set timeout_idle 2" logexpect l0 -v v1 -g raw { expect * 0 ExpKill "EXP_Removed x=1002" } -start logexpect l2 -v v1 -g raw { expect * 1002 Begin "bereq 1001 fetch" } -start logexpect l1 -v v1 { expect * 1003 VCL_call HIT expect 0 = VCL_return deliver expect * 1004 VCL_call MISS expect 0 = VCL_return fetch } -start client c1 { txreq -url hit rxresp expect resp.status == 200 txreq -url hit rxresp expect resp.status == 200 txreq -url miss rxresp expect resp.status == 200 } -run logexpect l0 -wait logexpect l2 -wait varnish v1 -errvcl "Not available in subroutine 'vcl_purge'" { import purge; sub vcl_purge { if (req.url == "purge") { purge.hard(); } } } varnish v1 -errvcl "Not available in subroutine 'vcl_pass'" { import purge; sub vcl_pass { if (req.url == "pass") { purge.hard(); } } } varnish v1 -errvcl "Not available in subroutine 'vcl_deliver'" { import purge; sub vcl_deliver { if (req.url == "deliver") { purge.hard(); } } } varnish v1 -errvcl "Not available in subroutine 'vcl_synth'" { import purge; sub vcl_synth { if (req.url == "synth") { purge.hard(); } } } varnish v1 -errvcl "Not available in subroutine 'vcl_backend_fetch'" { import purge; sub vcl_backend_fetch { if (bereq.url == "fetch") { purge.hard(); } } } varnish v1 -errvcl "Not available in subroutine 'vcl_backend_error'" { import purge; sub vcl_backend_error { if (bereq.url == "error") { purge.hard(); } } } varnish v1 -errvcl "Not available in subroutine 'vcl_backend_response'" { import purge; sub vcl_backend_response { if (bereq.url == "response") { purge.hard(); } } } 
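# Editor's sketch, not a file from the archive: r02339.vtc above only checks
# that purge.hard() is rejected outside vcl_hit and vcl_miss. A minimal,
# hedged illustration of the pattern the test permits, assuming the purge
# vmod bundled with Varnish and using a PURGE request method purely for
# illustration, could look like this:
#
#	import purge;
#
#	sub vcl_recv {
#		if (req.method == "PURGE") {
#			# route the purge through the normal lookup so it
#			# reaches vcl_hit or vcl_miss
#			return (hash);
#		}
#	}
#
#	sub vcl_hit {
#		if (req.method == "PURGE") {
#			# drop the cached object and all its variants
#			purge.hard();
#			return (synth(200, "Purged"));
#		}
#	}
#
#	sub vcl_miss {
#		if (req.method == "PURGE") {
#			purge.hard();
#			return (synth(200, "Purged"));
#		}
#	}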
varnish-7.5.0/bin/varnishtest/tests/r02342.vtc000066400000000000000000000010001457605730600210370ustar00rootroot00000000000000varnishtest "honor vcl_path with varnishd -f" shell { cat >unlikely_file_name.vcl <<-EOF vcl 4.0; backend be { .host = "${bad_backend}"; } EOF } shell -err -expect "Cannot read -f file 'unlikely_file_name.vcl'" { varnishd -F -a :0 -n ${tmpdir}/tmp -f unlikely_file_name.vcl } varnish v1 -arg "-p vcl_path=${tmpdir} -f unlikely_file_name.vcl" -start # Even when loaded from a relative path, show an absolute one varnish v1 -cliexpect "VCL.SHOW .+ ${tmpdir}/unlikely_file_name.vcl" "vcl.show -v boot" varnish-7.5.0/bin/varnishtest/tests/r02351.vtc000066400000000000000000000021001457605730600210410ustar00rootroot00000000000000varnishtest "#2351: :path/:method error handling" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set debug +syncvsl" client c1 { # missing everything stream 1 { txreq -noadd rxrst expect rst.err == PROTOCOL_ERROR } -run # missing :path stream 3 { txreq -noadd -hdr ":authority" "example.com" \ -hdr ":method" "GET" -hdr ":scheme" "http" rxrst expect rst.err == PROTOCOL_ERROR } -run # missing :method stream 5 { txreq -noadd -hdr ":authority" "example.com" \ -hdr ":path" "/foo" -hdr ":scheme" "http" rxrst expect rst.err == PROTOCOL_ERROR } -run # Duplicate :path stream 7 { txreq -noadd -hdr ":path" "/" -hdr ":path" "/foo" \ -hdr ":method" "GET" -hdr ":authority" "example.com" \ -hdr ":scheme" "http" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 varnish-7.5.0/bin/varnishtest/tests/r02367.vtc000066400000000000000000000005131457605730600210560ustar00rootroot00000000000000varnishtest "POST and return(vcl(..))" server s1 { rxreq expect req.bodylen == 4 txresp } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok "vcl.label vclA vcl1" varnish v1 -vcl+backend { sub vcl_recv { return (vcl(vclA)); } } client c1 { txreq -req POST -body "asdf" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r02372.vtc000066400000000000000000000011621457605730600210530ustar00rootroot00000000000000varnishtest "Count purges when there are many variants" server s1 -repeat 72 -keepalive { rxreq txresp -hdr "Vary: x-varnish" } -start varnish v1 -arg "-p workspace_thread=512" -vcl+backend { sub vcl_recv { if (req.method == "PURGE") { return (purge); } } } -start logexpect l1 -v v1 { expect * * Notice "High number of variants .71." 
} -start client c1 -repeat 72 -keepalive { txreq rxresp } -run logexpect l1 -wait client c2 { txreq -req "PURGE" rxresp } -run varnish v1 -expect n_lru_nuked == 0 varnish v1 -expect cache_hit == 0 varnish v1 -expect n_purges == 1 varnish v1 -expect n_obj_purged == 72 varnish-7.5.0/bin/varnishtest/tests/r02387.vtc000066400000000000000000000011071457605730600210600ustar00rootroot00000000000000varnishtest "2387: Crash on out of order header blocks" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set debug +syncvsl" barrier b1 cond 2 barrier b2 cond 2 client c1 { stream 1 { txreq -nohdrend barrier b2 sync barrier b1 sync txcont -hdr "bar" "foo" } -start stream 3 { barrier b2 sync txreq -nohdrend barrier b1 sync txcont -hdr "bar" "foo" } -run stream 0 { rxgoaway expect goaway.laststream == "1" expect goaway.err == PROTOCOL_ERROR } -run } -run varnish-7.5.0/bin/varnishtest/tests/r02395.vtc000066400000000000000000000005731457605730600210650ustar00rootroot00000000000000varnishtest "Test between_bytes_timeout works fetching headers" server s1 { rxreq send "HTTP/1.0 " delay 2 } -start varnish v1 -vcl+backend { sub vcl_recv { return (pass); } sub vcl_backend_fetch { set bereq.between_bytes_timeout = 1s; } } -start varnish v1 -cliok "param.set debug +syncvsl" client c1 { txreq rxresp expect resp.status == 503 } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/r02406.vtc000066400000000000000000000011051457605730600210460ustar00rootroot00000000000000varnishtest "Long backend and storage names" varnish v1 -arg "-s acme_example_com_static_assets_default_storage=default" varnish v1 -vcl { backend be_0123456789_0123456789_0123456789_0123456789_0123456789_0123456789_0123456789_0123456789_0123456789_0123456789_0123456789_0123456789 { .host = "${bad_backend}"; } } -start varnish v1 -expect SM?.acme_example_com_static_assets_default_storage.c_req == 0 varnish v1 -expect VBE.vcl1.be_0123456789_0123456789_0123456789_0123456789_0123456789_0123456789_0123456789_0123456789_0123456789_0123456789_0123456789_0123456789.req == 0 varnish-7.5.0/bin/varnishtest/tests/r02413.vtc000066400000000000000000000021161457605730600210470ustar00rootroot00000000000000varnishtest "Test feature trace" server s1 { rxreq txresp } -start varnish v1 -arg "-p feature=+trace" -vcl+backend { sub vcl_deliver { set resp.http.vcl = "vclA"; } } -start varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "vcl.label vclA vcl1" varnish v1 -vcl+backend { sub vcl_recv { if (req.http.vcl == "vcl1") { return (vcl(vclA)); } set req.trace = false; } sub vcl_deliver { set resp.http.vcl = "vcl2"; } } varnish v1 -cliok "vcl.label vclB vcl2" varnish v1 -cliok "vcl.list" # Ensure old VSLs do not confuse l1 varnish v1 -vsl_catchup logexpect l1 -v v1 -g raw { expect * 1001 VCL_call "RECV" expect 0 1001 VCL_trace {^vcl2 \d+ \d+\.\d+\.\d+$} expect * 1001 VCL_call "RECV" expect 0 1001 VCL_trace {^vcl1 \d+ \d+\.\d+\.\d+$} expect * 1002 VCL_call "BACKEND_FETCH" expect 0 1002 VCL_trace {^vcl1 \d+ \d+\.\d+\.\d+$} expect * 1003 VCL_call "DELIVER" expect 0 1003 RespHeader {^vcl: vcl2} } -start client c1 { txreq -hdr "vcl: vcl1" rxresp expect resp.http.vcl == vclA txreq rxresp expect resp.http.vcl == vcl2 } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r02422.vtc000066400000000000000000000044241457605730600210530ustar00rootroot00000000000000varnishtest "long polling and low latency using req.ttl" server s1 { # s1 uses the etag as a version rxreq txresp 
-hdr "Etag: 5" rxreq # wait until the new version is ready delay 1 txresp -hdr "Etag: 6" } -start varnish v1 -cliok "param.set default_grace 0" varnish v1 -vcl+backend { sub vcl_recv { if (req.restarts > 0) { set req.ttl = 1ms; } } sub vcl_hit { if (req.http.If-None-Match == obj.http.ETag) { return (restart); } } sub vcl_deliver { set resp.http.Hit = obj.hits > 0; } } -start # synchronizes the setup of all clients barrier b1 cond 2 # makes c1..cN send their requests simultaneously-ish barrier b2 cond 7 client c0 { # wait for all clients to be ready barrier b1 sync # send a "new client" request (no INM) txreq rxresp expect resp.status == 200 expect resp.http.ETag == 5 expect resp.http.Hit == false # let all other clients send their requests barrier b2 sync } -start client c1 { # new client, immediate hit barrier b2 sync txreq rxresp expect resp.status == 200 expect resp.http.ETag == 5 expect resp.http.Hit == true } -start client c2 { # late client, immediate hit barrier b2 sync txreq -hdr "If-None-Match: 2" rxresp expect resp.status == 200 expect resp.http.ETag == 5 expect resp.http.Hit == true } -start client c3 { # late client, immediate hit barrier b2 sync txreq -hdr "If-None-Match: 4" rxresp expect resp.status == 200 expect resp.http.ETag == 5 expect resp.http.Hit == true } -start client c4 { # up-to-date client, long polling barrier b2 sync txreq -hdr "If-None-Match: 5" rxresp expect resp.status == 200 expect resp.http.ETag == 6 expect resp.http.Hit == false } -start client c5 { # up-to-date client, long polling barrier b2 sync # wait to make sure c4 gets the miss delay 0.2 txreq -hdr "If-None-Match: 5" rxresp expect resp.status == 200 expect resp.http.ETag == 6 expect resp.http.Hit == true } -start client c6 { # up-to-date client, long polling barrier b2 sync # wait to make sure c4 gets the miss delay 0.2 txreq -hdr "If-None-Match: 5" rxresp expect resp.status == 200 expect resp.http.ETag == 6 expect resp.http.Hit == true } -start # start c0 barrier b1 sync client c0 -wait client c1 -wait client c2 -wait client c3 -wait client c4 -wait client c5 -wait client c6 -wait varnish-7.5.0/bin/varnishtest/tests/r02429.vtc000066400000000000000000000005721457605730600210620ustar00rootroot00000000000000varnishtest "#2429 file stevedore error buffer regression test" server s1 { accept } -start varnish v1 -arg "-s Transient=file,${tmpdir}/_.file,10m" -vcl+backend { sub vcl_backend_error { set beresp.body = "foo"; return (deliver); } } -start client c1 { txreq rxresp expect resp.status == 503 expect resp.http.content-length == 3 expect resp.body == "foo" } -run varnish-7.5.0/bin/varnishtest/tests/r02432.vtc000066400000000000000000000007351457605730600210550ustar00rootroot00000000000000varnishtest "label going cold" server s1 { } -start varnish v1 -cliok "param.set vcl_cooldown 1" varnish v1 -vcl+backend { sub vcl_recv { return (synth(200)); } } -start varnish v1 -cliok "vcl.label label1 vcl1" varnish v1 -cliok "vcl.list" varnish v1 -vcl+backend { sub vcl_recv { return (vcl(label1)); } } # let the VCL poker trigger enough times to get # an occasion to update vcl1's state delay 3 client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r02433.vtc000066400000000000000000000016771457605730600210640ustar00rootroot00000000000000varnishtest "label a cold vcl" server s1 { } -start varnish v1 -cliok "param.set vcl_cooldown 1" varnish v1 -vcl+backend { import debug; # can fail a VCL warmup } -start # a dummy vcl2 to freely use vcl1 varnish v1 -vcl+backend { } # cool vcl1 
down delay 4 varnish v1 -cliok ping shell -match "auto +cold +0 +vcl1" { varnishadm -n ${v1_name} vcl.list } # first we try to label a cold VCL in auto state # the magic parameter that fails a VCL warmup in vmod-debug varnish v1 -cliok "param.set max_esi_depth 42" varnish v1 -cliexpect "max_esi_depth is not the answer" "vcl.label label1 vcl1" shell -err { varnishadm -n ${v1_name} vcl.list | grep label1 } # second we try to label VCL in cold state varnish v1 -cliok "vcl.state vcl1 cold" shell -match "cold +cold +0 +vcl1" { varnishadm -n ${v1_name} vcl.list } varnish v1 -cliexpect "set to auto or warm first" "vcl.label label1 vcl1" shell -err { varnishadm -n ${v1_name} vcl.list | grep label1 } varnish-7.5.0/bin/varnishtest/tests/r02451.vtc000066400000000000000000000034111457605730600210500ustar00rootroot00000000000000varnishtest "test PRIV_TASK in vcl_init is per vmod" server s1 { rxreq txresp -body "ech3Ooj" } -start server s2 { rxreq txresp -body "ieQu2qua" } -start server s3 { rxreq txresp -body "xiuFi3Pe" } -start varnish v1 -vcl+backend { import directors; import debug; import std; sub vcl_init { new objx = debug.obj(); new vd = directors.shard(); debug.test_priv_task("something"); debug.test_priv_task("to remember"); objx.test_priv_task(debug.test_priv_task()); vd.add_backend(s1); vd.add_backend(s2); vd.add_backend(s3); vd.reconfigure(replicas=25); std.log("func " + debug.test_priv_task()); std.log("obj " + objx.test_priv_task()); } sub vcl_recv { set req.backend_hint = vd.backend(); return(pass); } } -start logexpect l1 -v v1 -g raw -d 1 { expect 0 0 CLI {^Rd vcl.load} expect 0 * Debug {^test_priv_task.*new} expect 0 * Debug {^test_priv_task.*update} expect 0 * Debug {^test_priv_task.*exists} expect 0 * Debug {^objx.priv_task.*"something to remember".*new} expect 0 * Debug {^test_priv_task.*exists} expect 0 * VCL_Log {^func something to remember} expect 0 * Debug {^objx.priv_task.*"something to remember".$} expect 0 * VCL_Log {^obj something to remember} # string stored in obj priv_task has already been freed expect ? * Debug {^priv_task_fini} expect ? * Debug {^obj_priv_task_fini} expect 0 * Debug {^vcl1: VCL_EVENT_WARM} } -start client c1 { txreq -url /Boo0aixe rxresp expect resp.body == "ech3Ooj" txreq -url /eishoSu2 rxresp expect resp.body == "ieQu2qua" txreq -url /Aunah3uo rxresp expect resp.body == "xiuFi3Pe" } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r02471.vtc000066400000000000000000000023371457605730600210600ustar00rootroot00000000000000varnishtest "test VRT_VCL_(Un)Busy()" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import debug; sub vcl_recv { if (req.url == "/hold") { debug.vcl_prevent_cold(); } else if (req.url == "/release") { debug.vcl_allow_cold(); } return (synth(200)); } } -start varnish v1 -vcl+backend {} varnish v1 -cliok "vcl.state vcl1 cold" # Nothing holds vcl1, so it should go gold. 
varnish v1 -cliexpect "cold cold 0 vcl1" "vcl.list" # Grab hold of vcl1 varnish v1 -cliok "vcl.state vcl1 auto" varnish v1 -cliok "vcl.use vcl1" client c1 { txreq -url "/hold" rxresp } -run # Flush worker threads hold varnish v1 -cliok "vcl.use vcl2" client c1 { txreq rxresp } -run # There should still be a single busy hold on vcl1 varnish v1 -cliok "vcl.state vcl1 cold" varnish v1 -cliexpect "cold busy [12] vcl1" "vcl.list" # Release hold on vcl1 varnish v1 -cliok "vcl.state vcl1 auto" varnish v1 -cliok "vcl.use vcl1" client c1 { txreq -url "/release" rxresp } -run # Flush worker threads hold varnish v1 -cliok "vcl.use vcl2" client c1 { txreq rxresp } -run # Nothing holds vcl1, so it should go cold. varnish v1 -cliok "vcl.state vcl1 cold" varnish v1 -cliexpect "cold .... [01] vcl1" "vcl.list" varnish-7.5.0/bin/varnishtest/tests/r02488.vtc000066400000000000000000000015171457605730600210670ustar00rootroot00000000000000varnishtest "Test vmod_blob wb_create for empty workspace" varnish v1 -vcl { import blob; import vtc; backend b1 {.host = "${bad_backend}";} sub vcl_init { new bl = blob.blob(HEX, "deadbeef"); } sub vcl_recv { return (synth(200)); } sub vcl_synth { if (req.url == "/empty") { vtc.workspace_alloc(client, -1); } set resp.http.foo = blob.encode(encoding=HEX, blob=bl.get()); set resp.http.bar = "bar"; } } -start logexpect l1 -v v1 -g raw { expect * 1002 VCL_call {^SYNTH$} expect 0 = VCL_Error {^vmod blob error: cannot encode, out of space$} expect 0 = LostHeader {^foo:$} expect 0 = VCL_return {^fail$} } -start client c1 { txreq rxresp expect resp.http.foo == "deadbeef" expect resp.http.bar == "bar" txreq -url "/empty" rxresp expect resp.status == 500 expect_close } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r02494.vtc000066400000000000000000000007331457605730600210630ustar00rootroot00000000000000varnishtest "vcl_back_backend_error default storage" server s1 { rxreq txresp -nolen -hdr "Content-Length: 10240" -body 1024 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } sub vcl_backend_error { set beresp.status = 200; set beresp.ttl = 1h; return (deliver); } } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 } -run client c2 { txreq -url "/" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r02527.vtc000066400000000000000000000005761457605730600210650ustar00rootroot00000000000000varnishtest "Test the n_lru_moved counter" server s1 { rxreq txresp } -start varnish v1 -arg "-p lru_interval=1" -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } } -start client c1 { txreq rxresp } -run varnish v1 -expect MAIN.n_lru_moved == 0 delay 2 client c1 { txreq rxresp txreq rxresp } -run varnish v1 -expect MAIN.n_lru_moved == 1 varnish-7.5.0/bin/varnishtest/tests/r02530.vtc000066400000000000000000000037701457605730600210560ustar00rootroot00000000000000varnishtest "Don't test gunzip for partial responses" # The use of ETags is only here to ensure they aren't accidentally weakened. 
server s1 { # pass'ed range request rxreq expect req.url == "/pass" expect req.http.Accept-Encoding == gzip expect req.http.Range == bytes=0-1 txresp -status 206 -nolen \ -hdr {ETag: "abc123"} \ -hdr "Accept-Ranges: bytes" \ -hdr "Content-Encoding: gzip" \ -hdr "Content-Length: 2" \ -hdr "Content-Range: bytes 0-1/*" sendhex 1f8b # unattended partial response rxreq expect req.url == "/miss" expect req.http.Accept-Encoding == gzip expect req.http.Range == txresp -status 206 -nolen \ -hdr {ETag: "123abc"} \ -hdr "Accept-Ranges: bytes" \ -hdr "Content-Encoding: gzip" \ -hdr "Content-Length: 2" \ -hdr "Content-Range: bytes 0-1/*" sendhex 1f8b } -start varnish v1 -cliok "param.set http_range_support off" varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/pass") { return (pass); } } } -start client c1 { txreq -url "/pass" -hdr "Accept-Encoding: gzip" -hdr "Range: bytes=0-1" rxresp expect resp.status == 206 expect resp.http.Etag == {"abc123"} expect resp.http.Accept-Ranges == bytes expect resp.http.Content-Range ~ "^bytes 0-1/" expect resp.http.Content-Length == 2 expect resp.bodylen == 2 } -run varnish v1 -expect n_gzip == 0 varnish v1 -expect n_gunzip == 0 varnish v1 -expect n_test_gunzip == 0 varnish v1 -expect SM?.s0.c_req == 0 varnish v1 -expect SM?.Transient.c_req == 2 # Invalid partial response, also in Transient client c1 { txreq -url "/miss" -hdr "Accept-Encoding: gzip" -hdr "Range: bytes=0-1" rxresp expect resp.status == 206 expect resp.http.Etag == {"123abc"} expect resp.http.Accept-Ranges == bytes expect resp.http.Content-Range ~ "^bytes 0-1/" expect resp.http.Content-Length == 2 expect resp.bodylen == 2 } -run varnish v1 -expect n_gzip == 0 varnish v1 -expect n_gunzip == 0 varnish v1 -expect n_test_gunzip == 0 varnish v1 -expect SM?.s0.c_req == 0 varnish v1 -expect SM?.Transient.c_req == 4 varnish-7.5.0/bin/varnishtest/tests/r02539.vtc000066400000000000000000000006741457605730600210670ustar00rootroot00000000000000varnishtest "2359: H/2 Avoid RxStuff with a non-reserved WS" barrier b1 sock 2 server s1 { rxreq txresp } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set timeout_idle 1" varnish v1 -vcl+backend { import vtc; sub vcl_deliver { vtc.barrier_sync("${b1_sock}"); } } -start client c1 { stream 1 { txreq rxresp expect resp.status == 200 } -start delay 3 barrier b1 sync stream 1 -wait } -run varnish-7.5.0/bin/varnishtest/tests/r02554.vtc000066400000000000000000000017611457605730600210620ustar00rootroot00000000000000varnishtest "VFP'ing partial responses" server s1 -repeat 4 { rxreq txresp -status 206 -body "foobar" } -start varnish v1 -vcl+backend { sub vcl_recv { return(pass); } sub vcl_backend_response { if (bereq.url == "/a") { set beresp.do_esi = True; } if (bereq.url == "/b") { set beresp.do_gzip = True; } if (bereq.url == "/c") { set beresp.do_esi = True; set beresp.do_gzip = True; set beresp.status = 1206; } } } -start client c1 { txreq -url /a rxresp expect resp.status == 503 } -run varnish v1 -vsl_catchup client c1 { txreq -url /b -hdr "Accept-Encoding: gzip" rxresp expect resp.bodylen == 34 expect resp.status == 206 } -run varnish v1 -vsl_catchup client c1 { txreq -url /b rxresp expect resp.bodylen == 34 expect resp.status == 206 } -run varnish v1 -vsl_catchup client c1 { txreq -url /c -hdr "Accept-Encoding: gzip" rxresp expect resp.status == 206 expect resp.bodylen == 32 gunzip expect resp.bodylen == 3 } -run 
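# (editor's note, not in the original file) Reading the expectations in
# r02554.vtc above: enabling ESI on a 206 response ends in a 503 for the
# client, whereas gzipping a 206 is allowed and the partial status is kept;
# the VCL value 1206 is delivered to the client as status 206.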
varnish-7.5.0/bin/varnishtest/tests/r02555.vtc000066400000000000000000000026531457605730600210640ustar00rootroot00000000000000varnishtest "Expiry during processing" # When a object runs out of ttl+grace during processing of a # request, we want the calculation in the builtin VCL to match # the calculation of hit+grace in the C code. barrier b1 sock 2 barrier b2 sock 2 server s1 { rxreq expect req.url == "/1" txresp # If we get a second request to /1 here, it will be because a # hit was became a miss in the builtin sub vcl_hit or because # we never got to vcl_hit. We want a hit and a deliver. rxreq expect req.url == "/2" txresp } -start varnish v1 -vcl+backend { import std; import vtc; sub vcl_recv { if (req.http.Sleep) { # This will make the object expire while inside this VCL sub vtc.barrier_sync("${b1_sock}"); vtc.barrier_sync("${b2_sock}"); } # Update the last timestamp inside varnish std.timestamp("T"); } sub vcl_hit { set req.http.X-was-hit = "true"; # no return here. Will the builtin VCL take us to vcl_miss? } sub vcl_miss { set req.http.X-was-miss = "true"; } sub vcl_backend_response { # Very little ttl, a lot of keep set beresp.ttl = 1s; set beresp.grace = 0s; set beresp.keep = 1w; } } -start client c1 { delay .5 txreq -url "/1" rxresp txreq -url "/1" -hdr "Sleep: true" rxresp # Final request to verify that the right amount of requests got to the backend txreq -url "/2" rxresp } -start # help for sleeping inside of vcl barrier b1 sync delay 1.5 barrier b2 sync client c1 -wait varnish-7.5.0/bin/varnishtest/tests/r02589.vtc000066400000000000000000000012051457605730600210630ustar00rootroot00000000000000varnishtest "workspace overrun on h/2 delivery" server s1 { rxreq txresp -hdrlen Foo 500 } -start varnish v1 -vcl+backend { import vtc; sub vcl_deliver { vtc.workspace_alloc(client, -50); } } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set vsl_mask +H2TxHdr" varnish v1 -cliok "param.set vsl_mask +H2TxBody" client c1 { stream 1 { txreq rxresp expect resp.status == 500 expect resp.http.content-length == 0 expect resp.http.server == "Varnish" } -run } -run varnish v1 -expect client_resp_500 == 1 varnish v1 -expect ws_client_overflow == 1 varnish-7.5.0/bin/varnishtest/tests/r02617.vtc000066400000000000000000000003631457605730600210570ustar00rootroot00000000000000varnishtest "Test undefined-behaviour: signed integer overflow" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { } -start client c1 { txreq -hdr "Content-Length: 9223372036854775808" rxresp expect resp.status == 400 } -run varnish-7.5.0/bin/varnishtest/tests/r02618.vtc000066400000000000000000000033121457605730600210550ustar00rootroot00000000000000varnishtest "sweep through tight client workspace conditions in deliver" server s1 { rxreq txresp -hdr "Cache-Control: mag-age=3600" -bodylen 1024 } -start varnish v1 -arg "-a ${tmpdir}/v1.sock" -vcl+backend { import debug; import vtc; sub vcl_recv { return (hash); } sub vcl_deliver { set resp.filters += " debug.pedantic"; if (req.method == "GET") { vtc.workspace_alloc(client, -1 * (req.xid - 1001)); } else if (req.method == "HEAD") { vtc.workspace_alloc(client, -1 * (req.xid - 1202)); } } } -start varnish v1 -cliok "param.set vsl_mask -ReqHeader,-ReqUnset" varnish v1 -cliok "param.set vsl_mask -ReqProtocol" varnish v1 -cliok "param.set vsl_mask -RespHeader,-RespUnset" varnish v1 -cliok "param.set vsl_mask -RespReason,-RespProtocol" varnish v1 -cliok "param.set vsl_mask -Timestamp,-Debug" varnish v1 
-cliok "param.set vsl_mask -VCL_call,-VCL_return,-Hit" logexpect l1 -v v1 -g raw { expect * * VCL_Error "Attempted negative WS allocation" expect * * Error "Failure to push processors" expect * * VCL_Error "Out of workspace for VDP_STATE_MAGIC" expect * * Error "Failure to push processors" expect * * Error "Failure to push v1d processor" expect * * VCL_Error "Attempted negative WS allocation" expect * * Error "Failure to push processors" expect * * VCL_Error "Out of workspace for VDP_STATE_MAGIC" expect * * Error "Failure to push processors" } -start # some responses will fail (503), some won't. All we care # about here is the fact that we don't panic client c1 -connect "${tmpdir}/v1.sock" -repeat 100 { non_fatal txreq -url "/" rxresp } -run client c1 -connect "${tmpdir}/v1.sock" -repeat 100 { non_fatal txreq -url "/" -method "HEAD" rxresp } -run logexpect l1 -waitvarnish-7.5.0/bin/varnishtest/tests/r02633.vtc000066400000000000000000000005221457605730600210520ustar00rootroot00000000000000varnishtest "For HTTP/1.1 requests, Host is mandatory" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { } -start client c1 { txreq -proto HTTP/1.1 rxresp expect resp.status == 200 txreq -proto HTTP/1.1 -nohost rxresp expect resp.status == 400 txreq -proto HTTP/1.0 -nohost rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r02645.vtc000066400000000000000000000010311457605730600210510ustar00rootroot00000000000000varnishtest "sweep through tight backend workspace conditions" server s1 -repeat 100 { rxreq send "HTTP/1.1 200 OK\r\nTransfer-encoding: chunked\r\n\r\n00000004\r\n1234\r\n00000000\r\n\r\n" } -start varnish v1 -vcl+backend { import vtc; sub vcl_recv { return (pass); } sub vcl_backend_fetch { vtc.workspace_alloc(backend, -4 * (bereq.xid - 1000) / 2); } } -start client c1 -repeat 100 { txreq -url "/" # some responses will fail (503), some won't. 
All we care # about here is the fact that we don't panic rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r02646.vtc000066400000000000000000000004761457605730600210660ustar00rootroot00000000000000varnishtest "#2646: VUT should fail gracefully when removing a pid file fails" varnish v1 -vcl { backend be { .host = "${bad_backend}"; } } -start process p1 { exec varnishncsa -n ${v1_name} -P ${tmpdir}/ncsa.pid -w ${tmpdir}/ncsa.log } -start delay 1 shell "rm -f ${tmpdir}/ncsa.pid" process p1 -stop -wait varnish-7.5.0/bin/varnishtest/tests/r02647.vtc000066400000000000000000000002471457605730600210630ustar00rootroot00000000000000varnishtest "empty cli command" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start varnish v1 -clierr 100 "-" client c1 { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r02649.vtc000066400000000000000000000005521457605730600210640ustar00rootroot00000000000000varnishtest "Cleanly stop a VUT app via vtc process -stop" varnish v1 -vcl { backend be { .host = "${bad_backend}"; } } -start process p1 { exec varnishncsa -n ${v1_name} -P ${tmpdir}/ncsa.pid -w ${tmpdir}/ncsa.log } -start delay 1 process p1 -expect-exit 0 -stop -wait # Expect empty stderr output shell -match {^[ ]*0\b} "wc -c ${tmpdir}/p1/stderr" varnish-7.5.0/bin/varnishtest/tests/r02679.vtc000066400000000000000000000011771457605730600210730ustar00rootroot00000000000000varnishtest "#2679: H/2 rxbody vfp drops data" server s1 { rxreq expect req.http.content-length == "31469" expect req.bodylen == 31469 txresp } -start varnish v1 -vcl+backend { import std; sub vcl_recv { std.cache_req_body(100KB); } } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set h2_rx_window_low_water 65535" varnish v1 -cliok "param.reset h2_initial_window_size" client c1 { stream 1 { txreq -req POST -hdr "content-length" "31469" -nostrend txdata -datalen 1550 -nostrend txdata -datalen 16000 -nostrend txdata -datalen 13919 rxresp expect resp.status == 200 } -run } -run varnish-7.5.0/bin/varnishtest/tests/r02686.vtc000066400000000000000000000011221457605730600210570ustar00rootroot00000000000000varnishtest "varnishtop -1/-d" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { } -start client c1 { txreq rxresp } -run process p1 -dump {varnishtop -n ${v1_name} -d -i ReqMethod} -start process p1 -winsz 30 80 delay 2 process p1 -expect-text 1 1 "list length 1" process p1 -expect-text 1 75 "(EOF)" process p1 -expect-text 3 6 "1.00 ReqMethod GET" process p1 -screen_dump -write q -wait process p2 -dump {varnishtop -n ${v1_name} -1 -i ReqMethod} -start process p2 -winsz 30 80 delay 2 process p2 -expect-text 1 6 "1.00 ReqMethod GET" process p2 -screen_dump -wait varnish-7.5.0/bin/varnishtest/tests/r02690.vtc000066400000000000000000000027421457605730600210630ustar00rootroot00000000000000varnishtest "Check http_max_hdr values" server s1 -repeat 17 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_recv { return(pass); } } -start client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 33} client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 34} client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 35} client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 36} client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 37} client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 38} client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 39} client c1 { txreq 
rxresp } -run varnish v1 -cliok {param.set http_max_hdr 40} client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 41} client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 42} client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 43} client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 44} client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 45} client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 46} client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 47} client c1 { txreq rxresp } -run varnish v1 -cliok {param.set http_max_hdr 48} client c1 { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r02700.vtc000066400000000000000000000014431457605730600210500ustar00rootroot00000000000000varnishtest "#2700: IMS and return (retry)" server s1 { rxreq txresp -hdr {Etag: "foo"} -body "1" rxreq expect req.http.If-None-Match == {"foo"} expect req.http.retries == "0" txresp -status 304 rxreq expect req.http.retries == "1" expect req.http.If-None-Match == {"foo"} txresp -status 304 } -start varnish v1 -vcl+backend { sub vcl_backend_fetch { set bereq.http.retries = bereq.retries; } sub vcl_backend_response { set beresp.ttl = 1ms; set beresp.grace = 0s; set beresp.keep = 1h; if (beresp.was_304 && bereq.retries == 0) { return (retry); } set beresp.http.was-304 = beresp.was_304; } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.http.was-304 == "false" delay 0.1 txreq rxresp expect resp.http.was-304 == "true" } -run varnish-7.5.0/bin/varnishtest/tests/r02702.vtc000066400000000000000000000031241457605730600210500ustar00rootroot00000000000000varnishtest "probes to UDS backends with .proxy_header in [12]" # Since we can only reliably forward IP addresses via PROXY with # Varnish, we can't use the trick from o00002.vtc (use a Varnish # instance listening for PROXY as a backend) to verify PROXYv1 (but # the PROXY header can be viewed in the test log). # We just verify that the probe is good (and that Varnish doesn't # crash with WRONG). server s1 -listen "${tmpdir}/s1.sock" { rxreq txresp } -start # For PROXYv1 with a UDS backend, we send PROXY UNKNOWN in the probe. varnish v1 -vcl { backend s1 { .path = "${s1_sock}"; .proxy_header = 1; .probe = { .window = 1; .threshold = 1; .interval = 0.5s; } } } -start delay 1 varnish v1 -cliexpect "vcl1.s1[ ]+probe[ ]+1/1[ ]+healthy" backend.list # For PROXYv2, we apply a trick similar to o0000[24].vtc, since # Varnish accepts (and ignores) PROXY LOCAL. server s2 { rxreq expect req.http.Host == "v2" expect req.http.X-Forwarded-For == "0.0.0.0" txresp } -start varnish v2 -arg "-a ${tmpdir}/v2.sock,PROXY,mode=0777" -vcl { backend s2 { .host = "${s2_addr}"; .port = "${s2_port}"; } } -start varnish v3 -vcl { backend bp { .path = "${v2_sock}"; .host_header = "v2"; .proxy_header = 2; .probe = { .window = 1; .threshold = 1; .interval = 0.5s; } } } -start server s1 -wait delay 1 varnish v3 -cliexpect "vcl1.bp[ ]+probe[ ]+1/1[ ]+healthy" backend.list # Verify in the v2 log that PROXY LOCAL was sent. 
logexpect l1 -v v2 -d 1 -g session -q "Proxy" { expect 0 * Begin sess expect * = Proxy {^\d+ local local local local$} expect * = End } -run varnish-7.5.0/bin/varnishtest/tests/r02705.vtc000066400000000000000000000015431457605730600210560ustar00rootroot00000000000000varnishtest "No vcl_hit when grace has run out, with working IMS" server s1 { rxreq txresp -hdr {Etag: "Foo"} -body "1" rxreq txresp -status 304 rxreq expect req.url == "/2" txresp -body "3" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 1ms; set beresp.grace = 0s; set beresp.keep = 1m; if (bereq.http.Hit) { set beresp.http.Hit = bereq.http.Hit; } } sub vcl_hit { set req.http.Hit = "HIT"; } } -start client c1 { txreq rxresp expect resp.http.Hit == expect resp.http.Etag == {"Foo"} expect resp.status == 200 expect resp.body == "1" delay .2 txreq rxresp expect resp.http.Hit == expect resp.http.Etag == {"Foo"} expect resp.status == 200 expect resp.body == "1" } -run client c2 { txreq -url "/2" rxresp expect resp.http.Hit == expect resp.body == "3" } -run varnish-7.5.0/bin/varnishtest/tests/r02722.vtc000066400000000000000000000015231457605730600210530ustar00rootroot00000000000000varnishtest "TCP vs UDS sockopt inheritance" feature cmd {test $(uname) != SunOS} server s0 { rxreqhdrs non_fatal rxreqbody txresp } -dispatch varnish v1 -arg {-a TCP=:0 -a "UDS=${tmpdir}/v1.sock"} varnish v1 -cliok "param.set debug +flush_head" varnish v1 -cliok "param.set timeout_idle 1" varnish v1 -vcl+backend { } -start client c1 { txreq -req POST -hdr "Content-Length: 100" send some rxresp expect resp.status == 503 } # This run performs the inheritance test on a TCP connection client c1 -run # This run relies on the inheritance test results client c1 -run varnish v1 -vsl_catchup # This run checks that TCP results aren't applied to a UDS client c1 -connect "${tmpdir}/v1.sock" client c1 -run # And finally this run checks UDS inheritance test results client c1 -run varnish-7.5.0/bin/varnishtest/tests/r02763.vtc000066400000000000000000000023261457605730600210620ustar00rootroot00000000000000varnishtest "Cacheable IMS replaced by HFM object" server s1 { rxreq expect req.url == "/etag" txresp -hdr "ETag: foo" -bodylen 7 rxreq expect req.url == "/etag" expect req.http.If-None-Match == "foo" txresp -status 304 -hdr "ETag: foo" rxreq expect req.url == "/etag" txresp -hdr "ETag: foo" -bodylen 7 } -start varnish v1 -vcl+backend { sub vcl_miss { set req.http.X-Cache = "MISS"; } sub vcl_pass { set req.http.X-Cache = "PASS"; } sub vcl_backend_response { if (bereq.http.HFM) { set beresp.uncacheable = true; } else { set beresp.ttl = 0.001s; set beresp.grace = 0s; set beresp.keep = 1m; } return (deliver); } sub vcl_deliver { set resp.http.X-Cache = req.http.X-Cache; } } -start client c1 { txreq -url "/etag" rxresp expect resp.status == 200 expect resp.bodylen == 7 expect resp.http.ETag == "foo" expect resp.http.X-Cache == "MISS" delay 0.1 txreq -url "/etag" -hdr "HFM: true" rxresp expect resp.status == 200 expect resp.bodylen == 7 expect resp.http.ETag == "foo" expect resp.http.X-Cache == "MISS" txreq -url "/etag" rxresp expect resp.status == 200 expect resp.bodylen == 7 expect resp.http.ETag == "foo" expect resp.http.X-Cache == "MISS" } -run varnish-7.5.0/bin/varnishtest/tests/r02775.vtc000066400000000000000000000005231457605730600210620ustar00rootroot00000000000000varnishtest "Regression test for #2775: allow PRIORITY on closed stream" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok "param.set 
feature +http2" varnish v1 -cliok "param.set debug +syncvsl" client c1 { stream 1 { txreq rxresp txprio } -run stream 3 { txreq rxresp } -run } -run varnish-7.5.0/bin/varnishtest/tests/r02831.vtc000066400000000000000000000020351457605730600210530ustar00rootroot00000000000000varnishtest "#2831: Out of storage in cache_req_body" server s1 { rxreq expect req.url == "/obj1" txresp -noserver -bodylen 1048400 } -start varnish v1 \ -arg "-p nuke_limit=0" \ -arg "-sTransient=default,1m" \ -syntax 4.0 \ -vcl+backend { import std; sub vcl_recv { if (req.method == "POST") { std.cache_req_body(1KB); } } sub vcl_backend_response { set beresp.do_stream = false; set beresp.storage = storage.Transient; # Unset Date header to not change the object sizes unset beresp.http.Date; } } -start varnish v1 -cliok "param.set debug +syncvsl" delay .1 client c1 { # Fill transient txreq -url "/obj1" rxresp expect resp.status == 200 } -run delay .1 varnish v1 -expect SM?.Transient.g_bytes > 1048400 varnish v1 -expect SM?.Transient.g_space < 100 client c1 { # No space for caching this req.body txreq -req "POST" -body "foobar" delay 1 } -run varnish v1 -expect SM?.Transient.c_fail == 1 client c1 { # Check that Varnish is still alive txreq -url "/obj1" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r02839.vtc000066400000000000000000000007111457605730600210620ustar00rootroot00000000000000varnishtest "Test uninitialized vmod objects" server s1 { rxreq txresp } -start varnish v1 -vcl+backend "" -start varnish v1 -errvcl "Object nullobj not initialized" { import debug; backend default { .host = "${localhost}"; } sub vcl_init { if (false) { new nullobj = debug.obj(); } } } varnish v1 -vcl { import debug; backend default { .host = "${localhost}"; } sub vcl_init { if (false) { new null = debug.obj_opt(); } } } varnish-7.5.0/bin/varnishtest/tests/r02849.vtc000066400000000000000000000044001457605730600210620ustar00rootroot00000000000000varnishtest "ESI included req's start in the same VCL the top started." 
server s1 { rxreq expect req.url == /l3a txresp -body "_Correct_" rxreq expect req.url == /l3b txresp -body "_Wrong1_" rxreq expect req.url == /l3c txresp -body "_Wrong2_" rxreq expect req.url == /l1 txresp -body {} rxreq expect req.url == /l2 txresp -body {} } -start # give enough stack to 32bit systems varnish v1 -cliok "param.set thread_pool_stack 80k" varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/l3") { set req.url = "/l3b"; } } sub vcl_backend_response { set beresp.do_esi = True; } sub vcl_deliver { set resp.http.vcl = "lab1"; } } -start varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "vcl.label lab1 vcl1" varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/l3") { set req.url = "/l3a"; } } sub vcl_backend_response { set beresp.do_esi = True; } sub vcl_deliver { set resp.http.vcl = "lab2"; } } varnish v1 -cliok "vcl.label lab2 vcl2" varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/l1") { return (vcl(lab1)); } if (req.url == "/l3") { return (vcl(lab2)); } if (req.url == "/l3") { set req.url = "/l3c"; } } sub vcl_backend_response { set beresp.do_esi = True; } sub vcl_deliver { set resp.http.vcl = "vcl3"; } } # First cache the two possible inclusions client c1 { txreq -url /l3a rxresp expect resp.http.vcl == vcl3 txreq -url /l3b rxresp expect resp.http.vcl == vcl3 txreq -url /l3c rxresp expect resp.http.vcl == vcl3 } -run varnish v1 -vsl_catchup logexpect l1 -v v1 -g raw { expect * 1008 VCL_use "vcl3" expect * 1008 ReqURL "/l1" expect * 1008 VCL_use "vcl1 via lab1" expect * 1009 VCL_use "vcl1" expect * 1009 BeReqURL "/l1" # yes, twice! expect * 1010 ReqURL "/l2" expect * 1010 ReqURL "/l2" expect * 1011 VCL_use "vcl3" expect * 1011 BeReqURL "/l2" # yes, twice! expect * 1012 ReqURL "/l3" expect * 1012 ReqURL "/l3" expect * 1012 VCL_use "vcl2 via lab2" expect * 1012 ReqURL "/l3a" } -start # The test which one is picked client c1 { txreq -url /l1 rxresp expect resp.http.vcl == lab1 expect resp.body == {_Correct_} } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r02872.vtc000066400000000000000000000016161457605730600210640ustar00rootroot00000000000000varnishtest "VSL quoted fields" server s1 { rxreq txresp } -start varnish v1 -vcl { import std; backend be { .host = "${s1_sock}"; .probe = { .interval = 1m; } } sub vcl_recv { # a series of 3-fields log records std.log({" custom log "ok" "}); std.log({" "valid" "fields" ok "}); std.log({" "missing""blank" ko "}); std.log({" missing dquote "ko "}); # " return (synth(200)); } } -start client c1 { txreq rxresp expect resp.status == 200 } -run # records with malformed fields don't show up shell -expect "2" { varnishlog -d -n ${v1_name} -g raw -q 'VCL_Log[3]' | wc -l } server s1 -wait shell -expect "Went healthy" { varnishlog -d -n ${v1_name} -g raw \ -q 'Backend_health[10] eq "HTTP/1.1 200 OK"' } # s1 starts sick before the first probe request is made shell -expect "Went sick" { varnishlog -d -n ${v1_name} -g raw -q 'Backend_health[10] eq ""' } varnish-7.5.0/bin/varnishtest/tests/r02880.vtc000066400000000000000000000004251457605730600210600ustar00rootroot00000000000000varnishtest "Long VMOD object names" varnish v1 -vcl { import debug; backend b None; sub vcl_init { new l234567890123456789012345678901234567890123456789012345678 = debug.obj(); new l2345678901234567890123456789012345678901234567890123456789 = debug.obj(); } } varnish-7.5.0/bin/varnishtest/tests/r02887.vtc000066400000000000000000000007521457605730600210720ustar00rootroot00000000000000varnishtest "Test that director with sick 
backends only is shown as sick" varnish v1 -vcl { import directors; probe p { .url = "/foo"; } backend b1 { .host = "${bad_backend}"; .probe = p;} backend b2 { .host = "${bad_backend}"; .probe = p;} backend b3 { .host = "${bad_backend}"; .probe = p;} sub vcl_init { new foo = directors.random(); foo.add_backend(b1, 1); foo.add_backend(b2, 1); foo.add_backend(b3, 1); } } -start varnish v1 -cliexpect "foo.*sick" "backend.list" varnish-7.5.0/bin/varnishtest/tests/r02923.vtc000066400000000000000000000027271457605730600210650ustar00rootroot00000000000000varnishtest "race cond in max_concurrent_streams when request after receiving RST_STREAM" barrier b1 sock 2 barrier b2 sock 2 barrier b3 sock 2 barrier b4 sock 2 barrier b5 sock 3 server s1 { rxreq expect req.url == /nosync txresp rxreq expect req.url == /sync txresp } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set h2_max_concurrent_streams 3" varnish v1 -vcl+backend { import vtc; sub vcl_recv { } sub vcl_backend_fetch { if(bereq.url ~ "/sync"){ vtc.barrier_sync("${b1_sock}"); vtc.barrier_sync("${b5_sock}"); } } } -start client c1 { txpri stream 0 rxsettings -run stream 1 { txreq -url /sync rxresp expect resp.status == 200 } -start stream 3 { barrier b1 sync delay .5 # Goes on waiting list txreq -url /sync delay .5 txrst -err 0x8 delay .5 barrier b2 sync } -start stream 5 { barrier b2 sync txreq -url /nosync delay .5 rxresp delay .5 expect resp.status == 200 barrier b3 sync } -start stream 7 { barrier b3 sync # Goes on waiting list txreq -url /sync delay .5 txrst -err 0x8 delay .5 barrier b4 sync } -start stream 9 { barrier b4 sync # Goes on waiting list txreq -url /sync delay .5 barrier b5 sync delay .5 rxresp delay .5 expect resp.status == 200 } -start stream 11 { barrier b5 sync txreq -url /sync delay .5 rxresp delay .5 expect resp.status == 200 } -start } -run varnish-7.5.0/bin/varnishtest/tests/r02934.vtc000066400000000000000000000010471457605730600210610ustar00rootroot00000000000000varnishtest "Bug in CONTINUATION flags when no sendbody" server s1 { rxreq expect req.proto == HTTP/1.1 txresp -hdr "Content-Type: text/plain" -hdrlen Foo 100 } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set debug +h2_nocheck" varnish v1 -vcl+backend { } -start client c1 { stream 0 { txsettings -framesize 64 rxsettings } -run stream 1 { txreq -req HEAD rxresp expect resp.status == 200 expect resp.http.content-Type == "text/plain" } -run } -run varnish-7.5.0/bin/varnishtest/tests/r02937.vtc000066400000000000000000000007461457605730600210710ustar00rootroot00000000000000varnishtest "#2937: Panic on OU pool failure" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "debug.reqpool.fail F" client c1 { send "GET / HTTP/1.1\r\n" send "Host: foo\r\n" send "Upgrade: h2c\r\n" send "HTTP2-Settings: AAMAAABkAAQAAP__\r\n" send "\r\n" rxresp expect resp.status == 101 expect resp.http.upgrade == h2c expect resp.http.connection == Upgrade txpri expect_close } -run varnish-7.5.0/bin/varnishtest/tests/r02946.vtc000066400000000000000000000013731457605730600210660ustar00rootroot00000000000000varnishtest "#2946 - objcore leak for backend_synth" varnish v1 -vcl { backend bad None; sub vcl_backend_error { if (bereq.http.abandon) { return (abandon); } if (bereq.http.ttl0) { set beresp.ttl = 0s; return (deliver); } set beresp.status = 200; set beresp.ttl 
= 0.001s; set beresp.grace = 1h; return (deliver); } } -start client c1 -repeat 20 -keepalive { txreq rxresp expect resp.status == 200 delay 0.001 } -run delay 2 client c1 { txreq -hdr "abandon: true" rxresp expect resp.status == 200 expect resp.http.age > 1 } -run client c1 { txreq -hdr "ttl0: true" rxresp expect resp.status == 200 expect resp.http.age > 1 } -run delay 1 varnish v1 -expect s_fetch > s_bgfetch varnish v1 -expect n_objectcore == 1 varnish-7.5.0/bin/varnishtest/tests/r02964.vtc000066400000000000000000000017761457605730600210750ustar00rootroot00000000000000varnishtest "Cancel private busy obj from vcl_deliver" server s1 { non_fatal rxreq expect req.url == "/hfm" txresp -hdr "HFM: True" -bodylen 65530 accept rxreq expect req.url == "/hfp" txresp -hdr "HFP: True" -bodylen 65550 } -start varnish v1 -arg "-s Transient=default" -vcl+backend { sub vcl_recv { if (req.restarts > 0) { return (synth(200)); } } sub vcl_backend_fetch { set bereq.http.Connection = "close"; } sub vcl_backend_response { if (bereq.url == "/hfm") { set beresp.uncacheable = true; } else if (bereq.url == "/hfp") { return (pass(1m)); } } sub vcl_deliver { if (req.restarts == 0) { return (restart); } } } -start logexpect l1 -v v1 -g raw { expect * * Storage "Transient" expect * * Storage "Transient" } -start client c1 { txreq -url "/hfm" rxresp expect resp.status == 200 txreq -url "/hfp" rxresp expect resp.status == 200 } -run logexpect l1 -wait varnish v1 -expect SM?.Transient.c_bytes > 0 varnish v1 -expect SM?.Transient.g_bytes < 2000 varnish-7.5.0/bin/varnishtest/tests/r02976.vtc000066400000000000000000000010601457605730600210620ustar00rootroot00000000000000varnishtest "Detect probe interval inversion" # For lack of a better mechanism this test passes by not timing out. # This is not enough to guarantee that there wasn't a bug in probe # scheduling but it's enough to avoid running into #2976 again. 
server s1 -repeat 5 { # probe requests rxreq txresp } -start varnish v1 -cliok {param.set vcc_feature -err_unref} varnish v1 -vcl { backend b1 { .host = "${s1_sock}"; .probe = { .interval = 1s; } } backend b2 { .host = "${bad_backend}"; .probe = { .interval = 24h; } } } -start server s1 -wait varnish-7.5.0/bin/varnishtest/tests/r02990.vtc000066400000000000000000000005631457605730600210650ustar00rootroot00000000000000varnishtest "Initial varnishstat verbosity" varnish v1 -vcl {backend be none;} -start process p1 -dump {varnishstat -n ${v1_name}} -start process p2 -dump {varnishstat -n ${v1_name} -f MGT.child_start} -start process p1 -expect-text 0 0 INFO process p1 -screen_dump process p2 -expect-text 0 0 MGT.child_start process p2 -expect-text 0 0 DIAG process p2 -screen_dump varnish-7.5.0/bin/varnishtest/tests/r03003.vtc000066400000000000000000000006321457605730600210440ustar00rootroot00000000000000varnishtest "Test PRIV_TOP vs PRIV_TASK" varnish v1 -vcl { import debug; backend bad { .host = "${bad_backend}"; } sub vcl_recv { return (synth(200)); } sub vcl_synth { set resp.http.priv_task = debug.test_priv_task("task"); set resp.http.priv_top = debug.test_priv_top("top"); } } -start client c1 { txreq rxresp expect resp.http.priv_task == task expect resp.http.priv_top == top } -run varnish-7.5.0/bin/varnishtest/tests/r03006.vtc000066400000000000000000000017311457605730600210500ustar00rootroot00000000000000varnishtest "test ban lurker destination being completed by dup ban" server s1 -repeat 4 -keepalive { rxreq txresp } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok "param.set ban_lurker_age 99" # this ban becomes the pruned tail varnish v1 -cliok "ban obj.http.t == 1" client c1 { txreq -url /1 rxresp } -run # this ban becomes the new tail varnish v1 -cliok "ban obj.http.t == 2" client c1 { txreq -url /2 rxresp } -run # req ban to define where the tail goes (at t == 2) varnish v1 -cliok "ban req.http.barrier == here" client c1 { txreq -url /3 rxresp } -run # dup ban to trigger #3006 varnish v1 -cliok "ban obj.http.t == 2" client c1 { txreq -url /4 rxresp } -run varnish v1 -cliok "ban.list" varnish v1 -cliok "param.set ban_lurker_age 0.1" varnish v1 -cliok "ban.list" delay 2 varnish v1 -expect bans == 3 varnish v1 -expect bans_completed == 2 varnish v1 -cliok "ban.list" varnish v1 -expect bans == 3 varnish v1 -expect bans_completed == 2 varnish-7.5.0/bin/varnishtest/tests/r03019.vtc000066400000000000000000000007241457605730600210550ustar00rootroot00000000000000varnishtest "return(vcl) then reembark" barrier b1 cond 2 server s1 { rxreq barrier b1 sync txresp } -start varnish v1 -vcl+backend "" varnish v1 -cliok "vcl.label lbl vcl1" varnish v1 -vcl { backend be { .host = "${bad_backend}"; } sub vcl_recv { return (vcl(lbl)); } } -start client c1 { txreq rxresp expect resp.status == 200 } -start client c2 { barrier b1 sync txreq rxresp expect resp.status == 200 } -start client c1 -wait client c2 -wait varnish-7.5.0/bin/varnishtest/tests/r03038.vtc000066400000000000000000000010071457605730600210510ustar00rootroot00000000000000varnishtest "vcl.list and cli_limit" server s1 { } -start varnish v1 -vcl+backend { } -start varnish v1 -vcl+backend { } varnish v1 -vcl+backend { } varnish v1 -vcl+backend { } varnish v1 -vcl+backend { } varnish v1 -vcl+backend { } varnish v1 -vcl+backend { } varnish v1 -vcl+backend { } varnish v1 -vcl+backend { } varnish v1 -vcl+backend { } varnish v1 -expect n_vcl_avail == 10 varnish v1 -cliok "param.set cli_limit 128b" varnish v1 -clierr 201 "vcl.list" 
varnish v1 -stop varnish v1 -clierr 201 "vcl.list" varnish-7.5.0/bin/varnishtest/tests/r03079.vtc000066400000000000000000000036261457605730600210670ustar00rootroot00000000000000varnishtest "set VCL_??? [+]= VCL_???;" # set STRING|HEADER += STRINGS; server s1 { rxreq expect req.url == "/hello/world" expect req.http.host == helloworld txresp -hdr "x-powered-by: varnishtest" } -start varnish v1 -vcl+backend { sub vcl_backend_fetch { set bereq.url += "/world"; set bereq.http.host += "world"; } sub vcl_backend_response { set beresp.http.x-powered-by += bereq.http.x-varnish; } } -start client c1 { txreq -url "/hello" -hdr "host: hello" rxresp expect resp.status == 200 expect resp.http.x-powered-by == varnishtest1002 } -run # set BODY [+]= STRINGS|BLOB; varnish v1 -vcl { import blob; backend be none; sub vcl_recv { return (synth(200)); } sub vcl_synth { if (req.url ~ "synth") { synthetic("hello"); if (req.url ~ "add") { synthetic("world"); } } if (req.url ~ "string") { set resp.body = "hello"; if (req.url ~ "reset") { set resp.body = "world"; } if (req.url ~ "add") { set resp.body += "world"; } } if (req.url ~ "blob/literal") { set resp.body = :aGVsbG93b3JsZA==:; } elif (req.url ~ "blob") { set resp.body = blob.decode(HEX, encoded="1100"); if (req.url ~ "reset") { set resp.body = blob.decode(HEX, encoded="221100"); } if (req.url ~ "add") { set resp.body += blob.decode(HEX, encoded="221100"); } } return (deliver); } } client c2 { txreq -url "/synth" rxresp expect resp.body == hello txreq -url "/synth/add" rxresp expect resp.body == helloworld txreq -url "/string" rxresp expect resp.body == hello txreq -url "/string/reset" rxresp expect resp.body == world txreq -url "/string/add" rxresp expect resp.body == helloworld txreq -url "/blob" rxresp expect resp.bodylen == 2 txreq -url "/blob/reset" rxresp expect resp.bodylen == 3 txreq -url "/blob/add" rxresp expect resp.bodylen == 5 txreq -url "/blob/literal" rxresp expect resp.body == helloworld } -run varnish-7.5.0/bin/varnishtest/tests/r03089.vtc000066400000000000000000000026301457605730600210620ustar00rootroot00000000000000varnishtest "r03089 - Assert error corrupt OA_GZIPBITS" barrier b1 cond 2 server s1 { rxreq expect req.url == /included expect req.http.accept-encoding == "gzip" txresp -nolen -hdr "Content-Length: 49" -hdr "Content-Encoding: gzip" -hdr {Etag: "asdf"} barrier b1 sync delay 1 sendhex "1f 8b 08 08 ef 00 22 59 02 03 5f 00 0b 2e cd 53" sendhex "f0 4d ac 54 30 32 04 22 2b 03 13 2b 13 73 85 d0" sendhex "10 67 05 23 03 43 73 2e 00 cf 9b db c0 1d 00 00" sendhex "00" } -start server s2 { rxreq expect req.url == /included expect req.http.If-None-Match == {"asdf"} barrier b1 sync txresp -status 304 -nolen -hdr "Etag: asdf" } -start server s3 { rxreq expect req.url == / txresp -gzipbody {} } -start varnish v1 -arg "-p debug=+syncvsl" -vcl+backend { sub vcl_deliver { } sub vcl_backend_fetch { if (bereq.url == "/") { set bereq.backend = s3; } else if (bereq.http.backend == "s1") { set bereq.backend = s1; } else { set bereq.backend = s2; } } sub vcl_backend_response { if (bereq.url == "/") { set beresp.do_esi = true; } else { set beresp.ttl = 0.1s; set beresp.grace = 0s; set beresp.keep = 1m; } } } -start client c1 { txreq -url / -hdr "backend: s1" -hdr "Accept-encoding: gzip" rxresp } -start delay 1 client c2 { txreq -url / -hdr "backend: s2" -hdr "Accept-encoding: gzip" rxresp } -start client c1 -wait client c2 -wait 
varnish-7.5.0/bin/varnishtest/tests/r03093.vtc000066400000000000000000000016501457605730600210560ustar00rootroot00000000000000varnishtest "r03093 - fail retry on missing req body" barrier b1 sock 2 server s1 { rxreq expect req.method == POST expect req.body == foo txresp -nolen -hdr "Content-Length: 3" barrier b1 sync } -start # In this test s2 should not be called. The attempt to retry should fail because # the request was already released from the fetch thread in the first attempt. server s2 { rxreq expect req.method == POST expect req.body == foo txresp -body bar } -start varnish v1 -arg "-p debug=+syncvsl" -vcl+backend { import vtc; sub vcl_backend_fetch { set bereq.http.retries = bereq.retries; if (bereq.retries == 1) { set bereq.backend = s2; } } sub vcl_backend_response { if (bereq.http.retries == "0") { vtc.barrier_sync("${b1_sock}"); } set beresp.do_stream = false; } sub vcl_backend_error { return (retry); } } -start client c1 { txreq -req POST -body foo rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r03098.vtc000066400000000000000000000015401457605730600210610ustar00rootroot00000000000000varnishtest "Explain how param.(re)?set may fail" # NB: we don't need or want to start the cache process varnish v1 -cliok "param.set thread_pool_max 10000" varnish v1 -cliok "param.set thread_pool_min 8000" # NB: "varnish v1 -cliexpect" wouldn't work with a non-200 status shell -err -expect "Must be at least 8000 (thread_pool_min)" { exec varnishadm -n ${v1_name} param.reset thread_pool_max } varnish v1 -cliok "param.set thread_pool_min 8" varnish v1 -cliok "param.set thread_pool_max 10" shell -err -expect "Must be no more than 10 (thread_pool_max)" { exec varnishadm -n ${v1_name} param.reset thread_pool_min } varnish v1 -cliok "param.reset thread_pool_max" varnish v1 -cliok "param.reset thread_pool_min" shell -err -expect "Must be no more than 95 (95% of thread_pool_min)" { exec varnishadm -n ${v1_name} param.set thread_pool_reserve 96 } varnish-7.5.0/bin/varnishtest/tests/r03109.vtc000066400000000000000000000014461457605730600210570ustar00rootroot00000000000000varnishtest "Test garbage after gzip end reaching gunzip vdp" server s1 { rxreq txresp -hdr "content-encoding: gzip" -nolen # (date | gzip -9f ; echo bad) | od -t x1| # sed -e 's:^[0-9a-f]* :sendhex ":' -e 's:$:":' -e '/^[0-9a-f]*"/ d' sendhex "1f 8b 08 00 f5 8a b9 5d 02 03 0b 4f 4d 51 30 36" sendhex "50 f0 4f 2e 51 30 34 b1 32 30 b7 32 30 54 70 76" sendhex "0d 51 30 32 30 b4 e4 02 00 fa 76 79 ba 1d 00 00" sendhex "00 62 61 64 0a" } -start varnish v1 -vcl+backend { sub vcl_backend_response { # no gunzip check set beresp.filters = ""; } sub vcl_deliver { set resp.filters = "gunzip"; } } -start logexpect l1 -v v1 -q "vxid == 1001" { expect * 1001 Gzip {^G.un.zip error: 1 .junk after VGZ_END.$} } -start client c1 { txreq rxresphdrs expect_close } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r03131.vtc000066400000000000000000000017231457605730600210500ustar00rootroot00000000000000varnishtest "Test WS_ReserveSize() overflow behavior" varnish v1 -vcl { import vtc; import std; backend dummy None; sub vcl_recv { return (synth(200)); } sub vcl_synth { vtc.workspace_alloc(session, -16); std.log("res(8) = " + vtc.workspace_reserve(session, 8)); std.log("res(15) = " + vtc.workspace_reserve(session, 15)); std.log("res(16) = " + vtc.workspace_reserve(session, 16)); std.log("res(17) = " + vtc.workspace_reserve(session, 17)); std.log("res(8) = " + vtc.workspace_reserve(session, 8)); } } -start logexpect l1 -v 
v1 -g raw { expect * * VCL_Log {^\Qres(8) = 8\E$} expect 0 = VCL_Log {^\Qres(15) = 15\E$} expect 0 = VCL_Log {^\Qres(16) = 16\E$} expect 0 = VCL_Log {^\Qres(17) = 0\E$} # workspace is now overflown, but smaller reservation still succeeds expect 0 = VCL_Log {^\Qres(8) = 8\E$} expect * = Error {^workspace_session overflow$} } -start client c1 { txreq rxresp expect resp.status == 500 } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r03159.vtc000066400000000000000000000016361457605730600210650ustar00rootroot00000000000000varnishtest "double sub unref warning / warnings output for -f" # Also tests #3160 filewrite ${tmpdir}/unref.vcl { vcl 4.1; backend be none; sub foo { } } process p1 -log { varnishd -F -n "${tmpdir}/t" -a "${tmpdir}/sock" \ -p vcc_feature=-err_unref -f "${tmpdir}/unref.vcl" \ -l 2m 2>&1 } -start -expect-exit 0x40 shell { # wait for startup vcl.load to complete varnishadm -n ${tmpdir}/t ping || varnishadm -n ${tmpdir}/t ping } process p1 -screen_dump process p1 -expect-text 0 1 "Unused sub foo, defined:" process p1 -expect-text 0 1 "(That was just a warning)" process p2 -log { set -e varnishadm -n ${tmpdir}/t "vcl.list" varnishadm -n ${tmpdir}/t -t 20 "vcl.load unref ${tmpdir}/unref.vcl" varnishadm -n ${tmpdir}/t "vcl.list" } -run process p2 -screen_dump process p2 -expect-text 0 1 "Unused sub foo, defined:" process p2 -expect-text 0 1 "(That was just a warning)" process p1 -kill TERM varnish-7.5.0/bin/varnishtest/tests/r03169.vtc000066400000000000000000000027231457605730600210640ustar00rootroot00000000000000varnishtest "Ensure we don't override Content-Encoding on a cond fetch" server s1 { rxreq txresp -hdr {ETag: "foo"} -body {****} rxreq expect req.http.If-None-Match == {"foo"} txresp -status 304 -hdr {ETag: W/"foo"} -hdr "Content-Encoding: gzip" rxreq expect req.url == "/1" txresp -hdr {Etag: "bar"} -gzipbody {asdf} rxreq expect req.url == "/1" expect req.http.If-None-Match == {"bar"} txresp -status 304 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 1s; set beresp.grace = 0s; set beresp.keep = 1d; } } -start client c1 { txreq -hdr "Accept-Encoding: gzip" rxresp expect resp.status == 200 expect resp.http.ETag == {"foo"} expect resp.http.Content-Length == "4" expect resp.http.Content-Encoding == delay 1.5 txreq -hdr "Accept-Encoding: gzip" rxresp expect resp.status == 200 expect resp.http.ETag == {W/"foo"} expect resp.http.Content-Length == "4" expect resp.http.Content-Encoding == # Also check that we're still OK in the general case txreq -url "/1" -hdr "Accept-Encoding: gzip" rxresp expect resp.status == 200 expect resp.http.ETag == {"bar"} expect resp.http.Content-Encoding == "gzip" delay 1.5 txreq -url "/1" -hdr "Accept-Encoding: gzip" rxresp expect resp.status == 200 expect resp.http.ETag == {"bar"} expect resp.http.Content-Encoding == "gzip" } -run varnish-7.5.0/bin/varnishtest/tests/r03189.vtc000066400000000000000000000006761457605730600210730ustar00rootroot00000000000000varnishtest "h1 send_timeout and streaming of dripping chunks" barrier b cond 2 server s1 { rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 1 delay 1 non_fatal barrier b sync chunkedlen 1 delay 1 chunkedlen 0 } -start varnish v1 \ -arg "-p idle_send_timeout=.1" \ -arg "-p send_timeout=.8" \ -vcl+backend { } -start client c1 { txreq rxresphdrs rxchunk barrier b sync expect_close } -run server s1 -wait varnish-7.5.0/bin/varnishtest/tests/r03221.vtc000066400000000000000000000006351457605730600210510ustar00rootroot00000000000000varnishtest 
"Handling of Age when return(pass)" server s1 { rxreq txresp -hdr "cache-control: max-age=2" -hdr "age: 1" rxreq txresp -hdr "cache-control: max-age=2" -hdr "age: 1" } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/pass") { return(pass); } } } -start client c1 { txreq rxresp expect resp.http.age == 1 txreq -url /pass rxresp expect resp.http.age == 1 } -run varnish-7.5.0/bin/varnishtest/tests/r03241.vtc000066400000000000000000000016271457605730600210550ustar00rootroot00000000000000varnishtest "ESI include out of workspace" server s1 { rxreq expect req.http.esi0 == "foo" txresp -body {before after} rxreq expect req.url == "/body1" expect req.http.esi0 != "foo" txresp -body "include" } -start varnish v1 -cliok "param.set feature +esi_disable_xml_check" varnish v1 -vcl+backend { import vtc; sub vcl_recv { if (req.esi_level > 0) { set req.url = req.url + req.esi_level; } else { set req.http.esi0 = "foo"; } } sub vcl_backend_response { if (bereq.url == "/") { set beresp.do_esi = true; } } sub vcl_deliver { if (req.esi_level > 0) { vtc.workspace_alloc(client, -16); } } } -start logexpect l1 -v v1 -g raw { expect * * Error "^Failure to push ESI processors" } -start client c1 { txreq -hdr "Host: foo" rxresp expect resp.status == 200 expect resp.body == "before after" } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r03253.vtc000066400000000000000000000007661457605730600210630ustar00rootroot00000000000000varnishtest "ESI: sweep through tight backend workspace conditions" server s1 -repeat 100 { rxreq txresp -gzipbody "" } -start varnish v1 -vcl+backend { import vtc; sub vcl_recv { return (pass); } sub vcl_backend_response { vtc.workspace_alloc(backend, -4 * (bereq.xid - 1000) / 2); set beresp.do_esi = true; } } -start client c1 -repeat 100 { txreq -url "/" # some responses will fail (503), some won't. 
All we care # about here is the fact that we don't panic rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r03266.vtc000066400000000000000000000005571457605730600210650ustar00rootroot00000000000000varnishtest "Don't recycle a closed backend connection" # broken origin: sends eof-encoded HTTP/1.1 response server s1 { rxreq send "HTTP/1.1 200 OK\r\n\r\n" send "foobar" } -start varnish v1 -vcl+backend {} -start client c1 { txreq rxresp } -run varnish v1 -expect fetch_failed == 0 varnish v1 -expect fetch_eof == 1 varnish v1 -expect backend_recycle == 0 varnish-7.5.0/bin/varnishtest/tests/r03301.vtc000066400000000000000000000021241457605730600210430ustar00rootroot00000000000000varnishtest "Issue 3301: Illegal error codes" varnish v1 -vcl { backend default none; sub vcl_recv { if (req.url == "/test1") { return (synth(1301)); } if (req.url == "/test2") { return (synth(1001)); } } sub vcl_backend_fetch { if (bereq.url == "/test3") { return (error(1302)); } if (bereq.url == "/test4") { return (error(1000)); } } } -start varnish v1 -cliok "param.set feature +http2" client c1 { stream 1 { txreq -url /test1 rxresp expect resp.status == 301 } -run stream 3 { txreq -url /test2 rxresp expect resp.status == 503 } -run stream 5 { txreq -url /test3 rxresp expect resp.status == 302 } -run stream 7 { txreq -url /test4 rxresp expect resp.status == 503 } -run } -run client c2 { txreq -url /test1 rxresp expect resp.status == 301 expect resp.reason == "Moved Permanently" txreq -url /test2 rxresp expect resp.status == 503 } -run client c2 { txreq -url /test3 rxresp expect resp.status == 302 expect resp.reason == "Found" } -run client c2 { txreq -url /test4 rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r03308.vtc000066400000000000000000000005351457605730600210560ustar00rootroot00000000000000varnishtest "Unformatable VCL_TIME" feature 64bit server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import std; sub vcl_deliver { set resp.http.ts = std.real2time( std.real("999999999999.999", 0) * std.real("999999999999.999", 0), now); } } -start client c1 { txreq rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r03319.vtc000066400000000000000000000004521457605730600210560ustar00rootroot00000000000000varnishtest "Vary handling out of workspace" varnish v1 -vcl { import vtc; backend be none; sub vcl_recv { vtc.workspace_alloc(client, vtc.workspace_free(client)); } sub vcl_backend_fetch { return (error(200)); } } -start client c1 { txreq rxresp expect resp.status == 500 } -run varnish-7.5.0/bin/varnishtest/tests/r03329.vtc000066400000000000000000000011151457605730600210540ustar00rootroot00000000000000varnishtest "Lost reason from pipe to synth" server s1 {} -start varnish v1 -vcl+backend { import vtc; sub vcl_recv { if (req.http.workspace) { vtc.workspace_alloc(client, -10); } return (pipe); } sub vcl_pipe { if (req.http.foo) { return (synth(505, req.http.foo + "/bar")); } else { set bereq.http.baz = "baz"; return (synth(505, bereq.http.baz)); } } } -start client c1 { txreq -hdr "foo: foo" rxresp expect resp.reason == "foo/bar" txreq rxresp expect resp.reason == "baz" txreq -hdr "workspace: true" rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/r03353.vtc000066400000000000000000000007471457605730600210630ustar00rootroot00000000000000varnishtest "Test rollback and retry" # ws_emu triggers #3550 feature !workspace_emulator server s1 { rxreq txresp -nolen -hdr "Content-Length: 3" expect_close accept rxreq txresp -body xxx } -start varnish 
v1 -vcl+backend { import std; sub vcl_backend_response { if (bereq.retries == 0) { std.rollback(bereq); } } sub vcl_backend_error { if (bereq.retries == 0) { return (retry); } } } -start client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r03354.vtc000066400000000000000000000024171457605730600210600ustar00rootroot00000000000000varnishtest "Bodybytes is #bytes in bottom VFP" server s1 { rxreq txresp -hdr "Foo: bar" -bodylen 1000 rxreq txresp -nolen -hdr "Transfer-Encoding: chunked" chunkedlen 100 chunkedlen 100 chunkedlen 100 chunkedlen 100 chunkedlen 100 chunkedlen 0 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_gzip = true; if (beresp.http.foo == "bar") { set beresp.do_esi = true; } } } -start varnish v1 -cliok "param.set vsl_mask +VfpAcct,+VdpAcct" varnish v1 -cliok "param.set feature +esi_disable_xml_check" client c1 { txreq -method POST -url /1 -bodylen 100 rxresp expect resp.bodylen == 1000 } -run varnish v1 -expect MAIN.s_req_bodybytes == 100 varnish v1 -expect VBE.vcl1.s1.bereq_bodybytes == 100 varnish v1 -expect VBE.vcl1.s1.beresp_bodybytes == 1000 varnish v1 -expect MAIN.s_resp_bodybytes == 1000 varnish v1 -vsc *bodyb* client c1 { txreq -method POST -url /2 -nolen -hdr "Transfer-encoding: chunked" chunkedlen 100 chunkedlen 100 chunkedlen 100 chunkedlen 0 rxresp expect resp.bodylen == 500 } -run varnish v1 -expect MAIN.s_req_bodybytes == 400 varnish v1 -expect VBE.vcl1.s1.bereq_bodybytes >= 400 varnish v1 -expect VBE.vcl1.s1.beresp_bodybytes == 1500 varnish v1 -expect MAIN.s_resp_bodybytes == 1500 varnish v1 -vsc *bodyb* varnish-7.5.0/bin/varnishtest/tests/r03360.vtc000066400000000000000000000010351457605730600210500ustar00rootroot00000000000000varnishtest "Test recusive vcl includes" shell {echo include '"_recurse.vcl";' > ${tmpdir}/_recurse.vcl} shell {echo include +glob '"${tmpdir}/_recurse[2].vcl";' > ${tmpdir}/_recurse1.vcl} shell {echo include '"_recurse1.vcl";' > ${tmpdir}/_recurse2.vcl} varnish v1 -arg "-p vcl_path=${tmpdir}" -errvcl "Recursive use of file" { backend b { .host = "${localhost}"; } include "_recurse.vcl" ; } varnish v2 -arg "-p vcl_path=${tmpdir}" -errvcl "Recursive use of file" { backend b { .host = "${localhost}"; } include "_recurse1.vcl" ; } varnish-7.5.0/bin/varnishtest/tests/r03385.vtc000066400000000000000000000003631457605730600210620ustar00rootroot00000000000000varnishtest "Use a priv in vcl_pipe" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import debug; sub vcl_recv { return (pipe); } sub vcl_pipe { debug.test_priv_task(); } } -start client c1 { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r03394.vtc000066400000000000000000000003671457605730600210660ustar00rootroot00000000000000varnishtest "varnishstat curses field exclusion" server s1 -start varnish v1 -vcl+backend "" -start process p1 -dump {varnishstat -n ${v1_name} -f '^VSB.*'} -start process p1 -expect-text 0 0 "MAIN.pools" process p1 -screen_dump -write q -wait varnish-7.5.0/bin/varnishtest/tests/r03402.vtc000066400000000000000000000005301457605730600210440ustar00rootroot00000000000000varnishtest "probe dripping reads timeout" server s0 { rxreq delay 1 send "HTTP/1.1 200 OK\r\n" delay 0.5 send "Server: s0\r\n" delay 1 send "\r\n" } -dispatch varnish v1 -vcl+backend { probe default { .window = 1; .threshold = 1; .timeout = 2s; .interval = 0.1s; } } -start delay 5 varnish v1 -cliexpect sick backend.list 
varnish-7.5.0/bin/varnishtest/tests/r03410.vtc000066400000000000000000000017131457605730600210470ustar00rootroot00000000000000varnishtest "lost header workspace overflow" server s1 { rxreq send "HTTP/1.1 200 OK\r\n" send "Extra: 1\r\n" send "Extra: 2\r\n" send "Extra: 3\r\n" send "Extra: 4\r\n" send "Extra: 5\r\n" send "Extra: 6\r\n" send "Extra: 7\r\n" send "Extra: 8\r\n" send "Extra: 9\r\n" send "Extra: 10\r\n" send "Extra: 11\r\n" send "Extra: 12\r\n" send "Extra: 13\r\n" send "Extra: 14\r\n" send "Extra: 15\r\n" send "Extra: 16\r\n" send "Extra: 17\r\n" send "Extra: 18\r\n" send "Extra: 19\r\n" send "Extra: 20\r\n" send "Extra: 21\r\n" send "Extra: 22\r\n" send "Extra: 23\r\n" send "Extra: 24\r\n" send "Extra: 25\r\n" send "Extra: 26\r\n" # The start line consumes 5 header slots send "\r\n" } -start varnish v1 -cliok "param.set http_max_hdr 32" varnish v1 -vcl+backend { sub vcl_backend_response { # This is one header too many set beresp.http.Lost = "header"; } } -start client c1 { txreq rxresp expect resp.status == 500 } -run varnish-7.5.0/bin/varnishtest/tests/r03416.vtc000066400000000000000000000007601457605730600210560ustar00rootroot00000000000000varnishtest "Filter hop-by-hop headers out of h2 responses" server s1 { rxreq txresp } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -vcl+backend { sub vcl_deliver { set resp.http.Keep-Alive = "timeout=5, max=1000"; set resp.http.Connection = "other"; set resp.http.Other = "foo"; } } -start client c1 { stream 1 { txreq rxresp expect resp.http.keep-alive == expect resp.http.connection == expect resp.http.other == } -run } -run varnish-7.5.0/bin/varnishtest/tests/r03417.vtc000066400000000000000000000003621457605730600210550ustar00rootroot00000000000000varnishtest "Filter Keep-Alive out from beresp to resp" server s1 { rxreq txresp -hdr "Keep-Alive: timeout=5, max=1000" } -start varnish v1 -vcl+backend "" -start client c1 { txreq rxresp expect resp.http.Keep-Alive == } -run varnish-7.5.0/bin/varnishtest/tests/r03433.vtc000066400000000000000000000013621457605730600210540ustar00rootroot00000000000000varnishtest "req.body and restarts" server s1 -repeat 2 { rxreq txresp -bodylen 1000 } -start server s2 -repeat 2 { rxreq txresp -bodylen 1000 } -start varnish v1 -vcl+backend { import std; import vtc; sub vcl_recv { std.cache_req_body(1MB); return (hash); } sub vcl_backend_fetch { if (bereq.url == "/1") { set bereq.backend = s1; } else { set bereq.backend = s2; } } sub vcl_backend_response { set beresp.ttl = 0.1s; } sub vcl_deliver { if (!req.restarts) { set req.url = "/2"; return (restart); } } } -start client c1 { txreq -req PUT -url /1 -bodylen 250000 rxresp expect resp.status == 200 delay 0.2 txreq -req PUT -url /1 -bodylen 250000 rxresp expect resp.status == 200 } -run delay 0.2 varnish-7.5.0/bin/varnishtest/tests/r03463.vtc000066400000000000000000000017171457605730600210630ustar00rootroot00000000000000varnishtest "VSL query lenient int comparisons" varnish v1 -vcl { import std; backend be none; sub vcl_recv { if (req.http.skip != "log") { std.log("float1: 123.456"); std.log("float2: 123."); std.log("float3: .456"); std.log("float4: 123"); std.log("float5: e1"); } return (synth(200)); } } -start logexpect l1 -v v1 -q "VCL_Log:float1 >= 123" { expect 0 1001 Begin rxreq } -start logexpect l2 -v v1 -q "VCL_Log:float2 <= 123" { expect 0 1001 Begin rxreq } -start logexpect l3 -v v1 -q "VCL_Log:float3 == 0" { expect 0 1001 Begin rxreq } -start logexpect l4 -v v1 -q "VCL_Log:float4 == 123" { expect 0 1001 Begin rxreq } -start 
logexpect l5 -v v1 -q "VCL_Log:float5 != 42 or ReqHeader:skip eq log" { fail add 1001 Begin rxreq expect * 1002 Begin rxreq fail clear } -start client c1 { txreq rxresp txreq -hdr "skip: log" rxresp } -run logexpect l1 -wait logexpect l2 -wait logexpect l3 -wait logexpect l4 -wait logexpect l5 -wait varnish-7.5.0/bin/varnishtest/tests/r03502.vtc000066400000000000000000000030451457605730600210510ustar00rootroot00000000000000varnishtest "#3502 Panic in VEP_Finish() for out-of-storage in vbf_beresp2obj()" # see also r01637 for failure case in VFP server s1 { # First consume (almost) all of the storage - the value # is brittle, see l1 fail rxreq expect req.url == /url1 txresp -bodylen 1048336 rxreq expect req.http.accept-encoding == gzip txresp -bodylen 1 } -start varnish v1 -arg "-sTransient=default,1M -p debug=+syncvsl -p nuke_limit=0" -vcl+backend { sub vcl_recv { if (req.url == "/") { return (pass); } } sub vcl_backend_response { set beresp.http.free = storage.Transient.free_space; if (bereq.url == "/") { set beresp.do_gzip = true; set beresp.do_esi = true; } } } -start logexpect l1 -v v1 -g vxid -q "vxid == 1004" { expect 25 1004 VCL_call {^BACKEND_RESPONSE} expect 0 = BerespHeader {^free:} expect 0 = VCL_return {^deliver} expect 0 = Timestamp {^Process} expect 0 = Filters {^ esi_gzip} expect 0 = BerespUnset {^Content-Length: 1} expect 0 = BerespHeader {^Content-Encoding: gzip} expect 0 = BerespHeader {^Vary: Accept-Encoding} # Ensure the FetchError is in vbf_beresp2obj() # not later in the VFP. Otherwise we have too much free_space fail add = Storage expect 0 = Error {^Failed to create object object from .+ Transient} expect 0 = FetchError {^Could not get storage} fail clear } -start client c1 { txreq -url /url1 rxresp expect resp.status == 200 txreq -hdr "Accept-Encoding: gzip" # no storage for synth either expect_close } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/r03525.vtc000066400000000000000000000007141457605730600210560ustar00rootroot00000000000000varnishtest "Clear beresp status and reason on a retry" server s1 { rxreq txresp -status 500 -reason "my reason" } -start varnish v1 -arg "-p first_byte_timeout=0.2" -vcl+backend { sub vcl_backend_response { return (error(beresp.status, beresp.reason)); } sub vcl_backend_error { if (bereq.retries == 0) { return (retry); } } } -start client c1 { txreq rxresp expect resp.status == 503 expect resp.reason == "Backend fetch failed" } -run varnish-7.5.0/bin/varnishtest/tests/r03546.vtc000066400000000000000000000005131457605730600210560ustar00rootroot00000000000000varnishtest "Synth resp.reason race" varnish v1 -vcl { backend default none; sub vcl_backend_error { set beresp.status = 500; set beresp.reason = "VCL"; } sub vcl_deliver { return (synth(resp.status, resp.reason)); } } -start client c1 { txreq rxresp expect resp.status == 500 expect resp.reason == "VCL" } -run varnish-7.5.0/bin/varnishtest/tests/r03556.vtc000066400000000000000000000007671457605730600210720ustar00rootroot00000000000000varnishtest "#3556" server s1 { rxreq txresp non_fatal rxreq } -start varnish v1 -cliok "param.set first_byte_timeout 10" varnish v1 -vcl+backend {} -start client c1 { txreq rxresp } -run logexpect l2 -v v1 -q "ReqMethod eq POST" { expect * * End } -start client c2 { txreq -req POST \ -hdr "Content-Length: 10" \ -hdr "Content-Type: text/plain" send incompl } -run logexpect l2 -wait shell -expect POST { exec varnishncsa -d -n ${v1_name} -q 'Timestamp:Process[2] < 10.0' } 
varnish-7.5.0/bin/varnishtest/tests/r03560.vtc000066400000000000000000000010201457605730600210440ustar00rootroot00000000000000varnishtest "A backend connection error gets returned as a valid short response" barrier b1 sock 2 server s1 { rxreq txresp -nolen -hdr "Content-Length: 10" barrier b1 sync send "12345" # Early connection close error } -start varnish v1 -vcl+backend { import vtc; sub vcl_deliver { # Make sure we are streaming and give the backend time to error out vtc.barrier_sync("${b1_sock}"); vtc.sleep(1s); } } -start client c1 { txreq rxresphdrs expect resp.http.Content-Length == "10" recv 5 expect_close } -run varnish-7.5.0/bin/varnishtest/tests/r03564.vtc000066400000000000000000000011201457605730600210510ustar00rootroot00000000000000varnishtest "sess.* symbols and vcl syntax" varnish v1 -vcl { backend be none; sub vcl_deliver { set resp.http.sess-xid = sess.xid; set resp.http.sess-timeout-idle = sess.timeout_idle; } } -start client c1 { txreq rxresp expect resp.http.sess-xid == 1000 expect resp.http.sess-timeout-idle == 5.000 } -run varnish v1 -syntax 4.0 -vcl { backend be none; sub vcl_deliver { set resp.http.sess-timeout-idle = sess.timeout_idle; } } varnish v1 -syntax 4.0 -errvcl "Symbol not found: 'sess.xid'" { backend be none; sub vcl_deliver { set resp.http.sess-xid = sess.xid; } } varnish-7.5.0/bin/varnishtest/tests/r03706.vtc000066400000000000000000000003761457605730600210630ustar00rootroot00000000000000varnishtest "Null regex subject in ban expression" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { } -start client c1 { txreq rxresp expect resp.status == 200 } -run varnish v1 -cliok "ban req.http.nonexistent ~ foo" client c1 -run varnish-7.5.0/bin/varnishtest/tests/r03734.vtc000066400000000000000000000012431457605730600210560ustar00rootroot00000000000000varnishtest "Issue 3734 - Discard dependency check and labels" varnish v1 -vcl { backend default none; sub vcl_recv { return (synth(200, "vcl1")); } } -start varnish v1 -vcl { backend default none; sub vcl_recv { return (synth(200, "vcl2")); } } varnish v1 -cliok { vcl.label lbl_vcl2 vcl2 } varnish v1 -vcl { backend default none; sub vcl_recv { if (req.url == "/label") { return (vcl(lbl_vcl2)); } return (synth(200, "vcl3")); } } client c1 { txreq rxresp expect resp.status == 200 expect resp.reason == vcl3 txreq -url /label rxresp expect resp.status == 200 expect resp.reason == vcl2 } -run varnish v1 -cliok { vcl.discard vcl1 } varnish-7.5.0/bin/varnishtest/tests/r03794.vtc000066400000000000000000000006541457605730600210710ustar00rootroot00000000000000varnishtest "Append configurable Via header" server s1 { rxreq expect req.http.via == \ "1.1 v2 (Varnish/${pkg_branch}), 1.1 v1 (Varnish/${pkg_branch})" txresp } -start varnish v1 -vcl+backend "" -start varnish v2 -vcl { backend v1 { .host = "${v1_sock}"; } } -start client c1 -connect ${v2_sock} { txreq rxresp expect resp.http.via == \ "1.1 v1 (Varnish/${pkg_branch}), 1.1 v2 (Varnish/${pkg_branch})" } -run varnish-7.5.0/bin/varnishtest/tests/r03830.vtc000066400000000000000000000005651457605730600210610ustar00rootroot00000000000000varnishtest "3830: Do not call http_hdr_flags() on pseudo-headers" server s1 { rxreq txresp -reason ":x" rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_recv { return (hash); } } -start client c1 { txreq rxresp expect resp.status == 200 } -run client c2 { txreq -url :x -method :x rxresp expect resp.status == 200 } -run varnish v1 -vsl_catchup 
varnish-7.5.0/bin/varnishtest/tests/r03856.vtc000066400000000000000000000024541457605730600210700ustar00rootroot00000000000000varnishtest "Regression test off-by-one in VSLbs" # vsl_buffer=257 bytes - 2 bytes header -> 255 bytes varnish v1 -arg "-p vsl_buffer=267" -vcl { import debug; backend b None; sub vcl_recv { # Assert error in VSLbs(), cache/cache_shmlog.c line 385: # Condition(vsl->wlp < vsl->wle) not true. debug.vsl_flush(); set req.http.a = # 255 = "a: " + 8 * 32 - 4 "0123456789abcdef0123456789abcdef" + "0123456789abcdef0123456789abcdef" + "0123456789abcdef0123456789abcdef" + "0123456789abcdef0123456789abcdef" + "0123456789abcdef0123456789abcdef" + "0123456789abcdef0123456789abcdef" + "0123456789abcdef0123456789abcdef" + "0123456789abcdef0123456789ab"; debug.return_strands("xyz"); # Assert error in VSLbs(), cache/cache_shmlog.c line 390: # Condition(VSL_END(vsl->wlp, l) < vsl->wle) not true. debug.vsl_flush(); debug.return_strands( # 255 = 8 * 32 - 1 "0123456789abcdef0123456789abcdef" + "0123456789abcdef0123456789abcdef" + "0123456789abcdef0123456789abcdef" + "0123456789abcdef0123456789abcdef" + "0123456789abcdef0123456789abcdef" + "0123456789abcdef0123456789abcdef" + "0123456789abcdef0123456789abcdef" + "0123456789abcdef0123456789abcde"); return (synth(200)); } } -start client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/r03865.vtc000066400000000000000000000037551457605730600210750ustar00rootroot00000000000000varnishtest "ESI onerror" server s1 { rxreq expect req.url == "/abort" txresp -hdr {surrogate-control: content="ESI/1.0"} \ -body {before after} rxreq expect req.url == "/abort0" txresp -hdr {surrogate-control: content="ESI/1.0"} \ -body {before after} } -start varnish v1 -cliok "param.set feature +esi_disable_xml_check" varnish v1 -cliok "param.set feature +esi_include_onerror" varnish v1 -vcl+backend { sub vcl_backend_fetch { if (bereq.url == "/fail") { return (error(604)); } if (bereq.url == "/fail0") { return (error(605)); } } sub vcl_backend_response { set beresp.do_esi = beresp.http.surrogate-control ~ "ESI/1.0"; unset beresp.http.surrogate-control; } sub vcl_backend_error { if (beresp.status == 604) { set beresp.body = "FOOBAR"; return(deliver); } if (beresp.status == 605) { set beresp.body = ""; return(deliver); } } } -start client c1 { txreq -url "/abort" non_fatal rxresphdrs expect resp.status == 200 rxchunk rxchunk expect_close expect resp.body == "before " } -run client c1 { # #4070 txreq -url "/abort0" non_fatal rxresphdrs expect resp.status == 200 rxchunk rxchunk expect_close expect resp.body == "before " } -run varnish v1 -cliok "param.set max_esi_depth 0" client c1 { txreq -url "/abort" non_fatal rxresphdrs expect resp.status == 200 rxchunk rxchunk expect_close expect resp.body == "before " } -run varnish v1 -cliok "param.set max_esi_depth 1" varnish v1 -vsl_catchup server s1 -wait server s1 { rxreq expect req.url == "/continue" txresp -hdr {surrogate-control: content="ESI/1.0"} \ -body {before after} } -start client c1 { fatal txreq -url "/continue" rxresp expect resp.body == "before after" } -run varnish v1 -cliok "param.set max_esi_depth 0" client c1 { fatal txreq -url "/continue" rxresp expect resp.body == "before after" } -run varnish-7.5.0/bin/varnishtest/tests/r03895.vtc000066400000000000000000000013111457605730600210620ustar00rootroot00000000000000 varnishtest "looped backends" server s1 { } -start server s2 { } -start server s3 { } -start server s4 { } -start varnish v1 -vcl+backend { import directors; import 
std; sub vcl_init { new rr = directors.round_robin(); rr.add_backend(s1); rr.add_backend(s2); rr.add_backend(s3); rr.add_backend(s4); new rr2 = directors.round_robin(); rr2.add_backend(rr.backend()); rr.add_backend(rr2.backend()); } } -start varnish v1 -vcl+backend { import directors; import std; sub vcl_init { new rr2 = directors.round_robin(); rr2.add_backend(s1); rr2.add_backend(s2); rr2.add_backend(s3); rr2.add_backend(s4); } } varnish v1 -cliok "vcl.discard vcl1" varnish v1 -cliok "vcl.list" varnish-7.5.0/bin/varnishtest/tests/r03940.vtc000066400000000000000000000032221457605730600210540ustar00rootroot00000000000000varnishtest "test startup_timeout vs. stevedore init / open" # we test with vtc_varnish and vtc_process because of different code # paths in mgr for implicit start vs. cli start #### # startup_timeout used, delay in stevedore init varnish v1 -arg "-sdebug=debug,dinit=5s -pstartup_timeout=3s -pcli_timeout=2s" \ -arg "-p feature=+no_coredump" \ -vcl "backend none none;" \ -expectexit 0x40 varnish v1 -cliexpect \ "Child failed on launch within startup_timeout=3.00s" \ "start" # v1 registers a panic on some systems, but not others shell {varnishadm -n ${v1_name} panic.clear || true } varnish v1 -wait process p1 { varnishd \ -sdebug=debug,dinit=5s \ -pstartup_timeout=3s -pcli_timeout=2s \ -n ${tmpdir}/p1 -a :0 -b none 2>&1 } -expect-exit 0x2 -run # expect-text does not work if a panic info pushes the # error out of the emulated terminal's view. shell {grep -q "Child failed on launch within startup_timeout=3.00s" ${p1_out}} #### # cli_timeout used, delay in stevedore open varnish v2 -arg "-sdebug=debug,dopen=5s -pstartup_timeout=2s -pcli_timeout=3s" \ -arg "-p feature=+no_coredump" \ -vcl "backend none none;" \ -expectexit 0x40 varnish v2 -cliexpect \ "launch within cli_timeout=3.00s .tip: set startup_" \ "start" # "time for the big quit" varnish v2 -cliok "panic.clear" varnish v2 -wait process p2 { varnishd \ -sdebug=debug,dopen=5s \ -pstartup_timeout=2s -pcli_timeout=3s \ -n ${tmpdir}/p2 -a :0 -b none } -expect-exit 0x2 -run # XXX not reliably the only failure mode #shell {grep -q "time for the big quit" ${p2_err}} # see explanation of previous shell grep shell {grep -q "launch within cli_timeout=3.00s (tip: set startup_" ${p2_err}} varnish-7.5.0/bin/varnishtest/tests/r03960.vtc000066400000000000000000000002401457605730600210530ustar00rootroot00000000000000varnishtest "invalid header name on RHS" varnish v1 -errvcl "Expected ID got '0'" { vcl 4.1; backend default none; sub vcl_recv { if (req.http.0) {} } } varnish-7.5.0/bin/varnishtest/tests/r03962.vtc000066400000000000000000000013051457605730600210600ustar00rootroot00000000000000varnishtest "ban expression object name prefixes" server s1 {} -start varnish v1 -vcl+backend {} -start varnish v1 -cliexpect {Unknown or unsupported field "req.urlXX"} "ban req.urlXX ~ foobarbazzz" varnish v1 -cliexpect {Unknown or unsupported field "obj.ageYY"} "ban obj.ageYY < 1d" varnish v1 -cliexpect {Unknown or unsupported field "req.ur"} "ban req.ur ~ foobarbazzz" varnish v1 -cliexpect {Unknown or unsupported field "req.htt"} "ban req.htt ~ foobarbazzz" varnish v1 -cliexpect {Unknown or unsupported field "req.htt.XXYY"} "ban req.htt.XXYY ~ foobarbazzz" varnish v1 -cliexpect {Missing header name: "obj.http."} "ban obj.http. 
~ foobarbazzz" varnish v1 -cliok "ban req.http.XXYY ~ foobarbazzz" varnish-7.5.0/bin/varnishtest/tests/r03984.vtc000066400000000000000000000017251457605730600210720ustar00rootroot00000000000000varnishtest "Access protected headers" varnish v1 -vcl { backend be none; sub access_req { if (req.http.content-length || req.http.transfer-encoding) {} } sub access_resp { if (resp.http.content-length || resp.http.transfer-encoding) {} } sub access_bereq { if (bereq.http.content-length || bereq.http.transfer-encoding) {} } sub access_beresp { if (beresp.http.content-length || beresp.http.transfer-encoding) {} } sub vcl_recv { call access_req; } sub vcl_hash { call access_req; } sub vcl_purge { call access_req; } sub vcl_miss { call access_req; } sub vcl_pass { call access_req; } sub vcl_hit { call access_req; } sub vcl_synth { call access_req; call access_resp; } sub vcl_deliver { call access_req; call access_resp; } sub vcl_backend_fetch { call access_bereq; } sub vcl_backend_error { call access_bereq; call access_beresp; } sub vcl_backend_response { call access_bereq; call access_beresp; } } varnish-7.5.0/bin/varnishtest/tests/r03996.vtc000066400000000000000000000021441457605730600210710ustar00rootroot00000000000000varnishtest "h2 rapid reset" barrier b1 sock 2 -cyclic barrier b2 sock 5 -cyclic server s1 { rxreq txresp } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set h2_rapid_reset_limit 3" varnish v1 -cliok "param.set h2_rapid_reset 5" varnish v1 -vcl+backend { import vtc; sub vcl_recv { vtc.sleep(0.5s); if (req.http.barrier) { vtc.barrier_sync(req.http.barrier); } vtc.barrier_sync("${b2_sock}"); } } -start varnish v1 -vsl_catchup client c1 { stream 0 { rxgoaway expect goaway.err == ENHANCE_YOUR_CALM } -start loop 4 { stream next { txreq -hdr barrier ${b1_sock} barrier b1 sync txrst } -run } barrier b2 sync stream 0 -wait } -run varnish v1 -vsl_catchup varnish v1 -expect sc_rapid_reset == 1 varnish v1 -cliok "param.set feature -vcl_req_reset" client c2 { stream 0 { rxgoaway expect goaway.err == ENHANCE_YOUR_CALM } -start loop 4 { stream next { txreq txrst } -run } barrier b2 sync stream 0 -wait } -run varnish v1 -vsl_catchup varnish v1 -expect sc_rapid_reset == 2 varnish-7.5.0/bin/varnishtest/tests/r04036.vtc000066400000000000000000000004441457605730600210540ustar00rootroot00000000000000varnishtest "Undefined storage properties" varnish v1 -arg "-s malloc=malloc -s file=file,${tmpdir}/file,2M" -vcl { backend be none; sub vcl_recv { set req.http.happy = storage.malloc.happy; set req.http.space = storage.file.free_space; } } -start client c1 { txreq rxresp } -run varnish-7.5.0/bin/varnishtest/tests/r04053.vtc000066400000000000000000000016761457605730600210630ustar00rootroot00000000000000varnishtest "Override ESI status check for onerror=abort" server s1 { rxreq expect req.http.esi-level == 0 txresp -body {before after} rxreq expect req.http.esi-level == 1 txresp -status 500 -hdr "transfer-encoding: chunked" delay 0.1 chunked 500 chunkedlen 0 } -start varnish v1 -cliok "param.set feature +esi_disable_xml_check" varnish v1 -cliok "param.set feature +esi_include_onerror" varnish v1 -vcl+backend { sub vcl_recv { set req.http.esi-level = req.esi_level; } sub vcl_backend_response { set beresp.do_esi = bereq.http.esi-level == "0"; } sub vcl_deliver { if (req.esi_level > 0 && resp.status != 200) { set resp.status = 200; } } } -start client c1 { txreq rxresp expect resp.body == "before 500 after" } -run 
varnish-7.5.0/bin/varnishtest/tests/s00000.vtc000066400000000000000000000012051457605730600210340ustar00rootroot00000000000000varnishtest "Simple expiry test" server s1 { rxreq expect req.url == "/" txresp -hdr "Cache-control: max-age = 1" -body "1111\n" delay 3 rxreq expect req.url == "/" txresp -hdr "Cache-control: max-age = 1" -body "22222\n" } -start varnish v1 -vcl+backend { } -start varnish v1 -cliok "param.set default_grace 0" varnish v1 -cliok "param.set default_keep 0" client c1 { txreq -url "/" rxresp expect resp.bodylen == 5 expect resp.http.x-varnish == "1001" expect resp.status == 200 } -run delay 3 client c2 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.x-varnish == "1004" expect resp.bodylen == 6 } -run varnish-7.5.0/bin/varnishtest/tests/s00001.vtc000066400000000000000000000013111457605730600210330ustar00rootroot00000000000000varnishtest "Simple expiry test (fully reaped object)" barrier b1 cond 2 server s1 { rxreq expect req.url == "/" txresp -hdr "Cache-control: max-age = 1" -body "1111\n" barrier b1 sync rxreq expect req.url == "/" txresp -hdr "Cache-control: max-age = 1" -body "22222\n" } -start varnish v1 -vcl+backend { } -start varnish v1 -cliok "param.set default_keep 0" varnish v1 -cliok "param.set default_grace 0" client c1 { txreq -url "/" rxresp expect resp.bodylen == 5 expect resp.http.x-varnish == "1001" expect resp.status == 200 } -run barrier b1 sync delay 1.1 client c2 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.x-varnish == "1004" expect resp.bodylen == 6 } -run varnish-7.5.0/bin/varnishtest/tests/s00002.vtc000066400000000000000000000022641457605730600210440ustar00rootroot00000000000000varnishtest "Check grace with sick backends" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq expect req.url == "/" txresp -proto HTTP/1.0 -hdr "nbr: 1" -body "hi" accept rxreq expect req.url == "/" txresp -proto HTTP/1.0 -hdr "nbr: 2" -body "hi" barrier b1 sync accept rxreq expect req.url == "/" txresp -proto HTTP/1.0 -hdr "nbr: 3" -hdr "foo: bar" -body "hi" accept rxreq expect req.url == "/" txresp -proto HTTP/1.0 -status 400 -hdr "nbr: 4" -body "hi" accept accept rxreq expect req.url == "/" txresp -proto HTTP/1.0 -status 400 -hdr "nbr: 5" -body "hi" accept barrier b2 sync } -start varnish v1 -vcl { backend b { .host = "${s1_addr}"; .port = "${s1_port}"; .probe = { .url = "/"; .timeout = 30ms; .interval = 1s; .window = 2; .threshold = 1; .initial = 0; } } sub vcl_backend_response { set beresp.ttl = 1s; set beresp.grace = 1m; } } -start barrier b1 sync client c1 { txreq -url "/" rxresp expect resp.http.foo == "bar" expect resp.status == 200 } -run barrier b2 sync client c2 { txreq -url "/" rxresp expect resp.http.foo == "bar" expect resp.status == 200 expect resp.http.x-varnish == "1004 1002" } -run varnish-7.5.0/bin/varnishtest/tests/s00003.vtc000066400000000000000000000024211457605730600210400ustar00rootroot00000000000000varnishtest "Coverage test for -sfile" server s1 { rxreq txresp -nolen -hdr "Transfer-encoding: chunked" chunkedlen 65536 chunkedlen 65536 chunkedlen 65536 chunkedlen 65536 chunkedlen 1 chunkedlen 0 rxreq txresp -nolen -hdr "Transfer-encoding: chunked" chunkedlen 262 chunkedlen 0 } -start varnish v1 \ -arg "-sTransient=file,${tmpdir}/_.file,10m" \ -arg "-sdir=file,${tmpdir}/,10m" \ -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; set beresp.ttl = 0.1s; set beresp.grace = 0.1s; set beresp.keep = 0.1s; } } \ -start client c1 { txreq rxresp expect resp.bodylen == 262145 delay 2 txreq 
rxresp expect resp.bodylen == 262 } -run varnish v1 -vsl_catchup varnish v1 -cliok "ban obj.http.date ~ ." process p1 { varnishd -sTransient=file,${tmpdir}/foo,xxx -blocalhost -a:0 -n ${tmpdir} 2>&1 } -expect-exit 0x2 -dump -start -expect-text 0 0 "Invalid number" -wait -screen_dump process p1 { varnishd -sTransient=file,${tmpdir}/foo,10M,xxx -blocalhost -a:0 -n ${tmpdir} 2>&1 } -expect-exit 0x2 -dump -start -expect-text 0 0 "granularity" -wait -screen_dump process p1 { varnishd -sTransient=file,${tmpdir}/foo,10m,,foo -blocalhost -a:0 -n ${tmpdir} 2>&1 } -expect-exit 0x2 -dump -start -expect-text 0 0 "invalid advice" -wait varnish-7.5.0/bin/varnishtest/tests/s00004.vtc000066400000000000000000000030621457605730600210430ustar00rootroot00000000000000varnishtest "Timestamps" server s1 { rxreq expect req.url == "/1" delay 1 txresp -nolen -hdr "Transfer-Encoding: chunked" delay 1 chunkedlen 1000 chunkedlen 0 rxreq expect req.url == "/2" txresp } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_stream = false; } sub vcl_deliver { if (req.url == "/1" && req.restarts == 0) { return (restart); } } } -start logexpect l1 -v v1 -g request { expect 0 1001 Begin req expect * = Timestamp {Start: \S+ 0\.000000 0\.000000} expect * = Timestamp {Req: \S+ 0\.\d+ 0\.\d+} expect * = Timestamp {Fetch: \S+ [0-4]\.\d+ [0-4]\.\d+} expect * = Timestamp {Restart: \S+ 2\.\d+ 0\.\d+} expect * = End expect 0 1002 Begin bereq expect * = Timestamp {Start: \S+ 0\.000000 0\.000000} expect * = Timestamp {Bereq: \S+ 0\.\d+ 0\.\d+} expect * = Timestamp {Beresp: \S+ 1\.\d+ [01]\.\d+} expect * = Timestamp {BerespBody: \S+ 2\.\d+ (1\.\d+|0\.9)} expect * = End expect 0 1003 Begin {req 1001 restart} expect * = Timestamp {Start: \S+ 2\.\d+ 0\.\d+} expect * = Timestamp {Process: \S+ 2\.\d+ 0\.\d+} expect * = Timestamp {Resp: \S+ 2\.\d+ 0\.\d+} expect * = End expect 0 1004 Begin req expect * = Timestamp {Start: \S+ 0\.000000 0\.000000} expect * = Timestamp {Req: \S+ 0\.\d+ 0\.\d+} expect * = Timestamp {ReqBody: \S+ 0\.\d+ 0\.\d+} expect * = Timestamp {Fetch: \S+ 0\.\d+ 0\.\d+} expect * = Timestamp {Resp: \S+ 0\.\d+ 0\.\d+} expect * = End } -start client c1 { txreq -url "/1" rxresp delay 1 txreq -req "POST" -url "/2" -body "asdf" rxresp } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/s00005.vtc000066400000000000000000000065531457605730600210540ustar00rootroot00000000000000varnishtest "Test VCL labels" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} # VCL name must be C-names varnish v1 -clierr 106 {vcl.inline 0000 "vcl 4.0; backend b { .host = \"localhost\";} "} varnish v1 -clierr 106 {vcl.inline a00/ "vcl 4.0; backend b { .host = \"localhost\";} "} varnish v1 -clierr 106 {vcl.inline a00Ã¥ "vcl 4.0; backend b { .host = \"localhost\";} "} varnish v1 -vcl+backend { sub vcl_recv { return (synth(400)); } } varnish v1 -start client c1 { txreq rxresp expect resp.status == 400 } -run varnish v1 -cliok "vcl.use vcl1" client c1 { txreq rxresp expect resp.status == 200 } -run varnish v1 -cliok "vcl.list" varnish v1 -clierr 106 "vcl.label foo vcl0" varnish v1 -cliok "vcl.label foo vcl2" varnish v1 -cliok "vcl.label bar vcl2" varnish v1 -cliok "vcl.list" varnish v1 -clijson "vcl.list -j" varnish v1 -cliok "vcl.show foo" varnish v1 -cliok "vcl.show -v bar" varnish v1 -clierr 300 "vcl.discard vcl2" varnish v1 -cliok "vcl.discard bar" varnish v1 -cliok "vcl.label foo vcl1" varnish v1 -clierr 106 "vcl.label vcl1 vcl2" varnish v1 -clierr 106 "vcl.state foo cold" varnish v1 -clierr 300 "vcl.label bar 
foo" varnish v1 -clierr 300 "vcl.discard vcl1" varnish v1 -cliok "vcl.list" varnish v1 -cliok "vcl.use foo" varnish v1 -clierr 300 "vcl.discard foo" varnish v1 -cliok "vcl.list" client c1 -run varnish v1 -cliok "vcl.label foo vcl2" client c1 { txreq rxresp expect resp.status == 400 } -run varnish v1 -cliok "vcl.use vcl1" varnish v1 -cliok "vcl.list" client c1 { txreq rxresp expect resp.status == 200 } -run varnish v1 -cliok "vcl.discard foo" varnish v1 -clierr 106 "vcl.discard foo" varnish v1 -stop varnish v1 -cliok "vcl.list" varnish v1 -clierr 106 "vcl.label fo- vcl0" varnish v1 -cliok "vcl.label fo- vcl1" varnish v1 -clierr 300 "vcl.label bar fo-" varnish v1 -clierr 200 "vcl.state vcl1 warm" varnish v1 -clierr 200 "vcl.state vcl1 auto" varnish v1 -clierr 300 "vcl.state vcl1 cold" varnish v1 -clierr 300 "vcl.discard vcl1" varnish v1 -cliok "vcl.list" varnish v1 -cliok "vcl.use fo-" varnish v1 -clierr 300 "vcl.discard fo-" varnish v1 -cliok "vcl.list" server s1 -start varnish v1 -start client c1 -run varnish v1 -stop varnish v1 -cliok "vcl.use vcl1" varnish v1 -cliok "vcl.discard fo-" varnish v1 -clierr 106 "vcl.discard fo-" varnish v1 -cliok "vcl.label label1 vcl1" varnish v1 -cliok "vcl.label label2 vcl1" varnish v1 -cliok "vcl.label label3 vcl1" varnish v1 -cliok "vcl.list" varnish v1 -clijson "vcl.list -j" varnish v1 -start varnish v1 -cliok "vcl.list" varnish v1 -cliok "vcl.label slartibartfast vcl1" server s1 -start client c1 -run # Test loop detection ####################################################################### varnish v1 -cliok vcl.list varnish v1 -clijson "vcl.list -j" varnish v1 -vcl+backend { } varnish v1 -cliok "vcl.label lblA vcl3" varnish v1 -vcl+backend { sub vcl_recv { return (vcl(lblA)); } } varnish v1 -cliok "vcl.label lblB vcl4" varnish v1 -vcl+backend { sub vcl_recv { return (vcl(lblB)); } } varnish v1 -clierr 106 "vcl.label lblA vcl5" varnish v1 -cliexpect \ {would create a loop} \ {vcl.label lblA vcl5} # omitting ^vcl1 line to only get aligned fields varnish v1 -cliexpect { vcl2 label1 vcl1 label2 vcl1 label3 vcl1 slartibartfast vcl1 vcl3 lblA vcl3 vcl4 lblA lblB vcl4 vcl5 lblB } vcl.deps varnish-7.5.0/bin/varnishtest/tests/s00006.vtc000066400000000000000000000007411457605730600210460ustar00rootroot00000000000000varnishtest "Check that Age is always less than max-age while not stale" server s1 { rxreq expect req.url == "/" txresp -hdr "Cache-control: max-age=2" } -start varnish v1 -vcl+backend { } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 expect resp.http.Age == 0 delay 0.8 txreq -url "/" rxresp expect resp.status == 200 expect resp.http.Age == 0 delay 0.3 txreq -url "/" rxresp expect resp.status == 200 expect resp.http.Age <= 1 } -run varnish-7.5.0/bin/varnishtest/tests/s00007.vtc000066400000000000000000000016601457605730600210500ustar00rootroot00000000000000varnishtest "Effective TTL for for a slow backend" server s1 { rxreq delay 2 txresp -body "foo" # The second request is never used, but is here to give a # better error if varnish decides to fetch the object the # second time rxreq txresp -body "bar" } -start varnish v1 -arg "-p default_ttl=3 -p default_grace=0" -vcl+backend { sub vcl_backend_response { set beresp.http.X-ttl = beresp.ttl; } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.body == "foo" expect resp.http.x-ttl <= 3 expect resp.http.x-ttl >= 2 delay 2 # It is now 2 seconds since the first response was received # from the backend, but 4 seconds since the first request was # sent to the 
backend. Timeout is 3 seconds, and here we # consider the object _not_ expired, and thus do not want a # refetch. txreq rxresp expect resp.status == 200 expect resp.body == "foo" } -run varnish v1 -expect beresp_shortlived == 1 varnish-7.5.0/bin/varnishtest/tests/s00008.vtc000066400000000000000000000007431457605730600210520ustar00rootroot00000000000000varnishtest "setting ttl in vcl_backend_response for slow backends" server s1 { rxreq delay 2 txresp -body "foo" # The second request is never used rxreq txresp -body "bar" } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.ttl = 10s; } sub vcl_deliver { set resp.http.X-ttl = obj.ttl; } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.body == "foo" expect resp.http.X-ttl <= 10 expect resp.http.X-ttl >= 9 } -run varnish-7.5.0/bin/varnishtest/tests/s00009.vtc000066400000000000000000000014451457605730600210530ustar00rootroot00000000000000varnishtest "Correct obj.ttl is when backend and processing are slow" barrier b1 sock 2 barrier b2 sock 2 server s1 { rxreq delay 2 txresp -body "foo" # The second request is never used rxreq txresp -body "bar" } -start varnish v1 -vcl+backend { import vtc; sub vcl_backend_response { # Simulate processing for 1.5 sec vtc.barrier_sync("${b1_sock}"); vtc.barrier_sync("${b2_sock}"); # Moving this above the processing should not change # anything. set beresp.ttl = 10s; } sub vcl_deliver { set resp.http.X-ttl = obj.ttl; } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.body == "foo" expect resp.http.X-ttl <= 9 expect resp.http.X-ttl >= 8 } -start # help for sleeping inside of vcl barrier b1 sync delay 1.5 barrier b2 sync client c1 -wait varnish-7.5.0/bin/varnishtest/tests/s00010.vtc000066400000000000000000000054141457605730600210430ustar00rootroot00000000000000varnishtest "client h1 send timeouts - tcp" # XXX See https://github.com/varnishcache/varnish-cache/pull/2980#issuecomment-486214661 feature cmd {test $(uname) != "SunOS" && test $(uname) != "Darwin"} barrier b1 cond 2 -cyclic barrier b2 cond 2 -cyclic server s0 { fatal rxreq txresp -nolen -hdr "Transfer-encoding: chunked" chunkedlen 100000 # make sure varnish is stuck in delivery barrier b1 sync non_fatal chunkedlen 0 } -dispatch varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set thread_pools 1" varnish v1 -cliok "param.set timeout_idle 1" varnish v1 -cliok "param.set idle_send_timeout .1" varnish v1 -cliok "param.set send_timeout .1" varnish v1 -vcl+backend { import std; import debug; sub vcl_recv { if (req.http.send_timeout) { set sess.send_timeout = std.duration(req.http.send_timeout); } if (req.http.idle_send_timeout) { set sess.idle_send_timeout = std.duration(req.http.idle_send_timeout); } return (pass); } sub vcl_deliver { debug.sndbuf(256b); } } -start # case 1: send_timeout parameter logexpect l1 -v v1 -g raw { expect * 1001 Debug "Hit total send timeout" expect * 1000 SessClose TX_ERROR } -start client c1 -rcvbuf 256 { txreq rxresphdrs # varnish is stuck sending the chunk barrier b1 sync # wait for the timeout to kick in barrier b2 sync non_fatal rxrespbody expect_close } -start logexpect l1 -wait barrier b2 sync client c1 -wait # case 2: send_timeout overridden in VCL varnish v1 -cliok "param.reset send_timeout" logexpect l2 -v v1 -g raw { expect * 1004 Debug "Hit total send timeout" expect * 1003 SessClose TX_ERROR } -start client c2 -rcvbuf 256 { txreq -hdr "send_timeout: 100ms" rxresphdrs # varnish is stuck sending the chunk barrier b1 sync # wait for the 
timeout to kick in barrier b2 sync # expect the transaction to be interrupted non_fatal rxrespbody expect_close } -start logexpect l2 -wait barrier b2 sync client c2 -wait # case 3: idle_send_timeout parameter logexpect l3 -v v1 -g raw { expect * 1007 Debug "Hit idle send timeout" } -start client c3 -rcvbuf 256 { txreq rxresphdrs # varnish is stuck sending the chunk barrier b1 sync # wait for the timeout to kick in barrier b2 sync # don't wait for the transaction to complete } -start logexpect l3 -wait barrier b2 sync client c3 -wait # case 4: idle_send_timeout overridden in VCL varnish v1 -cliok "param.reset idle_send_timeout" logexpect l4 -v v1 -g raw { expect * 1010 Debug "Hit idle send timeout" } -start client c4 -rcvbuf 256 { txreq -hdr "idle_send_timeout: 100ms" rxresphdrs # varnish is stuck sending the chunk barrier b1 sync # wait for the timeout to kick in barrier b2 sync # don't wait for the transaction to complete } -start logexpect l4 -wait barrier b2 sync client c4 -wait varnish-7.5.0/bin/varnishtest/tests/s00011.vtc000066400000000000000000000007151457605730600210430ustar00rootroot00000000000000varnishtest "backend send timeouts" server s1 { non_fatal rxreq } -start varnish v1 -vcl+backend "" -start # Despite shared code, these parameters are only for client responses. # See 43bbe97b38a86d706c0d9f4838bbfbd305342775. varnish v1 -cliok "param.set idle_send_timeout 1" varnish v1 -cliok "param.set send_timeout 1" client c1 { txreq -method POST -hdr "Content-Length: 10" send hello delay 2 send world rxresp expect resp.status == 503 } -run varnish-7.5.0/bin/varnishtest/tests/s00012.vtc000066400000000000000000000031461457605730600210450ustar00rootroot00000000000000varnishtest "client h1 send timeouts - uds" feature cmd {test $(uname) != "SunOS"} server s1 { rxreq txresp -bodylen 100000 } -start varnish v1 \ -arg "-p timeout_idle=1" \ -arg "-p idle_send_timeout=.1" \ -arg "-p send_timeout=.1" \ -arg "-a ${tmpdir}/v1.sock" \ -vcl+backend { import debug; sub vcl_recv { if (req.url == "/longsend") { # client -rcvbuf 128 is super inefficient, so # we need a very long timeout set sess.send_timeout = 20s; } else if (req.url == "/longidlesend") { set sess.idle_send_timeout = 2s; } } sub vcl_hash { hash_data("/"); return (lookup); } sub vcl_deliver { debug.sndbuf(128b); } } -start logexpect l1 -v v1 -q "ReqURL ~ \"^/$\"" { expect * * Debug "Hit total send timeout" } -start client c1 -connect "${tmpdir}/v1.sock" -rcvbuf 128 { txreq non_fatal rxresphdrs # keep the session open for 2 seconds delay 2 } -start client c2 -connect "${tmpdir}/v1.sock" -rcvbuf 128 { txreq -url /longsend rxresphdrs delay 0.8 rxrespbody expect resp.bodylen == 100000 } -start client c1 -wait client c2 -wait varnish v1 -cliok "param.set idle_send_timeout 1" varnish v1 -cliok "param.reset send_timeout" logexpect l2 -v v1 -q "ReqURL ~ \"^/$\"" { expect * * Debug "Hit idle send timeout" } -start client c3 -connect "${tmpdir}/v1.sock" -rcvbuf 128 { txreq rxresphdrs # keep the session open for 2 seconds delay 2 } -start client c4 -connect "${tmpdir}/v1.sock" -rcvbuf 128 { txreq -url /longidlesend rxresphdrs delay 1.8 rxrespbody expect resp.bodylen == 100000 } -start client c3 -wait client c4 -wait logexpect l1 -wait logexpect l2 -wait varnish-7.5.0/bin/varnishtest/tests/s00013.vtc000066400000000000000000000030011457605730600210340ustar00rootroot00000000000000varnishtest "pipe timeouts" server s1 { rxreq txresp -hdr "transfer-encoding: chunked" delay 1.1 close loop 3 { accept rxreq txresp -hdr "transfer-encoding: chunked" 
expect_close } accept non_fatal rxreq txresp -hdr "transfer-encoding: chunked" loop 20 { chunkedlen 1 delay 0.1 } } -start varnish v1 -cliok "param.set pipe_timeout 0s" varnish v1 -cliok "param.set pipe_task_deadline 0s" varnish v1 -vcl+backend { sub vcl_pipe { set bereq.task_deadline = 1.1s; if (req.method != "TMO") { unset bereq.task_deadline; } } } -start logexpect l1 -v v1 -g raw -q SessClose { expect 1000 * SessClose {^TX_PIPE 1\.} expect 1003 * SessClose {^RX_TIMEOUT 0\.} expect 1006 * SessClose {^RX_TIMEOUT 1\.} expect 1009 * SessClose {^RX_TIMEOUT 1\.} expect 1012 * SessClose {^RX_TIMEOUT 1\.} } -start client c1 { non_fatal txreq -method PIPE rxresp } -run varnish v1 -cliok "param.set pipe_timeout 500ms" varnish v1 -cliok "param.set pipe_task_deadline 0s" client c1 -run varnish v1 -cliok "param.set pipe_timeout 0s" varnish v1 -cliok "param.set pipe_task_deadline 1.1s" client c1 -run varnish v1 -cliok "param.set pipe_timeout 0s" varnish v1 -cliok "param.set pipe_task_deadline 0s" client c2 { non_fatal txreq -method TMO rxresp } -run varnish v1 -cliok "param.set pipe_timeout 500ms" varnish v1 -cliok "param.set pipe_task_deadline 1.1s" client c1 -run logexpect l1 -wait varnish v1 -expect MAIN.s_pipe == 5 varnish v1 -expect MAIN.sc_tx_pipe == 1 varnish v1 -expect MAIN.sc_rx_timeout == 4 varnish-7.5.0/bin/varnishtest/tests/t02000.vtc000066400000000000000000000052611457605730600210450ustar00rootroot00000000000000varnishtest "Direct H2 start" server s1 { rxreq expect req.http.host == foo.bar txresp \ -hdr "H234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789I: foo" \ -hdr "Foo: H234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789I" \ -bodylen 10 rxreq txresp rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_recv { return (pipe); } } -start varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set vsl_mask +H2RxHdr,+H2RxBody,+H2TxHdr,+H2TxBody" process p1 {exec varnishlog -n ${v1_name} -g raw -w ${tmpdir}/vlog -A} -start shell {while ! 
test -s ${tmpdir}/vlog ; do sleep 1 ; done} client c1 { txpri expect_close } -run varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.reset h2_initial_window_size" client c1 { stream 1 { txprio -weight 10 -stream 0 } -run stream 3 { txprio -weight 10 -stream 0 } -run stream 5 { txprio -weight 10 -stream 2 } -run stream 7 { txreq -dep 3 -hdr :authority foo.bar -pad cotton rxresp expect resp.status == 200 delay 1 txrst -err 0x1111 } -start stream 0 { txping -data "_-__-_-_" rxping expect ping.ack == "true" expect ping.data == "_-__-_-_" } -run stream 7 -wait } -run varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 process p1 -stop # shell {cat ${tmpdir}/vlog} # SETTINGS with default initial window size shell -match {1001 H2TxHdr c \[000006040000000000\]} { cat ${tmpdir}/vlog } # While we're here, test sess.xid over H2 as well varnish v1 -syntax 4.1 -vcl+backend { sub vcl_backend_response { set beresp.http.B-Sess-XID = sess.xid; } sub vcl_deliver { set resp.http.C-Sess-XID = sess.xid; set resp.http.xport = req.transport; } } client c1 { stream 7 { txreq -url "/uncached" rxresp expect resp.status == 200 expect resp.http.C-Sess-XID ~ "^[0-9]+$" expect resp.http.B-Sess-XID ~ "^[0-9]+$" expect resp.http.C-Sess-XID == resp.http.B-Sess-XID expect resp.http.xport == HTTP/2 } -run stream 9 { txreq -url "/still_not_cached" rxresp expect resp.status == 200 expect resp.http.C-Sess-XID ~ "^[0-9]+$" expect resp.http.B-Sess-XID ~ "^[0-9]+$" expect resp.http.C-Sess-XID == resp.http.B-Sess-XID } -run } -run varnish-7.5.0/bin/varnishtest/tests/t02001.vtc000066400000000000000000000067011457605730600210460ustar00rootroot00000000000000varnishtest "H1->H2 Upgrade" barrier b1 cond 2 server s1 { rxreq expect req.url == /noupgrade expect req.http.host == foo.bar txresp -status 400 -bodylen 10 } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok "param.set feature -http2" varnish v1 -cliok "param.set debug +syncvsl" client c1 { send "GET /noupgrade HTTP/1.1\r\n" send "Host: foo.bar\r\n" send "Upgrade: h2c\r\n" send "HTTP2-Settings: AAMAAABkAAQAAP__\r\n" send "\r\n" rxresp expect resp.status == 400 expect resp.bodylen == 10 } -run server s1 { rxreq expect req.url == /upgrade1 expect req.http.host == foo.bar expect req.bodylen == 4 txresp -status 401 -bodylen 8 rxreq expect req.url == /upgrade2 expect req.http.host == foo.bar barrier b1 sync txresp -status 402 -bodylen 11 } -start varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 varnish v1 -cliok "param.set feature +http2" # We don't support upgrades with body client c1 { send "POST /upgrade1 HTTP/1.1\r\n" send "Host: foo.bar\r\n" send "Upgrade: h2c\r\n" send "HTTP2-Settings: AAMAAABkAAQAAP__\r\n" send "Content-Length: 4\r\n" send "\r\n" send "FOO\n" rxresp expect resp.status == 401 expect resp.bodylen == 8 } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 client c1 { send "GET /upgrade2 HTTP/1.1\r\n" send "Host: foo.bar\r\n" send "Upgrade: h2c\r\n" send "HTTP2-Settings: AAMAAABkAAQAAP__\r\n" send "\r\n" rxresp expect resp.status == 101 expect resp.http.upgrade == h2c expect resp.http.connection == Upgrade txpri stream 0 { rxsettings 
txsettings txsettings -ack rxsettings expect settings.ack == true } -run barrier b1 sync stream 1 { rxresp expect resp.status == 402 expect resp.bodylen == 11 } -run } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 client c1 { # Illegal HTTP2-Settings send "GET /noupgrade HTTP/1.1\r\n" send "Host: foo.bar\r\n" send "Upgrade: h2c\r\n" send "HTTP2-Settings: #######\r\n" send "\r\n" expect_close } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 client c1 { # PRISM with error in last bit send "GET /noupgrade HTTP/1.1\r\n" send "Host: foo.bar\r\n" send "Upgrade: h2c\r\n" send "HTTP2-Settings: AAMAAABkAAQAAP__\r\n" send "\r\n" rxresp expect resp.status == 101 expect resp.http.upgrade == h2c expect resp.http.connection == Upgrade sendhex "505249202a20485454502f322e300d0a0d0a534d0d0a0d0b" expect_close } -run varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 client c1 { # Missing HTTP2-Settings send "GET /noupgrade HTTP/1.1\r\n" send "Host: foo.bar\r\n" send "Upgrade: h2c\r\n" send "\r\n" expect_close } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 varnish-7.5.0/bin/varnishtest/tests/t02002.vtc000066400000000000000000000021461457605730600210460ustar00rootroot00000000000000varnishtest "HPACK coverage" server s1 { rxreq expect req.http.host == www.example.com expect req.http.foohdr == FOOcont txresp -status 500 -bodylen 10 rxreq expect req.http.host == www.example.com expect req.http.foohdr == FOOcont2 txresp -status 404 -bodylen 20 } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set debug +syncvsl" client c1 { stream 7 { txreq \ -litIdxHdr inc 1 huf "www.example.com" \ -idxHdr 16 \ -litHdr inc huf "foohdr" huf "FOOcont" rxresp expect resp.status == 500 expect resp.bodylen == 10 } -run stream 9 { txreq \ -idxHdr 63 \ -litIdxHdr never 62 plain "FOOcont2" rxresp expect resp.status == 404 expect resp.bodylen == 20 } -run } -run varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 client c1 { stream 11 { txreq -hdr sna[]fu foo.bar -pad cotton rxrst } -run } -run client c1 { stream 13 { txreq -hdr snaFu foo.bar -pad cotton rxrst } -run } -run varnish-7.5.0/bin/varnishtest/tests/t02003.vtc000066400000000000000000000237721457605730600210570ustar00rootroot00000000000000varnishtest "H2 Frame coverage/error conditions" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set debug +syncvsl" ####################################################################### # Test Even stream numbers client c1 { stream 0 { rxgoaway expect goaway.laststream == 0 expect goaway.err == PROTOCOL_ERROR } -start stream 2 { sendhex "000003 80 00 00000002 010203" txprio } -run stream 0 -wait } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 
varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 ####################################################################### # Test reverse order stream numbers client c1 { stream 0 { rxgoaway expect goaway.laststream == 3 expect goaway.err == PROTOCOL_ERROR } -start stream 3 { txreq } -run stream 1 { txreq } -run stream 0 -wait } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 ####################################################################### # Test WINDOW_UPDATE error conditions client c1 { stream 1 { txreq -nostrend txwinup -size 0 rxrst expect rst.err == PROTOCOL_ERROR } -run stream 3 { txreq -nostrend txwinup -size 0x40000000 txwinup -size 0x40000000 rxrst expect rst.err == FLOW_CONTROL_ERROR } -run stream 0 { rxgoaway expect goaway.laststream == 5 expect goaway.err == FRAME_SIZE_ERROR } -start stream 5 { txreq -nostrend sendhex "000003 08 00 00000005" delay .1 sendhex 01 delay .1 sendhex 02 delay .1 sendhex 03 } -run stream 0 -wait } -run client c1 { stream 0 { txwinup -size 0x40000000 txwinup -size 0x40000000 rxgoaway expect goaway.laststream == 0 expect goaway.err == FLOW_CONTROL_ERROR } -run } -run client c1 { stream 1 { txreq rxresp } -run stream 1 { # WINDOW_UPDATE on closed stream txwinup -size 0x4000 } -run } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 ####################################################################### # Test PING error conditions client c1 { stream 0 { txping -ack -data "FOOBAR42" rxgoaway expect goaway.laststream == 0 expect goaway.err == PROTOCOL_ERROR } -run } -run client c1 { stream 0 { sendhex "000008 06 80 00000001 0102030405060708" rxgoaway expect goaway.laststream == 0 expect goaway.err == PROTOCOL_ERROR } -run } -run client c1 { stream 0 { sendhex "000007 06 80 00000000 01020304050607" rxgoaway expect goaway.laststream == 0 expect goaway.err == FRAME_SIZE_ERROR } -run } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 ####################################################################### # Test PUSH_PROMISE error conditions client c1 { stream 0 { rxgoaway expect goaway.err == PROTOCOL_ERROR expect goaway.laststream == 1 } -start stream 1 { txreq -nostrend sendhex "000008 05 00 00000001 0001020304050607" } -run stream 0 -wait } -run client c1 { stream 0 { rxgoaway expect goaway.err == PROTOCOL_ERROR expect goaway.laststream == 1 } -start stream 1 { txreq rxresp delay .1 # send a PUSH_PROMISE after a request sendhex "000008 05 00 00000001 0001020304050607" } -start } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 ####################################################################### # Test RST_STREAM error conditions client c1 { stream 0 { # RST idle stream sendhex "000004 03 00 00000007 00000008" rxgoaway expect goaway.err == PROTOCOL_ERROR expect goaway.laststream == 0 } -run } -run client c1 { stream 0 { rxgoaway expect goaway.err == FRAME_SIZE_ERROR expect goaway.laststream == 1 } -start stream 1 { txreq 
-nostrend # RST wrong length sendhex "000005 03 00 00000001 0000000800" } -run stream 0 -wait } -run client c1 { stream 0 { # RST stream zero sendhex "000000 03 00 00000000 00000008" rxgoaway expect goaway.err == PROTOCOL_ERROR expect goaway.laststream == 0 } -run } -run client c1 { stream 0 { rxgoaway expect goaway.err == NO_ERROR expect goaway.laststream == 3 } -start stream 1 { txreq -nostrend txrst -err 2 } -run stream 3 { txreq -nostrend txrst -err 0x666 } -run stream 0 -wait } -run client c1 { stream 0 { rxgoaway expect goaway.err == NO_ERROR expect goaway.laststream == 1 } -start stream 1 { txreq rxresp } -run stream 1 { # RST_STREAM on closed stream txrst } -run stream 0 -wait } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 ####################################################################### # Test SETTING error conditions client c1 { stream 0 { # SETTING ACK with data sendhex "000001 04 01 00000000 aa" rxgoaway expect goaway.err == FRAME_SIZE_ERROR expect goaway.laststream == 0 } -run } -run client c1 { stream 0 { # SETTING ACK with bad length sendhex "000001 04 00 00000000 aa" rxgoaway expect goaway.err == PROTOCOL_ERROR expect goaway.laststream == 0 } -run } -run client c1 { stream 0 { # SETTING ACK with bad value txsettings -winsize 0x80000000 rxgoaway expect goaway.err == FLOW_CONTROL_ERROR expect goaway.laststream == 0 } -run } -run client c1 { stream 0 { # SETTING unknown value sendhex "000006 04 00 00000000 ffff00000000" rxsettings txping rxping } -run } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 ####################################################################### # Test GOAWAY error conditions client c1 { stream 0 { txgoaway -err 2 } -run expect_close } -run client c1 { stream 0 { txgoaway -err 2222 } -run expect_close } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 ####################################################################### # Test HEADERS error conditions client c1 { stream 1 { txreq -nostrend txreq -nostrend } -run stream 0 { rxgoaway } -run expect_close } -run client c1 { stream 0 { sendhex 00000c sendhex 01 sendhex 05 sendhex 00000001 sendhex ffffffff sendhex ffffffff sendhex ffffffff rxgoaway expect goaway.err == COMPRESSION_ERROR } -run } -run client c1 { stream 0 { sendhex 000012 sendhex 01 sendhex 05 sendhex 00000001 sendhex {8286 8441 0f77 7777 2e65 7861 6d70 6c65 2e63} rxgoaway expect goaway.err == COMPRESSION_ERROR } -run } -run client c1 { stream 1 { txreq -hdr ":bla" "foo" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run #2349: Padding exceeds frame size client c1 { stream 1 { sendhex 000001 sendhex 01 sendhex 09 sendhex 00000001 sendhex { ff } } -run stream 0 { rxgoaway expect goaway.err == PROTOCOL_ERROR expect goaway.laststream == 1 } -run expect_close } -run #2349: Padding equal to frame size client c1 { stream 1 { sendhex 000001 sendhex 01 sendhex 09 sendhex 00000001 sendhex 01 } -run stream 0 { rxgoaway expect goaway.err == PROTOCOL_ERROR expect goaway.laststream == 1 } -run expect_close } -run #2349: Integer underrun may also occur when the priority flag is set client c1 { 
stream 1 { sendhex 000004 sendhex 01 sendhex 21 sendhex 00000001 sendhex { aabb ccdd } } -run stream 0 { rxgoaway expect goaway.err == PROTOCOL_ERROR expect goaway.laststream == 1 } -run expect_close } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 ####################################################################### # Test CONTINUATION error conditions client c1 { stream 1 { txreq -nostrend txcont -hdr "bar" "foo" } -run stream 0 { rxgoaway } -run expect_close } -run client c1 { stream 0 { sendhex 000014 sendhex 01 sendhex 01 sendhex 00000001 sendhex {8286 8441 0f77 7777 2e65 7861 6d70 6c65 2e63 6f6d} sendhex 00000c sendhex 09 sendhex 04 sendhex 00000001 sendhex ffffffff sendhex ffffffff sendhex ffffffff rxgoaway expect goaway.err == COMPRESSION_ERROR } -run } -run client c1 { stream 1 { txreq -nohdrend txcont -hdr "bar" "foo" rxresp expect resp.status == 200 } -run } -run # 2350: Don't accept a continuation frame after stream is closed client c1 { stream 1 { txreq rxresp txcont -hdr "foo" "bar" } -run stream 0 { rxgoaway expect goaway.err == PROTOCOL_ERROR } -run } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 ####################################################################### # Test DATA error conditions client c1 { stream 1 { txdata -data "FOOBAR" } -run stream 0 { rxgoaway } -run expect_close } -run client c1 { stream 1 { txreq rxresp txdata -data "FOOBAR" } -run stream 3 { txreq rxresp } -run } -run varnish v1 -vsl_catchup varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 varnish-7.5.0/bin/varnishtest/tests/t02004.vtc000066400000000000000000000007211457605730600210450ustar00rootroot00000000000000varnishtest "H2 panic" server s1 { rxreq txresp } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set feature +no_coredump" varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -vcl+backend { import vtc; sub vcl_recv { vtc.panic("H2 panic"); } } -start client c1 { stream 1 { txreq -hdr :authority foo.bar -pad cotton } -run expect_close } -run delay 2 varnish v1 -cliok "panic.clear" varnish v1 -expectexit 0x40 varnish-7.5.0/bin/varnishtest/tests/t02005.vtc000066400000000000000000000025631457605730600210540ustar00rootroot00000000000000varnishtest "H2 POST" barrier b2 sock 2 barrier b3 sock 2 server s1 { rxreq expect req.http.content-length == 7 expect req.http.transfer-encoding == txresp -noserver -hdr "Accept-ranges: bytes" -hdr "Content-Type: text/plain" -body response rxreq txresp -noserver -hdr "Accept-ranges: bytes" } -start varnish v1 -vcl+backend { import vtc; sub vcl_recv { if (req.url == "/a") { vtc.barrier_sync("${b2_sock}"); vtc.barrier_sync("${b3_sock}"); return (fail); } } sub vcl_deliver { # make ReqAcct deterministic unset resp.http.via; } } -cliok "param.set feature +http2" -start varnish v1 -cliok "param.set debug +syncvsl" logexpect l1 -v v1 -g raw { expect * 1001 ReqAcct "80 7 87 78 8 86" expect * 1000 ReqAcct "45 8 53 63 28 91" } -start client c1 { stream 0 { txping rxping } -run stream 1 { txreq -req POST -hdr content-type text/plain -hdr content-length 7 -body request # First, HTTP checks rxresp expect 
resp.http.content-Type == "text/plain" # Then, payload checks expect resp.body == response } -run } -run client c2 { stream 0 { barrier b2 sync delay 1 barrier b3 sync } -start stream 1 { txreq -url "/a" -req POST -nostrend txdata -datalen 100 rxresp expect resp.status == 503 } -run stream 3 { txreq -url "/b" rxresp expect resp.status == 200 } -run stream 0 -wait } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/t02006.vtc000066400000000000000000000013531457605730600210510ustar00rootroot00000000000000varnishtest "H2 POST w/ 100 Continue" server s1 { rxreq expect req.http.content-length == expect req.http.transfer-encoding == chunked expect req.proto == HTTP/1.1 txresp -hdr "Content-Type: text/plain" -body response } -start varnish v1 -vcl+backend {} -cliok "param.set feature +http2" -start varnish v1 -cliok "param.set debug +syncvsl" client c1 { stream 1 { txreq \ -req POST \ -hdr content-type text/plain \ -nostrend \ -nohdrend txcont \ -hdr expect 100-continue \ -nostrend rxhdrs expect resp.status == 100 txdata \ -data request rxresp expect resp.status == 200 expect resp.http.content-Type == "text/plain" # Then, payload checks expect resp.body == response } -run } -run varnish-7.5.0/bin/varnishtest/tests/t02007.vtc000066400000000000000000000027321457605730600210540ustar00rootroot00000000000000varnishtest "H2 Huge response headers" server s1 { rxreq expect req.proto == HTTP/1.1 txresp -hdr "Content-Type: text/plain" -hdrlen Foo 100 -bodylen 100 non_fatal close accept rxreq expect req.url == /3 expect req.proto == HTTP/1.1 txresp -hdr "Content-Type: text/plain" -hdrlen Foo 50 -bodylen 50 close accept rxreq expect req.url == /5 } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set debug +h2_nocheck" varnish v1 -vcl+backend {} -start client c1 { stream 0 { txsettings -framesize 64 rxsettings } -run stream 1 { txreq \ -req POST \ -url /1 \ -hdr content-type text/plain \ -nostrend \ -nohdrend txcont \ -hdr foo bar \ -nohdrend \ -nostrend txcont \ -hdr expect 100-continue \ -hdr content-length 7 \ -nostrend rxhdrs expect resp.status == 100 txdata \ -data request rxresp expect resp.status == 200 expect resp.http.content-Type == "text/plain" expect resp.bodylen == 100 } -run } -run varnish v1 -vsl_catchup client c1 { stream 3 { txreq \ -req POST \ -url /3 \ -hdr content-type text/plain \ -nostrend delay .2 txdata \ -nostrend \ -data request delay .2 txrst -err 0x333 } -run delay 2 stream 5 { txreq \ -req POST \ -url /5 \ -hdr content-type text/plain \ -nostrend delay .2 txdata \ -nostrend \ -data request delay .2 } -run } -run varnish v1 -vsl_catchup varnish-7.5.0/bin/varnishtest/tests/t02008.vtc000066400000000000000000000013501457605730600210500ustar00rootroot00000000000000varnishtest "Test GOAWAY/session cleanup" server s1 { rxreq txresp -hdr "Content-Type: text/plain" -body response } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok {param.set feature +http2} varnish v1 -cliok "param.set debug +syncvsl" client c1 { stream 1 { txprio } -run stream 3 { txreq # First, HTTP checks rxresp expect resp.http.content-Type == "text/plain" # Then, payload checks expect resp.body == response } -run stream 5 { txprio } -run stream 0 { txgoaway -err 2 } -run expect_close } -run delay .1 client c1 { stream 1 { txreq # First, HTTP checks rxresp expect resp.http.content-Type == "text/plain" # Then, payload checks expect resp.body == response } -run } -run 
varnish-7.5.0/bin/varnishtest/tests/t02009.vtc000066400000000000000000000011431457605730600210510ustar00rootroot00000000000000varnishtest "H2 on waiting list" barrier b1 cond 2 server s1 { rxreq barrier b1 sync delay 1 txresp -hdr "Content-Type: text/plain" -body response } -start varnish v1 -vcl+backend {} -start varnish v1 -cliok {param.set feature +http2} varnish v1 -cliok "param.set debug +syncvsl" client c1 { stream 1 { txreq rxresp expect resp.http.content-Type == "text/plain" expect resp.body == response } -start stream 5 { barrier b1 sync txreq rxresp expect resp.http.content-Type == "text/plain" expect resp.body == response } -run stream 1 -wait } -run varnish v1 -expect client_req == 2 varnish-7.5.0/bin/varnishtest/tests/t02010.vtc000066400000000000000000000035251457605730600210470ustar00rootroot00000000000000varnishtest "Test H2 Client IMS" server s1 { rxreq expect req.url == "/foo" txresp -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -hdr {ETag: "foo"} \ -body "11111\n" rxreq expect req.url == "/bar" txresp -hdr "Last-Modified: Thu, 26 Jun 2008 12:00:01 GMT" \ -hdr {ETag: "bar"} } -start varnish v1 -vcl+backend { } -start varnish v1 -cliok "param.set feature +http2" client c1 { stream 1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.http.etag == {"foo"} expect resp.http.content-length == "6" expect resp.bodylen == 6 } -run delay .1 stream 3 { txreq -url "/foo" \ -hdr "if-modified-since" "Thu, 26 Jun 2008 12:00:00 GMT" rxresp expect resp.status == 200 expect resp.http.content-length == "6" expect resp.http.etag == {"foo"} expect resp.bodylen == 6 } -run delay .1 stream 5 { txreq -url "/foo" \ -hdr "if-modified-since" "Thu, 26 Jun 2008 12:00:01 GMT" rxresp -no_obj expect resp.status == 304 expect resp.http.etag == {"foo"} expect resp.http.content-length == "" expect resp.bodylen == 0 } -run delay .1 stream 7 { txreq -url "/foo" \ -hdr "if-modified-since" "Thu, 26 Jun 2008 12:00:02 GMT" rxresp -no_obj expect resp.status == 304 expect resp.http.etag == {"foo"} expect resp.http.content-length == "" expect resp.bodylen == 0 } -run delay .1 stream 9 { txreq -url "/bar" rxresp expect resp.status == 200 expect resp.http.etag == {"bar"} expect resp.http.content-length == "0" expect resp.bodylen == 0 } -run delay .1 stream 11 { txreq -url "/bar" \ -hdr "if-modified-since" "Thu, 26 Jun 2008 12:00:01 GMT" rxresp -no_obj expect resp.status == 304 expect resp.http.etag == {"bar"} expect resp.http.content-length == expect resp.bodylen == 0 } -run } client c1 -run # client c1 -run varnish-7.5.0/bin/varnishtest/tests/t02011.vtc000066400000000000000000000034211457605730600210430ustar00rootroot00000000000000varnishtest "Can't schedule stream" barrier b1 sock 3 barrier b2 sock 3 server s1 { rxreq txresp } -start # We need to confirm that a refused stream doesn't harm other streams, so we # want a round trip to the backend to ensure that. This requires 3 threads to # be busy simultaneously before a stream can be refused: # # - one for c1's session # - one for stream 1 # - one for the backend request # # thread priorities ensure that there is exactly one thread per class # at this point, so when we try to get a second stream, we fail. 
varnish v1 -cliok "param.set thread_pools 1" varnish v1 -cliok "param.set thread_pool_min 6" varnish v1 -cliok "param.set thread_pool_max 6" varnish v1 -cliok "param.set thread_queue_limit 0" varnish v1 -cliok "param.set thread_stats_rate 1" varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -vcl+backend { import vtc; sub vcl_recv { if (req.http.should == "reset") { vtc.panic("Expected stream reset REFUSED_STREAM"); } } sub vcl_backend_fetch { vtc.barrier_sync("${b1_sock}"); vtc.barrier_sync("${b2_sock}"); } } -start client c1 { txpri stream 0 rxsettings -run stream 1 { txreq -hdr should sync barrier b1 sync barrier b2 sync rxresp expect resp.status == 200 } -start barrier b1 sync stream 3 { txreq -hdr should reset rxrst expect rst.err == REFUSED_STREAM } -run barrier b2 sync } -run # trigger an update of the stats varnish v1 -cliok "param.set thread_pool_max 7" varnish v1 -cliok "param.set thread_pool_min 7" delay 1 varnish v1 -cliok "param.set thread_pool_min 6" delay 1 varnish v1 -vsl_catchup varnish v1 -expect sess_dropped == 0 varnish v1 -expect req_dropped == 1 varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish-7.5.0/bin/varnishtest/tests/t02012.vtc000066400000000000000000000017301457605730600210450ustar00rootroot00000000000000varnishtest "Test max_concurrent_streams" barrier b1 sock 5 barrier b2 sock 3 barrier b3 cond 2 barrier b4 cond 2 server s1 { rxreq txresp } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set h2_max_concurrent_streams 2" varnish v1 -vcl+backend { import vtc; sub vcl_deliver { if (req.http.sync) { vtc.barrier_sync("${b1_sock}"); vtc.barrier_sync("${b2_sock}"); } } } -start client c1 { stream 1 { txreq -hdr "sync" "1" barrier b3 sync barrier b1 sync rxresp expect resp.status == 200 } -start stream 3 { barrier b3 sync txreq -hdr "sync" "1" barrier b4 sync barrier b1 sync rxresp expect resp.status == 200 } -start stream 5 { barrier b4 sync barrier b1 sync txreq rxrst expect rst.err == REFUSED_STREAM barrier b2 sync } -run stream 1 -wait stream 3 -wait stream 7 { txreq rxresp expect resp.status == 200 } -run } -run varnish-7.5.0/bin/varnishtest/tests/t02013.vtc000066400000000000000000000032111457605730600210420ustar00rootroot00000000000000varnishtest "Direct H2 start over Unix domain sockets" server s1 -listen "${tmpdir}/s1.sock" { rxreq expect req.http.host == foo.bar txresp \ -hdr "H234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789I: foo" \ -hdr "Foo: H234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789I" \ -bodylen 10 } -start varnish v1 -arg "-a ${tmpdir}/v1.sock" -vcl+backend { sub vcl_recv { return (pipe); } } -start varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set feature -http2" client c1 -connect "${tmpdir}/v1.sock" { txpri expect_close } -run varnish v1 -cliok "param.set feature +http2" client c1 -connect "${tmpdir}/v1.sock" { stream 1 { txprio -weight 10 -stream 0 } -run stream 3 { txprio -weight 10 -stream 0 } -run stream 5 { txprio -weight 10 -stream 2 } 
-run stream 7 { txreq -dep 3 -hdr :authority foo.bar -pad cotton rxresp expect resp.status == 200 delay 1 txrst -err 0x1111 } -start stream 0 { txping -data "_-__-_-_" rxping expect ping.ack == "true" expect ping.data == "_-__-_-_" } -run stream 7 -wait } -run varnish v1 -expect MEMPOOL.req0.live == 0 varnish v1 -expect MEMPOOL.req1.live == 0 varnish v1 -expect MEMPOOL.sess0.live == 0 varnish v1 -expect MEMPOOL.sess1.live == 0 varnish-7.5.0/bin/varnishtest/tests/t02014.vtc000066400000000000000000000043121457605730600210460ustar00rootroot00000000000000varnishtest "Exercise h/2 sender flow control code" barrier b1 sock 3 barrier b2 sock 3 barrier b3 sock 3 barrier b4 sock 3 barrier b2_err cond 2 barrier b3_err cond 2 server s1 { rxreq txresp -bodylen 66300 } -start server s2 { non_fatal rxreq txresp -bodylen 66300 } -start varnish v1 -vcl+backend { import vtc; sub vcl_backend_fetch { if (bereq.method == "POST") { set bereq.backend = s2; } } sub vcl_deliver { if (req.http.barrier) { vtc.barrier_sync(req.http.barrier); } } } -start varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.reset h2_initial_window_size" client c1 { stream 0 { barrier b1 sync delay .5 txwinup -size 256 delay .5 txwinup -size 256 delay .5 txwinup -size 256 } -start stream 1 { txreq -hdr barrier ${b1_sock} barrier b1 sync delay .5 txwinup -size 256 delay .5 txwinup -size 256 delay .5 txwinup -size 256 rxresp expect resp.status == 200 expect resp.bodylen == 66300 } -run stream 0 -wait } -run varnish v1 -vsl_catchup logexpect l2 -v v1 -g raw { expect * * ReqMethod GET expect * = VCL_call DELIVER } -start client c2 { stream 0 { barrier b2 sync } -start stream 1 { txreq -hdr barrier ${b2_sock} barrier b2_err sync txdata -data "fail" rxrst expect rst.err == STREAM_CLOSED barrier b2 sync } -run stream 0 -wait } -start logexpect l2 -wait barrier b2_err sync client c2 -wait logexpect l3 -v v1 -g raw { expect * * ReqMethod POST expect * = VCL_call DELIVER } -start client c3 { stream 0 { barrier b3 sync barrier b4 sync delay .5 txwinup -size 256 delay .5 txwinup -size 256 delay .5 txwinup -size 256 } -start stream 1 { txreq -req "POST" -hdr barrier ${b3_sock} -nostrend txdata -data "ok" barrier b3_err sync txdata -data "fail" rxrst expect rst.err == STREAM_CLOSED barrier b3 sync } -run stream 3 { txreq -hdr barrier ${b4_sock} barrier b4 sync delay .5 txwinup -size 256 delay .5 txwinup -size 256 delay .5 txwinup -size 256 rxresp expect resp.status == 200 expect resp.bodylen == 66300 } -run stream 0 -wait } -start logexpect l3 -wait barrier b3_err sync client c3 -wait varnish-7.5.0/bin/varnishtest/tests/t02015.vtc000066400000000000000000000013541457605730600210520ustar00rootroot00000000000000varnishtest "h2 ReqAcct" server s1 { rxreq txresp -noserver -bodylen 12345 } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -vcl+backend { sub vcl_deliver { # make ReqAcct deterministic unset resp.http.via; } } -start logexpect l1 -v v1 -g raw -q ReqAcct { expect ? 1001 ReqAcct "46 0 46 69 12345 12414" expect ? 
1003 ReqAcct "46 0 46 74 1000 1074" } -start client c1 { txpri stream 0 { rxsettings expect settings.ack == false txsettings -ack txsettings -winsize 1000 rxsettings expect settings.ack == true } -run stream 1 { txreq -hdr stream 1 rxhdrs rxdata txwinup -size 11345 rxdata } -run stream 3 { txreq -hdr stream 3 rxhdrs rxdata txrst } -run } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/t02016.vtc000066400000000000000000000037741457605730600210630ustar00rootroot00000000000000varnishtest "client h2 send timeouts" server s1 { rxreq txresp -bodylen 12345 } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -vcl+backend { sub vcl_recv { if (req.url ~ "synth") { return (synth(200)); } } } -start # coverage for send_timeout with c1 varnish v1 -cliok "param.set send_timeout 1" logexpect l1 -v v1 { expect * * Debug "Hit send_timeout" } -start client c1 { txpri stream 0 { rxsettings expect settings.ack == false txsettings -ack txsettings -winsize 1000 rxsettings expect settings.ack == true } -run stream 1 { txreq rxhdrs rxdata # keep the stream idle for 2 seconds delay 2 txwinup -size 256 # too late rxrst expect rst.err == CANCEL } -run } -run logexpect l1 -wait # coverage for h2_window_timeout with c2 varnish v1 -cliok "param.set h2_window_timeout 1" varnish v1 -cliok "param.reset send_timeout" logexpect l2 -v v1 { expect * * Debug "Hit h2_window_timeout" } -start client c2 { txpri stream 0 { rxsettings expect settings.ack == false txsettings -ack txsettings -winsize 1000 rxsettings expect settings.ack == true } -run stream 1 { txreq -nostrend -url "/synth" } -run stream 3 { txreq rxhdrs rxdata # keep the stream idle for 2 seconds delay 2 rxrst expect rst.err == CANCEL } -run stream 1 { txdata } -run } -run logexpect l2 -wait # coverage for h2_window_timeout change with c3 barrier b3 cond 2 logexpect l3 -v v1 { expect * * Debug "Hit h2_window_timeout" } -start client c3 { txpri stream 0 { rxsettings expect settings.ack == false txsettings -ack txsettings -winsize 1000 rxsettings expect settings.ack == true } -run stream 1 { txreq -nostrend -url "/synth" } -run stream 3 { txreq rxhdrs rxdata # keep the stream idle for 2 seconds barrier b3 sync delay 2 rxrst expect rst.err == CANCEL } -run stream 1 { txdata } -run } -start barrier b3 sync varnish v1 -cliok "param.reset h2_window_timeout" client c3 -wait logexpect l3 -wait varnish-7.5.0/bin/varnishtest/tests/t02017.vtc000066400000000000000000000014071457605730600210530ustar00rootroot00000000000000varnishtest "H/2 stream data head of line blocking" barrier b1 cond 2 barrier b2 cond 2 barrier b3 cond 2 barrier b4 cond 2 server s1 { rxreq barrier b4 sync txresp } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/2") { return (synth(700)); } } } -start varnish v1 -cliok "param.set feature +http2" client c1 { stream 1 { txreq -req GET -url /1 -hdr "content-length" "1" -nostrend barrier b1 sync barrier b2 sync txdata -data 1 # rxwinup barrier b3 sync rxresp expect resp.status == 200 } -start stream 3 { barrier b1 sync txreq -req GET -url /2 -hdr "content-length" "1" -nostrend barrier b2 sync barrier b3 sync txdata -data 2 # rxwinup rxresp expect resp.status == 700 barrier b4 sync } -start } -run varnish-7.5.0/bin/varnishtest/tests/t02018.vtc000066400000000000000000000013661457605730600210600ustar00rootroot00000000000000varnishtest "H/2 stream multiple buffer exhaustion" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.reset 
h2_initial_window_size" varnish v1 -cliok "param.reset h2_rx_window_low_water" client c1 { stream 1 { txreq -req GET -url /1 -hdr "content-length" "131072" -nostrend txdata -datalen 16384 -nostrend rxwinup txdata -datalen 16384 -nostrend rxwinup txdata -datalen 16384 -nostrend rxwinup txdata -datalen 16384 -nostrend rxwinup txdata -datalen 16384 -nostrend rxwinup txdata -datalen 16384 -nostrend rxwinup txdata -datalen 16384 -nostrend rxwinup txdata -datalen 16384 rxresp expect resp.status == 200 } -start } -run varnish-7.5.0/bin/varnishtest/tests/t02019.vtc000066400000000000000000000017121457605730600210540ustar00rootroot00000000000000varnishtest "H/2 stream early buffer exhaustion" barrier b1 sock 2 server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import vtc; sub vcl_recv { vtc.barrier_sync("${b1_sock}"); vtc.sleep(0.1s); } } -start varnish v1 -cliok "param.set fetch_chunksize 64k" varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.reset h2_initial_window_size" varnish v1 -cliok "param.reset h2_rx_window_low_water" client c1 { stream 1 { txreq -req POST -url /1 -hdr "content-length" "131072" -nostrend expect stream.peer_window == 65535 txdata -datalen 16384 -nostrend txdata -datalen 16384 -nostrend txdata -datalen 16384 -nostrend txdata -datalen 16383 -nostrend expect stream.peer_window == 0 barrier b1 sync loop 4 { rxwinup expect stream.peer_window == 65535 txdata -datalen 16384 -nostrend } rxwinup expect stream.peer_window == 65535 txdata -datalen 1 rxresp expect resp.status == 200 } -run } -run varnish-7.5.0/bin/varnishtest/tests/t02020.vtc000066400000000000000000000040621457605730600210450ustar00rootroot00000000000000varnishtest "H/2 received data frames with padding" barrier b1 sock 3 server s1 { rxreq expect req.url == "/1" expect req.body == abcde txresp rxreq txresp rxreq txresp expect req.body == a } -start varnish v1 -cliok "param.set fetch_chunksize 64k" varnish v1 -vcl+backend { import vtc; sub vcl_recv { if (req.url == "/3") { vtc.barrier_sync("${b1_sock}"); } } } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.reset h2_initial_window_size" varnish v1 -cliok "param.reset h2_rx_window_low_water" varnish v1 -cliok "param.set debug +syncvsl" client c1 { stream 1 { txreq -req POST -url /1 -hdr "content-length" "5" -nostrend txdata -data abcde -padlen 1 rxresp expect resp.status == 200 } -start } -run client c2 { # This makes use of the fact that Varnish will always send a # session window update immediately when receiving a data # frame. The four rxwinup before barrier sync on stream 0 matches # up with the four txdata frames sent early on the stream, and # makes sure that the session thread has exhausted its send window # completely before opening up and starting to consume rxbuf data # by unblocking the client thread stuck in vcl_recv. From that # point on window updates will also be sent on the stream. 
stream 0 { rxwinup rxwinup rxwinup rxwinup barrier b1 sync } -start stream 3 { txreq -req POST -url /3 -hdr "content-length" "131072" -nostrend txdata -datalen 16300 -padlen 83 -nostrend txdata -datalen 16300 -padlen 83 -nostrend txdata -datalen 16300 -padlen 83 -nostrend txdata -datalen 16300 -padlen 82 -nostrend barrier b1 sync rxwinup txdata -datalen 16300 -padlen 83 -nostrend rxwinup txdata -datalen 16300 -padlen 83 -nostrend rxwinup txdata -datalen 16300 -padlen 83 -nostrend rxwinup txdata -datalen 16300 -padlen 83 -nostrend rxwinup txdata -datalen 672 rxresp expect resp.status == 200 } -start } -run client c3 { stream 5 { txreq -req POST -url /5 -nostrend txdata -data a -padlen 255 rxresp expect resp.status == 200 } -start } -run varnish-7.5.0/bin/varnishtest/tests/t02021.vtc000066400000000000000000000013771457605730600210540ustar00rootroot00000000000000varnishtest "H/2 data frame padding exhaust window" server s1 { rxreq expect req.body == abcde txresp } -start varnish v1 -vcl+backend { } -start varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.reset h2_initial_window_size" varnish v1 -cliok "param.reset h2_rx_window_low_water" client c1 { stream 1 { txreq -req POST -url /1 -hdr "content-length" "5" -nostrend # Fill 65535 bytes of stream window using padding only # Note that each frame consumes 256 bytes of window (padlen + 1) loop 255 { txdata -padlen 255 -nostrend } txdata -padlen 254 -nostrend # Here the window have been exhausted, so we should receive # a window update rxwinup txdata -data abcde rxresp expect resp.status == 200 } -run } -run varnish-7.5.0/bin/varnishtest/tests/t02022.vtc000066400000000000000000000042011457605730600210420ustar00rootroot00000000000000varnishtest "Test non-transient rxbuf stevedore with LRU nuking" barrier b1 sock 2 -cyclic server s1 { rxreq txresp -body asdf rxreq expect req.http.X-Varnish == 1005 txresp -bodylen 1048000 rxreq txresp -body ASDF } -start varnish v1 -arg "-srxbuf=malloc,1m -smain=malloc,1m" -vcl+backend { import vtc; sub vcl_recv { if (req.url == "/1") { vtc.barrier_sync("${b1_sock}"); } } sub vcl_backend_response { if (bereq.url == "/2") { set beresp.storage = storage.rxbuf; } else { set beresp.storage = storage.main; } } } varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.reset h2_initial_window_size" varnish v1 -cliok "param.reset h2_rx_window_low_water" varnish v1 -cliok "param.set h2_rxbuf_storage rxbuf" varnish v1 -cliok "param.set vsl_mask +ExpKill" varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -start client c1 { stream 1 { txreq -req POST -url /1 -hdr "content-length" "2048" -nostrend txdata -datalen 2048 rxresp expect resp.status == 200 } -start } -start varnish v1 -expect SM?.rxbuf.g_bytes >= 2048 varnish v1 -expect SM?.Transient.g_bytes == 0 varnish v1 -expect MAIN.n_lru_nuked == 0 barrier b1 sync client c1 -wait varnish v1 -vsl_catchup varnish v1 -expect SM?.rxbuf.g_bytes == 0 varnish v1 -expect SM?.Transient.g_bytes == 0 varnish v1 -expect MAIN.n_lru_nuked == 0 client c2 { txreq -url /2 rxresp expect resp.status == 200 expect resp.bodylen == 1048000 } -run varnish v1 -vsl_catchup varnish v1 -expect SM?.rxbuf.g_bytes >= 1048000 varnish v1 -expect MAIN.n_lru_nuked == 0 logexpect l1 -v v1 -g raw -q "Expkill ~ LRU" { expect * * Expkill xid=1005 } -start client c3 { stream 1 { txreq -req POST -url /1 -hdr "content-length" "2048" -nostrend txdata -datalen 2048 rxresp expect resp.status == 200 } -start } -start logexpect l1 -wait varnish v1 -expect SM?.rxbuf.g_bytes >= 2048 
varnish v1 -expect SM?.rxbuf.g_bytes < 3000 varnish v1 -expect SM?.Transient.g_bytes == 0 varnish v1 -expect MAIN.n_lru_nuked == 1 barrier b1 sync client c3 -wait varnish v1 -vsl_catchup varnish v1 -expect SM?.rxbuf.g_bytes == 0 varnish v1 -expect SM?.Transient.g_bytes == 0 varnish-7.5.0/bin/varnishtest/tests/t02023.vtc000066400000000000000000000033341457605730600210510ustar00rootroot00000000000000varnishtest "Empty and invalid headers" server s1 { rxreq txresp } -start varnish v1 -arg "-p feature=+http2" -vcl+backend { } -start client c1 { txreq -url "" rxresp expect resp.status == 400 } -run client c1 { txreq -req "" rxresp expect resp.status == 400 } -run client c1 { txreq -proto "" rxresp expect resp.status == 400 } -run client c1 { stream 1 { txreq -url "" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -scheme "" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -req "" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -hdr "empty" "" rxresp expect resp.status == 200 } -run } -run varnish v1 -vsl_catchup client c1 { stream 1 { txreq -hdr "foo" " bar" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -hdr "foo" " " rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -hdr ":foo" "bar" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -hdr "foo" "b\x0car" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -hdr "f o" "bar" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -hdr "f: o" "bar" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -hdr "foo" "bar " rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -hdr "foo" " bar" rxrst expect rst.err == PROTOCOL_ERROR } -run } -run client c1 { stream 1 { txreq -hdr "foo" "bar " rxrst expect rst.err == PROTOCOL_ERROR } -run } -run varnish-7.5.0/bin/varnishtest/tests/t02024.vtc000066400000000000000000000010441457605730600210460ustar00rootroot00000000000000varnishtest "Garbage pseudo-headers" server s1 { rxreq txresp } -start varnish v1 -arg "-p feature=+http2" -vcl+backend { } -start client c1 { txreq -url " " rxresp expect resp.status == 400 } -run client c1 { txreq -req " " rxresp expect resp.status == 400 } -run client c1 { txreq -proto " " rxresp expect resp.status == 400 } -run client c1 { stream 1 { txreq -url " " rxrst } -run } -run client c1 { stream 1 { txreq -scheme " " rxrst } -run } -run client c1 { stream 1 { txreq -req " " rxrst } -run } -run varnish-7.5.0/bin/varnishtest/tests/t02025.vtc000066400000000000000000000023121457605730600210460ustar00rootroot00000000000000varnishtest "h2 reset interrupt" barrier b1 sock 2 barrier b2 sock 2 varnish v1 -cliok "param.set feature +http2" varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -vcl { import vtc; backend be none; sub vcl_recv { vtc.barrier_sync("${b1_sock}"); vtc.barrier_sync("${b2_sock}"); } sub vcl_miss { vtc.panic("unreachable"); } } -start logexpect l1 -v v1 -g raw -i Debug { expect * * Debug "^H2RXF RST_STREAM" } -start client c1 { stream 0 { rxgoaway expect goaway.err == NO_ERROR expect goaway.laststream == 1 } -start stream 1 { txreq barrier b1 sync txrst } -run stream 0 -wait } -start logexpect l1 -wait barrier b2 sync client c1 -wait varnish v1 -vsl_catchup varnish v1 -expect req_reset == 1 # NB: The varnishncsa command below shows a minimal pattern to 
collect # "rapid reset" suspects per session, with the IP address. Here rapid # is interpreted as less than one second elapsed. Session VXIDs showing up # numerous times become increasingly suspicious. The format can of # course be extended to add anything else useful for data mining. shell -expect "1000 ${localhost} 408" { varnishncsa -n ${v1_name} -d \ -q 'Timestamp:Reset[2] < 1.0' -F '%{VSL:Begin[2]}x %h %s' } varnish-7.5.0/bin/varnishtest/tests/t02026.vtc000066400000000000000000000012771457605730600210600ustar00rootroot00000000000000varnishtest "Duplicate pseudo-headers" server s1 { rxreq txresp } -start varnish v1 -arg "-p feature=+http2" -vcl+backend { } -start #client c1 { # txreq -url "/some/path" -url "/some/other/path" # rxresp # expect resp.status == 400 #} -run #client c1 { # txreq -req "GET" -req "POST" # rxresp # expect resp.status == 400 #} -run #client c1 { # txreq -proto "HTTP/1.1" -proto "HTTP/2.0" # rxresp # expect resp.status == 400 #} -run client c1 { stream 1 { txreq -url "/some/path" -url "/some/other/path" rxrst } -run } -run client c1 { stream 1 { txreq -scheme "http" -scheme "https" rxrst } -run } -run client c1 { stream 1 { txreq -req "GET" -req "POST" rxrst } -run } -run varnish-7.5.0/bin/varnishtest/tests/u00000.vtc000066400000000000000000000124561457605730600210470ustar00rootroot00000000000000varnishtest "Code coverage of mgt_main, (VCL compiler and RSTdump etc)" shell "varnishd -b None -C 2> ${tmpdir}/_.c" shell { varnishd -n ${tmpdir}/no_keep -C -b None 2> no_keep.c test -s no_keep.c && ! test -d no_keep || test -d no_keep/*/vgc.so.dSYM } shell { varnishd -n ${tmpdir}/keep -p debug=+vcl_keep -C -b None 2> keep.c test -s keep.c && test -d keep } shell -err -expect {VCL version declaration missing} { echo 'bad vcl' > ${tmpdir}/bad.vcl varnishd -f ${tmpdir}/bad.vcl -n ${tmpdir} } shell -err -expect {-x must be the first argument} "varnishd -d -x foo " shell -err -expect {-V must be the first argument} "varnishd -d -V foo " shell -err -expect {Too many arguments for -x} "varnishd -x foo bar" shell -err -expect {Invalid -x argument} "varnishd -x foo " # This one is tricky, the getopt message on stderr is not standardized. shell -err -expect {option} "varnishd -A " shell -err -expect {Usage: varnishd [options]} "varnishd -? 
" shell -err -expect {Too many arguments} "varnishd foo " shell "varnishd -x parameter > ${tmpdir}/_.param" shell "varnishd -x vsl > ${tmpdir}/_.vsl" shell "varnishd -x cli > ${tmpdir}/_.cli" shell "varnishd -x builtin > ${tmpdir}/_.builtin" shell "varnishd -x optstring > ${tmpdir}/_.optstring" shell "varnishd --optstring > ${tmpdir}/_.optstring2" shell -err -expect {-C needs either -b or -f } { varnishd -C } shell -err -expect {Cannot open -S file} { varnishd -S ${tmpdir}/nonexistent -n ${tmpdir}/v0 -f '' } shell -err -expect {Cannot create secret-file in} { mkdir ${tmpdir}/is_a_dir ${tmpdir}/is_a_dir/_.secret varnishd -n ${tmpdir}/is_a_dir -d -a :0 } shell -err -expect {Unknown jail method "xyz"} "varnishd -jxyz -f '' " shell -err -expect {Too many arguments for -V} "varnishd -V -V" shell -expect {Copyright (c) 2006} "varnishd -V" shell -err -expect {Only one of -d or -F can be specified} "varnishd -d -F " shell -err -expect {Only one of -b or -f can be specified} "varnishd -b a -f b " shell -err -expect {-d makes no sense with -C} "varnishd -C -b None -d " shell -err -expect {-F makes no sense with -C} "varnishd -C -b None -F " shell -err -expect {Neither -b nor -f given} { varnishd -n ${tmpdir}/v0 } # Test -I and -l arguments (former a00016) shell -err -expect {Only one -I allowed} { touch foo bar varnishd -f '' -I foo -I bar -n ${tmpdir}/v0 -a :0 } shell -err -expect {Error: -I file CLI command failed (104)} { echo "vcl.list" > foo echo "-foobar" >> foo echo "vcl.load" >> foo varnishd -f '' -I foo -n ${tmpdir}/v0 -a :0 -l 2m } shell -err -expect "Can't open non-existent" { varnishd -f '' -I non-existent -n ${tmpdir}/v0 -a :0 } # Code coverage of mgt_main, (VCL compiler and RSTdump etc) (former a00017) # Test -F mode with no VCL loaded shell {echo ping > ${tmpdir}/I_file} process p1 "exec varnishd -n ${tmpdir}/v1 -F -f '' -a :0 -l2m -I ${tmpdir}/I_file 2>&1" -start process p1 -expect-text 0 1 {PONG} process p1 -expect-text 0 1 {END of -I file processing} process p1 -screen_dump shell { ( echo 'vcl 4.0;' echo 'backend default {' echo ' .host="${bad_backend}";' echo '}' ) > ${tmpdir}/vcl } shell -expect {VCL compiled.} { varnishadm -n ${tmpdir}/v1 vcl.load vcl1 ${tmpdir}/vcl } shell -expect {VCL 'vcl1' now active} { varnishadm -n ${tmpdir}/v1 vcl.use vcl1 } shell -expect {active auto warm - vcl1} { varnishadm -n ${tmpdir}/v1 vcl.list } shell {varnishadm -n ${tmpdir}/v1 start} shell {varnishadm -n ${tmpdir}/v1 debug.listen_address} shell -exit 1 -expect "Command failed with error code 500" { varnishadm -n ${tmpdir}/v1 quit } shell -exit 1 -expect "Command failed with error code 102" { varnishadm -n ${tmpdir}/v1 debug.panic.master -j } shell -exit 1 -expect "Command failed with error code 101" { varnishadm -n ${tmpdir}/v1 123 } shell "varnishadm -n ${tmpdir}/v1 param.set cli_limit 128" shell -expect "[response was truncated]" "varnishadm -n ${tmpdir}/v1 help" process p1 -expect-exit 64 -stop -wait # Test multiple -f options shell { cat >${tmpdir}/ok1 <<-EOF vcl 4.0; backend ok1 { .host="${bad_backend}"; } EOF cat >${tmpdir}/ok2 <<-EOF vcl 4.0; backend ok2 { .host="${bad_backend}"; } EOF } varnish v2 -arg "-f ${tmpdir}/ok1" -arg "-f ${tmpdir}/ok2" -start varnish v2 -cliexpect {available auto warm 0 boot0} "vcl.list" varnish v2 -cliexpect {active auto warm 0 boot1} "vcl.list" varnish v2 -stop -wait # Test multiple -f options with a bad VCL shell -err -expect {Cannot read -f file} { exec varnishd -n ${tmpdir}/v0 -F -a :0 -l2m -f ${tmpdir}/ok1 \ -f ${tmpdir}/ok2 -f ${tmpdir}/bad } shell -err 
-expect {Cannot read -f file} { exec varnishd -n ${tmpdir}/v0 -F -a :0 -l2m -f ${tmpdir}/ok1 \ -f ${tmpdir}/bad -f ${tmpdir}/ok2 } # varnishd -spersistent is tested in p00000.vtc # Test that incomplete CLI commands in -I causes failure filewrite ${tmpdir}/_foobar foobar process p1 { exec varnishd -n ${tmpdir}/v0 -d -a :0 -I ${tmpdir}/_foobar 2>&1 } -expect-exit 2 -dump -start process p1 -expect-text 0 0 "-I file had incomplete CLI command at the end" process p1 -screen-dump process p1 -wait process p2 { /bin/echo 'foobar << blabla' > ${tmpdir}/_foobar exec varnishd -n ${tmpdir}/v0 -d -a :0 -I ${tmpdir}/_foobar 2>&1 } -expect-exit 2 -start process p2 -expect-text 0 0 "-I file had incomplete CLI command at the end" process p2 -screen-dump process p2 -wait varnish-7.5.0/bin/varnishtest/tests/u00001.vtc000066400000000000000000000025441457605730600210460ustar00rootroot00000000000000varnishtest "trivial run of varnishadm in script mode" feature cmd "command -v diff" feature cmd "python3 -c 'import json'" varnish v1 -vcl {backend be none;} -start shell { cat >expected.txt <<-EOF Child_in_state_running EOF varnishadm -n ${v1_name} status | tr ' ' _ >actual.txt diff -u expected.txt actual.txt } # same command via stdin shell { echo status | varnishadm -n ${v1_name} | tr ' ' _ >actual.txt diff -u expected.txt actual.txt } shell { cat >expected.txt <<-EOF 200_22______ Child_in_state_running EOF { varnishadm -n ${v1_name} -p status printf '\n' # easier to add here than remove in expected.txt } | tr ' ' _ >actual.txt diff -u expected.txt actual.txt } # same command via stdin shell { echo status | varnishadm -n ${v1_name} -p | tr ' ' _ >actual.txt printf '\n' >>actual.txt diff -u expected.txt actual.txt } shell { varnishadm -n ${v1_name} status -j | python3 -c 'import sys, json; print(json.load(sys.stdin))' } # same command via stdin shell { echo 'status -j' | varnishadm -n ${v1_name} | python3 -c 'import sys, json; print(json.load(sys.stdin))' } shell -err { varnishadm -n ${v1_name} -p status -j | python3 -c 'import sys, json; print(json.load(sys.stdin))' } # same command via stdin shell -err { echo 'status -j' | varnishadm -n ${v1_name} -p | python3 -c 'import sys, json; print(json.load(sys.stdin))' } varnish-7.5.0/bin/varnishtest/tests/u00003.vtc000066400000000000000000000137521457605730600210530ustar00rootroot00000000000000varnishtest "varnishncsa coverage" server s1 -repeat 2 { rxreq txresp -bodylen 100 } -start varnish v1 -vcl+backend { import std; sub vcl_recv { if (req.url == "/2") { return (pipe); } return (hash); } sub vcl_miss { std.log("quux:quuz"); } sub vcl_hit { return (synth(404)); } } -start shell { varnishncsa -n ${v1_name} -D -P ${tmpdir}/ncsa.pid \ -w ${tmpdir}/ncsa.log -R 100/s } process p1 -winsz 25 200 {varnishncsa -n ${v1_name}} -start delay 1 client c1 { txreq -nouseragent -url /1?foo=bar -hdr "authorization: basic dXNlcjpwYXNz" rxresp txreq -nouseragent -url /1?foo=bar -hdr "baz: qux" rxresp } -run delay 1 shell "mv ${tmpdir}/ncsa.log ${tmpdir}/ncsa.old.log" shell "kill -HUP `cat ${tmpdir}/ncsa.pid`" client c2 { txreq -nouseragent -url /2 rxresp } -run delay 1 shell "kill `cat ${tmpdir}/ncsa.pid`" # default formatter and rotation shell -match {${localhost} - user \[../.../20[1-9][0-9]:..:..:.. (?# )[+-]....\] "GET http://${localhost}/1\?foo=bar HTTP/1.1" 200 100 "-" "-" ${localhost} - - \[../.../20[1-9][0-9]:..:..:.. 
[+-]....\] (?# )"GET http://${localhost}/1\?foo=bar HTTP/1.1" 404 \d+ "-" "-"$} \ "cat ${tmpdir}/ncsa.old.log" shell "grep -q /2 ${tmpdir}/ncsa.log" # command line shell -match "Usage: .*varnishncsa " \ "varnishncsa -h" shell -expect "Copyright (c) 2006 Verdens Gang AS" \ "varnishncsa -V" shell -err -expect "Missing -w option" \ {varnishncsa -D} shell -err -expect "Daemon cannot write to stdout" \ "varnishlog -D -w -" shell -err -expect "Unknown format specifier at: %{foo}A" \ {varnishncsa -F "%{foo}A"} shell -err -expect "Unknown format specifier at: %A" \ {varnishncsa -F "%A"} shell -err -expect "Unmatched bracket at: %{foo" \ "varnishncsa -F %{foo" shell -err -expect "Unknown formatting extension: foo" \ {varnishncsa -F "%{foo}x"} shell -err -expect "Missing tag in VSL:" \ {varnishncsa -F "%{VSL:}x"} shell -err -expect "Unknown log tag: nonexistent" \ {varnishncsa -F "%{VSL:nonexistent}x"} shell -err -expect "Tag not unique: Req" \ {varnishncsa -F "%{VSL:Req}x"} shell -err -expect "Unknown log tag: Begin[" \ {varnishncsa -F "%{VSL:Begin[}x"} shell -err -expect "Syntax error: VSL:Begin]" \ {varnishncsa -F "%{VSL:Begin]}x"} shell -err -expect "Unknown log tag: Begin[a" \ {varnishncsa -F "%{VSL:Begin[a}x"} shell -err -expect "Syntax error: VSL:Begin[a]" \ {varnishncsa -F "%{VSL:Begin[a]}x"} shell -err -match {Syntax error. (?# )Field specifier must be between 1 and 255: Begin\[0\]} \ {varnishncsa -F "%{VSL:Begin[0]}x"} shell -err -match {Syntax error. (?# )Field specifier must be between 1 and 255: Begin\[999999999999\]} \ {varnishncsa -F "%{VSL:Begin[999999999999]}x"} shell -err -expect "Can't open format file (No such file or directory)" \ {varnishncsa -f /nonexistent/file} shell -err -expect "Empty format file" \ {varnishncsa -f /dev/null} # In Linux and SunOS, getline(3) fails when the stream refers to a # directory but happily works in FreeBSD. #shell -err -expect "Can't read format from file (Is a directory)" \ # {varnishncsa -f ${tmpdir}} shell -err -expect "Invalid grouping mode: session" \ {varnishncsa -g session} shell -err -expect "Can't open output file (No such file or directory)" \ {varnishncsa -w /nonexistent/file} shell -err -match "Usage: .*varnishncsa " \ {varnishncsa extra} # -b shell -match {^${localhost} 100 ${localhost} 0$} \ {varnishncsa -n ${v1_name} -b -d -F "%{Host}i %b"} # -c shell -match {^${localhost} 100 ${localhost} \d+ ${localhost} -$} \ {varnishncsa -n ${v1_name} -c -d -F "%{Host}i %b"} # -f and standard formatters shell {echo "%b %D %H %h %I %{baz}i %l %m %{Age}o %O %q %s %t %{%Y}t \ %T %U %u" > ${tmpdir}/format} shell -match {^100 \d+ HTTP/1.1 (?:\d+.\d+.\d+.\d+|[a-f0-9:]+) \d+ - - GET 0 \d+ (?# )\?foo=bar 200 \[../.../(20[1-9][0-9]):..:..:.. [+-]....\] \1 \d+ /1 user \d+ \d+ HTTP/1.1 (?:\d+.\d+.\d+.\d+|[a-f0-9:]+) \d+ qux - GET - \d+ (?# )\?foo=bar 404 \[../.../(20[1-9][0-9]):..:..:.. [+-]....\] \1 \d+ /1 - - \d+ HTTP/1.1 (?:\d+.\d+.\d+.\d+|[a-f0-9:]+) \d+ - - GET - \d+ (?# ) - \[../.../(20[1-9][0-9]):..:..:.. 
[+-]....\] \1 \d+ /2 -$} \ {varnishncsa -n ${v1_name} -d -f ${tmpdir}/format} # extended variables shell -match {^\d+.\d{6} miss miss c 1001 quuz \d+.\d{6} miss synth c 1003\s \d+.\d{6} miss pipe c 1005\s$} \ {varnishncsa -n ${v1_name} -d -F "%{Varnish:time_firstbyte}x \ %{Varnish:hitmiss}x %{Varnish:handling}x %{Varnish:side}x \ %{Varnish:vxid}x %{VCL_Log:quux}x"} # %{VSL:..}x shell -match {^req (\d+) rxreq \1 \d{10}\.\d{6} (\d+\.\d{6}) \d+\.\d{6} \2 - - req (\d+) rxreq \3 \d{10}\.\d{6} (\d+\.\d{6}) \d+\.\d{6} \4 - - req (\d+) rxreq \5 - - - -$} \ {varnishncsa -n ${v1_name} -d -F "%{VSL:Begin}x \ %{VSL:Begin[2]}x %{VSL:Timestamp:Resp}x \ %{VSL:Timestamp:Resp[2]}x %{VSL:Timestamp:foo}x %{VSL:ReqURL[2]}x"} # times shell -match {^\{\d{4}-\d{2}-\d{2}\}T\{\d{2}:\d{2}:\d{2}\} \{\{\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\}\} \d+ \d+ \d+ \d{10,} \d{13,} \d{16,} \d{3} \d{6} usecx msecy} \ {varnishncsa -n ${v1_name} -d -F "%{{%F}T{%T}}t %{{{%FT%T}}}t \ %{s}T %{ms}T %{us}T %{sec}t %{msec}t %{usec}t %{msec_frac}t %{usec_frac}t \ %{usecx}t %{msecy}t"} process p1 -stop -screen-dump process p1 -expect-text 1 0 {/1?foo=bar HTTP/1.1" 200 100 "-" "-"} process p1 -expect-text 1 0 { - user [} process p1 -expect-text 2 0 {/1?foo=bar HTTP/1.1" 404 248 "-" "-"} process p1 -expect-text 3 0 {/2 HTTP/1.1" - - "-" "-"} process p2 {varnishncsa -t 5 -n nonexistent} -start delay 1 process p2 -expect-exit 1 -kill INT -wait shell {grep -q "VSM: Attach interrupted" ${p2_err}} # IP address for bad requests client c3 { txreq -url "/bad path" rxresp expect resp.status == 400 } -run varnish v1 -vsl_catchup shell -expect ${localhost} { varnishncsa -n ${v1_name} -d -q 'RespStatus == 400' } shell { ( varnishncsa -n ${v1_name} -d -k 1 varnishncsa -n ${v1_name} -d -k 1 -F "%{User-Agent}i" ) >def_then_ua.txt varnishncsa -n ${v1_name} -d -k 1 >def_with_ua.txt \ -F "%{Varnish:default_format}x\n%{User-Agent}i" diff -u def_then_ua.txt def_with_ua.txt } # ESI coverage in e00003.vtc varnish-7.5.0/bin/varnishtest/tests/u00004.vtc000066400000000000000000000015651457605730600210530ustar00rootroot00000000000000varnishtest "varnishtop coverage" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { } -start client c1 { txreq rxresp } -run shell -expect "fetch" "varnishtop -n ${v1_name} -1" process p1 "varnishtop -n ${v1_name} -d" -start delay 1 process p1 -screen_dump -expect-text 0 0 fetch process p1 -write q -wait # without -f shell -match "1\\.00 RespHeader Date: [^\\n]+\\n" { varnishtop -n ${v1_name} -1 -i RespHeader } # with -f shell -match "1\\.00 RespHeader Date\\n" { varnishtop -n ${v1_name} -1 -f -i RespHeader } shell -match "Usage: .*varnishtop " \ "varnishtop -h" shell -err -match "Usage: .*varnishtop " \ "varnishtop -K" shell -expect "Copyright (c) 2006 Verdens Gang AS" \ "varnishtop -V" shell -err -match "Usage: .*varnishtop " \ "varnishtop extra" shell -err -match "is not a number" \ "varnishtop -p 60 -p 12ABC" varnish-7.5.0/bin/varnishtest/tests/u00005.vtc000066400000000000000000000033021457605730600210430ustar00rootroot00000000000000varnishtest "varnishstat coverage" feature cmd "command -v cmp" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start # On fast systems the next varnishstat will return "inf" counters # if we don't give varnishd a chance to get going. 
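# The three varnishstat invocations below exercise the -f include/exclude
# globs (a leading '^' turns a pattern into an exclusion) against their
# -I/-X equivalents; counter values are normalised with tr so the outputs
# can be compared with cmp. As a hypothetical illustration of the same
# mechanism, '-f ^MAIN.uptime -f MAIN.*' would select the MAIN counters
# except MAIN.uptime.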
delay 1 process p1 {varnishstat -1 -n ${v1_name} -f ^LCK.vbe.destroy \ -f LCK.vbe.* -f LCK.mempool.creat | tr '[1-9]' '0'} -run shell "grep -q vbe ${p1_out}" shell "grep -q LCK.mempool.creat ${p1_out}" shell -err "grep -q LCK.vbe.destroy ${p1_out}" process p2 {varnishstat -1 -n ${v1_name} -f ^*vbe.destroy -f *vbe* \ -f LCK.mempool.c* | tr '[1-9]' '0'} -run process p3 {varnishstat -1 -n ${v1_name} -X *vbe.destroy -I *vbe* \ -f LCK.mempool.c* | tr '[1-9]' '0'} -run shell "cmp -s ${p1_out} ${p2_out}" shell "cmp -s ${p1_out} ${p3_out}" shell -expect "MGT.uptime" \ "varnishstat -1 -n ${v1_name} -f ^MAIN*" shell -match "^MGT" \ "varnishstat -1 -n ${v1_name} -f ^foo" shell -match "Usage: .*varnishstat " \ "varnishstat -h" shell -expect "Varnishstat -f option fields:" \ "varnishstat -n ${v1_name} -l" shell -expect "Copyright (c) 2006 Verdens Gang AS" \ "varnishstat -V" shell -err -match "Usage: .*varnishstat " \ "varnishstat extra" shell -err -expect "-t: Invalid argument: -1" \ "varnishstat -t -1" shell -err -expect "-t: Invalid argument: Nan" \ "varnishstat -t Nan" shell -err -expect "-t: Invalid argument: foo" \ "varnishstat -t foo" shell -err -expect "Could not get hold of varnishd" \ "varnishstat -n /nonexistent -t 1" shell -expect "MAIN.uptime" \ "varnishstat -n ${v1_name} -1" shell -expect "MAIN.uptime" \ "varnishstat -n ${v1_name} -x" shell -match {"MAIN.uptime":} \ "varnishstat -n ${v1_name} -j" varnish-7.5.0/bin/varnishtest/tests/u00006.vtc000066400000000000000000000125021457605730600210460ustar00rootroot00000000000000varnishtest "varnishlog coverage" server s1 -repeat 2 { rxreq txresp } -start varnish v1 -vcl+backend {} -start # We use this to make sure we know there is a "0 CLI" in the binary log. process p1 { exec varnishlog -n ${v1_name} -g raw -u -A -w - } -start shell { exec varnishlog -n ${v1_name} -D -P ${tmpdir}/vlog.pid \ -w ${tmpdir}/vlog.bin -R 10/s \ -i RespStatus -i ReqURL -i BereqURL } shell -match "Usage: .*varnishlog " \ "varnishlog -h" shell -expect "Copyright (c) 2006 Verdens Gang AS" \ "varnishlog -V" shell -err -match "Usage: .*varnishlog " \ "varnishlog extra" shell -err -expect "Missing -w option" \ "varnishlog -D" shell -err -expect "Daemon cannot write to stdout" \ "varnishlog -D -w -" shell -err -expect "Ambiguous grouping type: r" \ "varnishlog -g r" shell -err -expect "Unknown grouping type: foo" \ "varnishlog -g foo" shell -err -expect "-k: Invalid number 'foo'" \ "varnishlog -k foo" shell -err -expect "-L: Range error" \ "varnishlog -L 0" shell -err -expect {-i: "foo" matches zero tags} \ "varnishlog -i foo" shell -err -expect {-i: "Resp" is ambiguous} \ "varnishlog -i Resp" shell -err -expect {-i: Syntax error in "Re**sp"} \ "varnishlog -i Re**sp" shell -err -expect {-I: "foo" matches zero tags} \ "varnishlog -I foo:bar" shell -err -expect {-I: "Resp" is ambiguous} \ "varnishlog -I Resp:bar" shell -err -expect {-I: Regex error at position 4 (missing closing parenthesis)} \ {varnishlog -I "(foo"} shell -err -expect "-t: Invalid argument" \ "varnishlog -t -1" shell -err -expect "-t: Invalid argument" \ "varnishlog -t foo" shell -err -expect "-t: Invalid argument" \ "varnishlog -t NaN" shell -err -expect {-x: Syntax error in "**"} \ {varnishlog -x "**"} shell -err -expect {-X: Syntax error in "**"} \ {varnishlog -X "**:bar"} shell -err -expect "Cannot open output file (No such file or directory)" \ {varnishlog -w /nonexistent/file} shell -err -expect "Only one of -n and -r options may be used" \ {varnishlog -n ${v1_name} -r ${v1_name}/_.vsm} shell -err -expect 
"Unterminated string" \ {varnishlog -q '"b\\la'} shell -err -expect "Syntax error" \ {varnishlog -q ' / '} shell -err -expect "Expected integer got '}" \ "varnishlog -q {}" shell -err -expect "Expected positive integer" \ "varnishlog -q {-1}" shell -err -expect "Syntax error in level limit" \ "varnishlog -q {1a}" shell -err -expect "Expected VSL tag name got '['" \ {varnishlog -q "[]"} shell -err -expect "Tag name matches zero tags" \ "varnishlog -q foo" shell -err -expect "Tag name is ambiguous" \ "varnishlog -q Resp" shell -err -expect "Expected string got '['" \ {varnishlog -q "ReqHeader:[]"} shell -err -expect "Expected integer got ']'" \ {varnishlog -q "ReqHeader:foo[]"} shell -err -expect "Expected positive integer" \ {varnishlog -q "ReqHeader:foo[a]"} shell -err -expect "Syntax error in tag name" \ {varnishlog -q '*x* eq "foo"'} shell -err -expect "Expected number got '>'" \ {varnishlog -q "RespStatus > >"} shell -err -expect "Floating point parse error" \ {varnishlog -q "Timestamp:Start[1] > 0.foo"} shell -err -expect "Integer parse error" \ {varnishlog -q "RespStatus > foo"} shell -err -expect "Expected string got '>'" \ {varnishlog -q "ReqMethod eq >"} shell -err -expect "Expected regular expression got '>'" \ {varnishlog -q "ReqMethod ~ >"} shell -err -expect "Regular expression error: " \ {varnishlog -q 'ReqMethod ~ "("'} shell -err -expect "Expected operator got 'ReqMethod'" \ {varnishlog -q "RespStatus ReqMethod 200"} shell -err -expect "-R: Syntax error" \ "varnishlog -R 1foo" shell -err -expect "-R: Range error" \ "varnishlog -R 0" shell -err -expect "-R: Range error" \ "varnishlog -R -1" shell -err -expect "-R: Range error" \ "varnishlog -R 3000000000" shell -err -expect "-R: Syntax error: Invalid duration" \ "varnishlog -R 10/0s" shell -err -expect "-R: Syntax error: Invalid duration" \ "varnishlog -R 10/-10s" shell -err -expect "-R: Syntax error: Invalid duration" \ "varnishlog -R 10/1q" shell -err -expect "-R: Syntax error: Invalid duration" \ "varnishlog -R 10/11f09s" shell -err -expect "-R: Syntax error" \ "varnishlog -R 1000000000/1000000000000000000000000000000s" # Wait until the binary also (must) contain a "0 CLI" process p1 -expect-text 0 0 "0 CLI" process p1 -screen_dump process p1 -stop shell -match "0 CLI[ ]+- Wr 200 [0-9]+ PONG" \ {varnishlog -n ${v1_name} -d -g raw -X "Wr 200 [0-9]+ [^P]"} client c1 { txreq -url /foo rxresp } -run delay 1 shell "mv ${tmpdir}/vlog.bin ${tmpdir}/vlog.bin~" shell "kill -HUP `cat ${tmpdir}/vlog.pid`" client c1 { txreq -url /bar rxresp } -run delay 1 shell "kill `cat ${tmpdir}/vlog.pid`" shell -match {^\*[ ]+<< Request\s+>>[ ]+1001[ ]+ -[ ]+1001 ReqURL[ ]+c /foo $} \ {varnishlog -r ${tmpdir}/vlog.bin~ -i ReqURL -v \ -q "RespStatus == 200"} shell -match "-[ ]+BereqURL[ ]+/bar" \ "varnishlog -r ${tmpdir}/vlog.bin -x ReqURL" shell -match {^\*[ ]+<< BeReq\s+>>[ ]+1005[ ]+ -[ ]+BereqURL[ ]+/bar $} \ "varnishlog -r ${tmpdir}/vlog.bin -b -C -I BAR" shell -match {^\*[ ]+<< BeReq\s+>>[ ]+1005[ ]+} \ "cat ${tmpdir}/vlog.bin | varnishlog -r - -b -C -I BAR" shell "rm -f ${tmpdir}/foo" shell -err -expect "Cannot open ${tmpdir}/foo: " \ "varnishlog -r ${tmpdir}/foo" shell "echo foobar > ${tmpdir}/foo" shell -err -expect "Not a VSL file: ${tmpdir}/foo" \ "varnishlog -r ${tmpdir}/foo" # ESI coverage in e00003.vtc varnish-7.5.0/bin/varnishtest/tests/u00007.vtc000066400000000000000000000031351457605730600210510ustar00rootroot00000000000000varnishtest "varnishhist coverage" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start shell 
-match "Usage: .*varnishhist " \ "varnishhist -h" shell -expect "Copyright (c) 2006 Verdens Gang AS" \ "varnishhist -V" shell -err -match "Usage: .*varnishhist " \ "varnishhist -K" shell -err -match "Usage: .*varnishhist " \ "varnishhist extra" shell -err -expect "-p: invalid '0'" \ "varnishhist -p 0" shell -err -expect "-B: being able to bend time does not mean we can stop it" \ "varnishhist -p 2.0 -B 0" shell -err -expect "-B: being able to bend time does not mean we can make it go backwards" \ "varnishhist -B -1" shell -err -expect "Invalid grouping mode: raw" \ "varnishhist -g raw" shell -err -expect "-P: No such profile 'foo'" \ "varnishhist -P foo" shell -err -expect "-P: 'Timestamp:' is not a valid profile name or definition" \ "varnishhist -P Timestamp::" shell -err -expect "-P: 'b:' is not a valid profile name or definition" \ "varnishhist -P b:" shell -err -expect "-P: 'foo:' is not a valid tag name" \ "varnishhist -P foo:" shell -err -expect "-P: 'b:Debug:' is an unsafe or binary record" \ "varnishhist -P b:Debug:" shell -err -expect "-P: 'b:BereqAcct:x' is not a valid profile name or definition" \ "varnishhist -P b:BereqAcct:x" shell -err -expect "-P: 'b:BereqAcct:x' is not a valid profile name or definition" \ "varnishhist -P b:BereqAcct:x:" shell -err -expect "-P: 'b:BereqAcct:' is not a valid profile name or definition" \ "varnishhist -P b:BereqAcct::5:1:a" shell -err -expect "-p: invalid '0'" \ "varnishhist -P b:BereqAcct::5 -p 0" varnish-7.5.0/bin/varnishtest/tests/u00008.vtc000066400000000000000000000041561457605730600210560ustar00rootroot00000000000000varnishtest "trivial run of varnishstat in curses mode" server s1 -repeat 4 { rxreq txresp } -start varnish v1 -vcl+backend { probe default { .url = "/"; } } -start process p1 {varnishstat -n ${v1_name}} -start process p2 {varnishstat -n ${v1_name}} -start process p3 {varnishstat -n ${v1_name}} -start process p1 -expect-text 0 0 "VBE.vcl1.s1.happy" process p1 -screen_dump client c1 { txreq rxresp } -run process p1 -expect-text 0 0 "MAIN.s_sess" process p1 -screen_dump process p1 -write {+} process p1 -screen_dump process p1 -expect-text 0 0 "Refresh interval set to 1.1 seconds." process p1 -write {-} process p1 -screen_dump process p1 -expect-text 0 0 "Refresh interval set to 1.0 seconds." process p1 -write {vG} process p1 -expect-text 0 0 "VBE.vcl1.s1.req" process p1 -expect-text 0 0 "DIAG" process p1 -screen_dump process p1 -write {dek} process p1 -expect-text 0 1 "Concurrent connections used:" process p1 -screen_dump process p1 -write {h} process p1 -expect-text 0 0 "Navigate the counter list one line up." process p1 -screen_dump process p1 -write {G} process p1 -expect-text 0 0 "Decrease refresh interval." process p1 -screen_dump # the counters screen is preserved process p1 -write {h} process p1 -expect-text 0 1 "Concurrent connections used:" process p1 -screen_dump # the help screen always appears from the top process p1 -write {h} process p1 -expect-text 0 0 "Navigate the counter list one line up." 
process p1 -screen_dump process p1 -write {h} process p1 -winsz 25 132 process p1 -expect-text 4 124 "AVG_1000" process p1 -expect-text 22 108 "UNSEEN DIAG" process p1 -key NPAGE process p1 -expect-text 0 0 "VBE.vcl1.s1.helddown" process p1 -screen_dump process p1 -key PPAGE process p1 -expect-text 0 0 "VBE.vcl1.s1.happy" process p1 -screen_dump process p1 -key END process p1 -expect-text 22 1 " " process p1 -expect-text 4 1 "^^^" process p1 -screen_dump process p1 -key HOME process p1 -expect-text 22 1 "vvv" process p1 -expect-text 4 1 " " process p1 -screen_dump process p1 -screen_dump -write {q} -wait process p2 -screen_dump -kill TERM -wait process p3 -screen_dump -kill HUP -wait varnish-7.5.0/bin/varnishtest/tests/u00009.vtc000066400000000000000000000020671457605730600210560ustar00rootroot00000000000000varnishtest "trivial run of varnishhist in curses mode" server s1 { rxreq txresp -bodylen 32 } -start varnish v1 -vcl+backend {} -start process p1 -dump {varnishhist -n ${v1_name}} -start process p2 -dump {varnishhist -n ${v1_name} -P b:BereqAcct::5:-1:1} -start process p3 -dump {varnishhist -n ${v1_name} -P BerespBodytime -B 2} -start process p1 -expect-text 24 0 {1e2} process p2 -expect-text 24 0 {1e-1} process p3 -expect-text 24 0 {1e2} delay 1 client c1 { txreq rxresp } -run varnish v1 -vsl_catchup process p1 -expect-text 22 0 {#} process p2 -expect-text 22 80 {#} client c1 { txreq rxresp } -run varnish v1 -vsl_catchup process p1 -expect-text 22 0 { |} process p1 -expect-text 3 1 {20_} process p1 -screen_dump process p1 -winsz 23 80 -need-bytes +10 process p1 -write {0>+-<} -need-bytes +10 process p3 -write {0++++++++++>+-&1} -expect-exit 2 -run process p1 -expect-text 0 0 "-t: Invalid argument:" process p1 -log {varnishadm -Q 2>&1} -expect-exit 1 -run process p1 -expect-text 0 0 "Usage: varnishadm" process p2 -log {varnishadm -h 2>&1} -expect-exit 0 -run process p2 -expect-text 0 0 "Usage: varnishadm" varnish-7.5.0/bin/varnishtest/tests/u00013.vtc000066400000000000000000000014351457605730600210470ustar00rootroot00000000000000varnishtest "varnishncsa outputs when UDS addresses are in use" # The %h formatter gets its value from ReqStart or BackendOpen, # which now may be a UDS address. # For UDS backends without a .host_header spec, the Host header is # set to "0.0.0.0" if the header is missing in the request. server s1 -listen "${tmpdir}/s1.sock" { rxreq txresp } -start varnish v1 -arg "-a ${tmpdir}/v1.sock" -vcl+backend {} -start client c1 -connect "${tmpdir}/v1.sock" { txreq -proto HTTP/1.0 rxresp } -run shell -expect "0.0.0.0" { varnishncsa -n ${v1_name} -d -c -F "%h" } shell -expect "http://localhost/" { varnishncsa -n ${v1_name} -d -c -F "%r" } shell -expect "0.0.0.0" { varnishncsa -n ${v1_name} -d -b -F "%h" } shell -expect "http://0.0.0.0/" { varnishncsa -n ${v1_name} -d -b -F "%r" } varnish-7.5.0/bin/varnishtest/tests/u00014.vtc000066400000000000000000000053611457605730600210520ustar00rootroot00000000000000varnishtest "VSL compound queries" # This test case compares individual query scenarios to their # compounded queries counterparts. server s1 { rxreq txresp -status 500 rxreq txresp -status 503 } -start varnish v1 -vcl+backend "" -start client c1 { txreq rxresp expect resp.status == 500 txreq rxresp expect resp.status == 503 } -run # Let's fist create a script to reduce in all the variants below. 
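# ncsa.sh simply wraps 'varnishncsa -d -F %s' and joins the reported status
# codes onto a single line, so each scenario below can be verified with one
# -match expression; e.g. './ncsa.sh -q "RespStatus == 500"' is expected to
# print just "500 ".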
shell { cat >ncsa.sh <<-EOF #!/bin/sh varnishncsa -d -n ${v1_name} -F '%s' "\$@" | tr '\n' ' ' # ' EOF chmod +x ncsa.sh } shell -match "^500 503 $" { # no query ./ncsa.sh } shell -err -expect "Query expression error" { varnishncsa -d -n ${v1_name} -q ' # empty multiline query ' } shell -err -expect "Query expression error" { varnishncsa -d -n ${v1_name} -q ' * ~ "Incomplete quoted string" ' } shell -err -expect "Query expression error" { echo '# empty query file' >empty.vslq varnishncsa -d -n ${v1_name} -Q empty.vslq } shell -err -expect "-Q missing.vslq: No such file or directory" { varnishncsa -d -n ${v1_name} -Q missing.vslq } shell -match "^500 $" { # single query 1 ./ncsa.sh -q 'RespStatus == 500' } shell -match "^503 $" { # single query 2 ./ncsa.sh -q 'RespStatus == 503' } shell -match "^500 503 $" { # query 1 OR query 2 ./ncsa.sh -q '(RespStatus == 500) or (RespStatus == 503)' } shell -match "^500 503 $" { ./ncsa.sh -q ' # query 1 RespStatus == 500 # query 2 RespStatus == 503 ' } shell -match "^500 503 $" { ./ncsa.sh -q ' RespStatus == 500 # query 1 RespStatus == 503 # query 2 ' } shell -match "^500 503 $" { cat >query.vslq <<-EOF # query 1 RespStatus == 500 # query 2 RespStatus == 503 EOF ./ncsa.sh -Q query.vslq } shell -match "^500 503 $" { # multiple -q options ./ncsa.sh -q 'RespStatus == 500' -q 'RespStatus == 503' } shell -match "^500 503 $" { # multiple -Q options cat >query1.vslq <<-EOF # query 1 RespStatus == 500 EOF cat >query2.vslq <<-EOF # query 2 RespStatus == 503 EOF ./ncsa.sh -Q query1.vslq -Q query2.vslq } shell -match "^500 503 $" { # mix -Q and -q options ./ncsa.sh -Q query1.vslq -q 'RespStatus == 503' } shell -match "^500 $" { set -e tee query1_cont.vslq <<-EOF | wc -l | grep -q 4 # ensure 4 lines # line continuation RespStatus \\ == \\ 500 EOF ./ncsa.sh -Q query1_cont.vslq } shell -err -expect "Query expression error" { set -e tee string_cont.vslq <<-EOF | wc -l | grep -q 3 # ensure 3 lines # quoted string continuation * ~ "very long \\ string" EOF varnishncsa -d -n ${v1_name} -Q string_cont.vslq } # Make v1 log a multiline VSL record and query it (hardly useful) logexpect l1 -v v1 -g raw -q {CLI ~ "\\nvcl1.s1"} { expect 0 0 CLI "vcl1.s1 +healthy" } -start varnish v1 -cliok backend.list logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/u00015.vtc000066400000000000000000000010131457605730600210410ustar00rootroot00000000000000varnishtest "varnishstat -j" feature cmd "python3 -c 'import json'" server s1 -repeat 5 { rxreq txresp } -start varnish v1 -vcl+backend {} -start client c1 -repeat 5 { txreq rxresp } -run delay 1 shell { varnishstat -n ${v1_name} -j } shell -match "^1$" { varnishstat -n ${v1_name} -j | python3 -c 'import sys, json; print(json.load(sys.stdin)["version"])' } shell -match "^5$" { varnishstat -n ${v1_name} -j | python3 -c 'import sys, json; print(json.load(sys.stdin)["counters"]["MAIN.client_req"]["value"])' } varnish-7.5.0/bin/varnishtest/tests/u00016.vtc000066400000000000000000000030031457605730600210430ustar00rootroot00000000000000varnishtest "varnishncsa -j json escaping" feature cmd "python3 -c 'import json'" server s1 -repeat 3 { rxreq txresp } -start varnish v1 -vcl+backend {} -start client c1 { txreq -hdr "x-request-id: YES" -hdr "test:ascii" rxresp txreq -hdr "x-request-id: ∃" -hdr "test:unicode" rxresp txreq -hdr "test:null" rxresp } -run delay 1 # ASCII string with -j is valid JSON shell -match "^YES$" { varnishncsa -n ${v1_name} -d -q 'ReqHeader:test eq "ascii"' -j \ -F '{"xrid": "%{X-REQUEST-ID}i"}' | python3 -c 'import sys, json; 
print(json.load(sys.stdin)["xrid"])' } # ∃ without -j is not valid JSON shell -err -expect "Invalid \\escape" { varnishncsa -n ${v1_name} -d -q 'ReqHeader:test eq "unicode"' \ -F '{"xrid": "%{X-REQUEST-ID}i"}' | python3 -c 'import sys, json; print(json.load(sys.stdin)["xrid"])' } # ∃ with -j is valid JSON shell -match "^∃$" { varnishncsa -n ${v1_name} -d -q 'ReqHeader:test eq "unicode"' -j \ -F '{"xrid": "%{X-REQUEST-ID}i"}' | python3 -c 'import sys, json; print(json.load(sys.stdin)["xrid"])' } # Empty strings are not replaced with "-" shell -match "" { varnishncsa -n ${v1_name} -d -q 'ReqHeader:test eq "null"' -j \ -F '{"xrid": "%{X-REQUEST-ID}i"}' | python3 -c 'import sys, json; print(json.load(sys.stdin)["xrid"])' } # Empty VCL_Log entries are not replaced with "-" shell -match "" { varnishncsa -n ${v1_name} -d -q 'ReqHeader:test eq "null"' -j \ -F '{"xrid": "%{VCL_Log:varnishncsa}x"}' | python3 -c 'import sys, json; print(json.load(sys.stdin)["xrid"])' } varnish-7.5.0/bin/varnishtest/tests/u00017.vtc000066400000000000000000000010561457605730600210520ustar00rootroot00000000000000varnishtest "SLT_Debug can be printed by varnishncsa -j" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import cookie; sub vcl_recv { cookie.parse(""); } } -start varnish v1 -cliok "param.set vsl_mask +Debug" client c1 { txreq rxresp } -run # Let's fist create a script to reduce in all the variants below. shell { varnishncsa -d -n ${v1_name} -F '%{VSL:Debug}x' -j | grep "^cookie: nothing to parse\\\\u0000" } shell -err -expect "Tag Debug can contain control characters" { varnishncsa -d -n ${v1_name} -F '%{VSL:Debug}x' } varnish-7.5.0/bin/varnishtest/tests/u00018.vtc000066400000000000000000000006621457605730600210550ustar00rootroot00000000000000varnishtest "Varnishtest coverage" # Cover usage processing shell -err -expect {usage:} { varnishtest -v -l } # Cover argument processing shell { echo 'varnishtest foo' > ${tmpdir}/_.vtc varnishtest -j 4 -n 10 -C -v -b 100k -L -l ${tmpdir}/_.vtc ls -l ${tmpdir} } shell -err -expect {No such file or directory} { rm -f ${tmpdir}/_.vtc env VTEST_DURATION=15 \ varnishtest -j 4 -n 10 -C -v -b 100k -L -l ${tmpdir}/_.vtc } varnish-7.5.0/bin/varnishtest/tests/u00019.vtc000066400000000000000000000005421457605730600210530ustar00rootroot00000000000000varnishtest "varnishlog write vsl to stdout" feature cmd "command -v gzip" feature cmd "command -v gunzip" varnish v1 -vcl { backend be none; } -start client c1 { txreq rxresp expect resp.status == 503 } -run shell {varnishlog -d -w - -n ${v1_name} | gzip >vsl.gz} shell {ls -l} shell -match "RespStatus +503" {gunzip < vsl.gz | varnishlog -r -} varnish-7.5.0/bin/varnishtest/tests/v00000.vtc000066400000000000000000000005761457605730600210510ustar00rootroot00000000000000varnishtest "VCL/VRT: req.ttl / beresp.ttl / beresp.grace" server s1 { rxreq txresp -hdr "Connection: close" -body "012345\n" } server s1 -start varnish v1 -vcl+backend { sub vcl_recv { set req.ttl += 1 s; } sub vcl_backend_response { set beresp.ttl += 1 m; set beresp.grace += 1 h; } } -start client c1 { txreq -url "/" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/v00001.vtc000066400000000000000000000040241457605730600210420ustar00rootroot00000000000000varnishtest "VCL/VRT: url/request/proto/reason/status" varnish v1 -errvcl "Variable is read only" { backend proforma None; sub vcl_deliver { set req.proto = "HTTP/1.2"; } } varnish v1 -errvcl "Variable is read only" { backend proforma None; sub vcl_deliver { set resp.proto = 
"HTTP/1.2"; } } varnish v1 -errvcl "Variable is read only" { backend proforma None; sub vcl_backend_response { set bereq.proto = "HTTP/1.2"; } } varnish v1 -errvcl "Variable is read only" { backend proforma None; sub vcl_backend_response { set beresp.proto = "HTTP/1.2"; } } varnish v1 -errvcl "Variable is read only" { backend proforma None; sub vcl_recv { set req.http.content-length = "42"; } } varnish v1 -errvcl "Variable cannot be unset" { backend proforma None; sub vcl_recv { unset req.http.content-length; } } server s1 { rxreq txresp -hdr "Connection: close" -body "012345\n" } server s1 -start varnish v1 -syntax 4.0 -vcl+backend { sub vcl_recv { set req.http.foobar = req.url + req.method + req.proto; set req.url = "/"; set req.proto = "HTTP/1.2"; set req.method = "GET"; } sub vcl_backend_fetch { set bereq.http.foobar = bereq.url + bereq.proto; set bereq.url = "/"; set bereq.proto = "HTTP/1.2"; set bereq.method = "GET"; } sub vcl_backend_response { set beresp.http.foobar = beresp.proto + beresp.reason + beresp.status; set beresp.proto = "HTTP/1.2"; set beresp.reason = "For circular files"; set beresp.status = 903; set beresp.http.y-served-by-hostname = server.hostname; set beresp.http.y-served-by-identity = server.identity; } sub vcl_deliver { set resp.proto = "HTTP/1.2"; set resp.reason = "Naah, lets fail it"; set resp.status = 904; # XXX should be moved to it's own test set resp.http.x-served-by-hostname = server.hostname; set resp.http.x-served-by-identity = server.identity; set resp.http.foobar = resp.proto + resp.status; } } -start client c1 { txreq -url "/" rxresp expect resp.status == 904 } client c1 -run server s1 -wait varnish v1 -stop varnish-7.5.0/bin/varnishtest/tests/v00002.vtc000066400000000000000000000075471457605730600210600ustar00rootroot00000000000000varnishtest "Test VCL STRINGS/STRANDS comparisons" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_deliver { set resp.http.test = req.http.test; set resp.http.eq = req.http.foo == req.http.bar; set resp.http.neq = req.http.foo != req.http.bar; set resp.http.lt = req.http.foo < req.http.bar; set resp.http.le = req.http.foo <= req.http.bar; set resp.http.gt = req.http.foo > req.http.bar; set resp.http.ge = req.http.foo >= req.http.bar; } } -start client c1 { txreq -hdr "foo: 1" -hdr "bar: 1" -hdr "test: 1" rxresp expect resp.http.eq == true expect resp.http.neq == false expect resp.http.lt == false expect resp.http.le == true expect resp.http.gt == false expect resp.http.ge == true txreq -hdr "foo: 1" -hdr "bar: 2" -hdr "test: 2" rxresp expect resp.http.eq == false expect resp.http.neq == true expect resp.http.lt == true expect resp.http.le == true expect resp.http.gt == false expect resp.http.ge == false txreq -hdr "foo: 2" -hdr "bar: 1" -hdr "test: 3" rxresp expect resp.http.eq == false expect resp.http.neq == true expect resp.http.lt == false expect resp.http.le == false expect resp.http.gt == true expect resp.http.ge == true txreq -hdr "foo: 1" -hdr "bar: 11" -hdr "test: 4" rxresp expect resp.http.eq == false expect resp.http.neq == true expect resp.http.lt == true expect resp.http.le == true expect resp.http.gt == false expect resp.http.ge == false txreq -hdr "foo: 11" -hdr "bar: 1" -hdr "test: 5" rxresp expect resp.http.eq == false expect resp.http.neq == true expect resp.http.lt == false expect resp.http.le == false expect resp.http.gt == true expect resp.http.ge == true txreq -hdr "foo:" -hdr "bar:" -hdr "test: 6" rxresp expect resp.http.eq == true expect resp.http.neq == false expect resp.http.lt 
== false expect resp.http.le == true expect resp.http.gt == false expect resp.http.ge == true txreq -hdr "foo:" -hdr "bar: 1" -hdr "test: 7" rxresp expect resp.http.eq == false expect resp.http.neq == true expect resp.http.lt == true expect resp.http.le == true expect resp.http.gt == false expect resp.http.ge == false } -run varnish v1 -vsl_catchup -vcl+backend { sub vcl_deliver { set resp.http.test = req.http.test; set resp.http.eq = req.http.foo + " " == req.http.bar + " "; set resp.http.neq = req.http.foo + " " != req.http.bar + " "; set resp.http.lt = req.http.foo + " " < req.http.bar + " "; set resp.http.le = req.http.foo + " " <= req.http.bar + " "; set resp.http.gt = req.http.foo + " " > req.http.bar + " "; set resp.http.ge = req.http.foo + " " >= req.http.bar + " "; } } client c1 -run varnish v1 -vsl_catchup -vcl+backend { sub vcl_deliver { set resp.http.test = req.http.test; set resp.http.eq = req.http.foo == req.http.bar + req.http.not; set resp.http.neq = req.http.foo != req.http.bar + req.http.not; set resp.http.lt = req.http.foo < req.http.bar + req.http.not; set resp.http.le = req.http.foo <= req.http.bar + req.http.not; set resp.http.gt = req.http.foo > req.http.bar + req.http.not; set resp.http.ge = req.http.foo >= req.http.bar + req.http.not; } } client c1 -run varnish v1 -vsl_catchup -vcl+backend { sub vcl_deliver { set resp.http.test = req.http.test; set resp.http.eq = req.http.not + req.http.foo == req.http.bar + req.http.not; set resp.http.neq = req.http.not + req.http.foo != req.http.bar + req.http.not; set resp.http.lt = req.http.not + req.http.foo < req.http.bar + req.http.not; set resp.http.le = req.http.not + req.http.foo <= req.http.bar + req.http.not; set resp.http.gt = req.http.not + req.http.foo > req.http.bar + req.http.not; set resp.http.ge = req.http.not + req.http.foo >= req.http.bar + req.http.not; } } client c1 -run varnish-7.5.0/bin/varnishtest/tests/v00003.vtc000066400000000000000000000061771457605730600210570ustar00rootroot00000000000000varnishtest "vcl.state coverage tests" server s1 -repeat 20 { rxreq txresp delay .2 } -start # The debug vmod logs temperature vcl events varnish v1 -arg "-p vcl_cooldown=1" \ -arg "-p thread_pool_min=5" \ -arg "-p thread_pool_max=5" \ -vcl { import debug; backend default { .host = "${s1_addr}"; .port = "${s1_port}"; .probe = { .interval = 1s; } } } -start # We only have one vcl yet varnish v1 -expect VBE.vcl1.default.happy >= 0 varnish v1 -expect !VBE.vcl2.default.happy varnish v1 -cliok "backend.list -p *.*" varnish v1 -vcl { backend default { .host = "${s1_addr}"; .port = "${s1_port}"; .probe = { .interval = 1s; } } } # Now we have two vcls (and run on the latest loaded) delay .4 varnish v1 -expect VBE.vcl1.default.happy >= 0 varnish v1 -expect VBE.vcl2.default.happy >= 0 # We are about to freeze vcl, then implicitly thaw it via use. 
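# (the debug vmod imported above logs the VCL_EVENT_COLD and VCL_EVENT_WARM
# temperature events for vcl1, which is what the logexpect below matches)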
# check that we see the events logexpect l1 -v v1 -g raw { expect * 0 Debug "vcl1: VCL_EVENT_COLD" expect * 0 Debug "vcl1: VCL_EVENT_WARM" } -start # Freeze vcl1 varnish v1 -cliok "vcl.state vcl1 cold" varnish v1 -cliexpect "available *cold *cold *[0-9]+ *vcl1\\s+active *auto *warm *[0-9]+ *vcl2" "vcl.list" delay .4 varnish v1 -expect !VBE.vcl1.default.happy # Manual temperature control needs to be explicit before use varnish v1 -clierr 300 "vcl.use vcl1" varnish v1 -cliok "vcl.state vcl1 warm" varnish v1 -cliok "vcl.use vcl1" varnish v1 -cliexpect "active *warm *warm *[0-9]+ *vcl1\\s+available *auto *warm *[0-9]+ *vcl2" "vcl.list" delay .4 varnish v1 -expect VBE.vcl1.default.happy >= 0 varnish v1 -expect VBE.vcl2.default.happy >= 0 logexpect l1 -wait # and the unused one should go cold delay 4 varnish v1 -cliexpect "active *warm *warm *[0-9]+ *vcl1\\s+available *auto *cold *[0-9]+ *vcl2" "vcl.list" varnish v1 -expect !VBE.vcl2.default.happy # use the other varnish v1 -cliok "vcl.use vcl2" varnish v1 -cliexpect "available *warm *warm *[0-9]+ *vcl1\\s+active *auto *warm *[0-9]+ *vcl2" "vcl.list" # the non-auto vcl will stay warm even after the cooldown period delay 4 varnish v1 -cliexpect "available *warm *warm *[0-9]+ *vcl1\\s+active *auto *warm *[0-9]+ *vcl2" "vcl.list" varnish v1 -expect VBE.vcl1.default.happy >= 0 varnish v1 -expect VBE.vcl2.default.happy >= 0 # You can't freeze the active VCL varnish v1 -clierr 300 "vcl.state vcl2 cold" # the non-auto vcl will apply the cooldown again once changed back to auto varnish v1 -cliok "vcl.state vcl1 auto" varnish v1 -cliexpect "available *auto *warm *[0-9]+ *vcl1\\s+active *auto *warm *[0-9]+ *vcl2" "vcl.list" delay .4 varnish v1 -expect VBE.vcl1.default.happy >= 0 delay 4 varnish v1 -cliexpect "available *auto *cold *[0-9]+ *vcl1\\s+active *auto *warm *[0-9]+ *vcl2" "vcl.list" varnish v1 -expect !VBE.vcl1.default.happy # A VMOD's warm-up can fail varnish v1 -cliok "param.set max_esi_depth 42" varnish v1 -clierr 300 "vcl.state vcl1 warm" varnish v1 -cliexpect "available *auto *cold *[0-9]+ *vcl1\\s+active *auto *warm *[0-9]+ *vcl2" "vcl.list" # A warm-up failure can also fail a child start varnish v1 -cliok stop varnish v1 -cliok "vcl.state vcl1 warm" varnish v1 -cliok "vcl.list" varnish v1 -clierr 300 start varnish-7.5.0/bin/varnishtest/tests/v00004.vtc000066400000000000000000000233331457605730600210510ustar00rootroot00000000000000varnishtest "test if our default parameters make sense ..." feature 64bit feature !sanitizer feature !coverage # ... 
for our definition of a standard use case: # - 2019 header madness # - 5 ESI levels down # - 10 VCL subs down # - PCRE2 regsub server s1 { rxreq expect req.http.esi0 == "foo" txresp \ -hdr "Content-Type: text/html;charset=utf-8" \ -hdr "Content-Language: en-US" \ -hdr "X-UA-Compatible: IE=Edge" \ -hdr "X-Content-Type-Options: nosniff" \ -hdr "Content-Security-Policy-Report-Only: script-src 'unsafe-inline' 'unsafe-eval' 'self' blob: data: https:; style-src 'self' 'unsafe-inline' blob: data: https:; default-src 'self' https:; img-src https: blob: data: android-webview-video-poster:; frame-src blob: data: https:; worker-src blob: data: https:; child-src blob: data: https:; object-src 'self'; font-src 'self' https: blob: data: safari-extension://*; media-src 'self' blob: data: https:; connect-src wss: blob: data: https:; report-uri /csp_ep" \ -hdr "Content-Security-Policy: upgrade-insecure-requests" \ -hdr "Server: MySecretServerSauce" \ -hdr "Cache-Control: public, max-age=90" \ -hdr "Connection: keep-alive" \ -hdr "Vary: Accept-Encoding, Origin" \ -gzipbody { Before include After include } rxreq expect req.url == "/a1" expect req.http.esi0 != "foo" txresp \ -hdr "Content-Type: text/html;charset=utf-8" \ -hdr "Content-Language: en-US" \ -hdr "X-UA-Compatible: IE=Edge" \ -hdr "X-Content-Type-Options: nosniff" \ -hdr "Content-Security-Policy-Report-Only: script-src 'unsafe-inline' 'unsafe-eval' 'self' blob: data: https:; style-src 'self' 'unsafe-inline' blob: data: https:; default-src 'self' https:; img-src https: blob: data: android-webview-video-poster:; frame-src blob: data: https:; worker-src blob: data: https:; child-src blob: data: https:; object-src 'self'; font-src 'self' https: blob: data: safari-extension://*; media-src 'self' blob: data: https:; connect-src wss: blob: data: https:; report-uri /csp_ep" \ -hdr "Content-Security-Policy: upgrade-insecure-requests" \ -hdr "Server: MySecretServerSauce" \ -hdr "Cache-Control: public, max-age=90" \ -hdr "Connection: keep-alive" \ -hdr "Vary: Accept-Encoding, Origin" \ -gzipbody { Before include After include } rxreq expect req.url == "/b2" expect req.http.esi0 != "foo" txresp \ -hdr "Content-Type: text/html;charset=utf-8" \ -hdr "Content-Language: en-US" \ -hdr "X-UA-Compatible: IE=Edge" \ -hdr "X-Content-Type-Options: nosniff" \ -hdr "Content-Security-Policy-Report-Only: script-src 'unsafe-inline' 'unsafe-eval' 'self' blob: data: https:; style-src 'self' 'unsafe-inline' blob: data: https:; default-src 'self' https:; img-src https: blob: data: android-webview-video-poster:; frame-src blob: data: https:; worker-src blob: data: https:; child-src blob: data: https:; object-src 'self'; font-src 'self' https: blob: data: safari-extension://*; media-src 'self' blob: data: https:; connect-src wss: blob: data: https:; report-uri /csp_ep" \ -hdr "Content-Security-Policy: upgrade-insecure-requests" \ -hdr "Server: MySecretServerSauce" \ -hdr "Cache-Control: public, max-age=90" \ -hdr "Connection: keep-alive" \ -hdr "Vary: Accept-Encoding, Origin" \ -gzipbody { Before include After include } rxreq expect req.url == "/c3" expect req.http.esi0 != "foo" txresp \ -hdr "Content-Type: text/html;charset=utf-8" \ -hdr "Content-Language: en-US" \ -hdr "X-UA-Compatible: IE=Edge" \ -hdr "X-Content-Type-Options: nosniff" \ -hdr "Content-Security-Policy-Report-Only: script-src 'unsafe-inline' 'unsafe-eval' 'self' blob: data: https:; style-src 'self' 'unsafe-inline' blob: data: https:; default-src 'self' https:; img-src https: blob: data: 
android-webview-video-poster:; frame-src blob: data: https:; worker-src blob: data: https:; child-src blob: data: https:; object-src 'self'; font-src 'self' https: blob: data: safari-extension://*; media-src 'self' blob: data: https:; connect-src wss: blob: data: https:; report-uri /csp_ep" \ -hdr "Content-Security-Policy: upgrade-insecure-requests" \ -hdr "Server: MySecretServerSauce" \ -hdr "Cache-Control: public, max-age=90" \ -hdr "Connection: keep-alive" \ -hdr "Vary: Accept-Encoding, Origin" \ -gzipbody { Before include After include } rxreq expect req.url == "/d4" expect req.http.esi0 != "foo" txresp \ -hdr "Content-Type: text/html;charset=utf-8" \ -hdr "Content-Language: en-US" \ -hdr "X-UA-Compatible: IE=Edge" \ -hdr "X-Content-Type-Options: nosniff" \ -hdr "Content-Security-Policy-Report-Only: script-src 'unsafe-inline' 'unsafe-eval' 'self' blob: data: https:; style-src 'self' 'unsafe-inline' blob: data: https:; default-src 'self' https:; img-src https: blob: data: android-webview-video-poster:; frame-src blob: data: https:; worker-src blob: data: https:; child-src blob: data: https:; object-src 'self'; font-src 'self' https: blob: data: safari-extension://*; media-src 'self' blob: data: https:; connect-src wss: blob: data: https:; report-uri /csp_ep" \ -hdr "Content-Security-Policy: upgrade-insecure-requests" \ -hdr "Server: MySecretServerSauce" \ -hdr "Cache-Control: public, max-age=90" \ -hdr "Connection: keep-alive" \ -hdr "Vary: Accept-Encoding, Origin" \ -gzipbody { Before include After include } rxreq expect req.url == "/e5" expect req.http.esi0 != "foo" txresp \ -hdr "Content-Type: text/html;charset=utf-8" \ -hdr "Content-Language: en-US" \ -hdr "X-UA-Compatible: IE=Edge" \ -hdr "X-Content-Type-Options: nosniff" \ -hdr "Content-Security-Policy-Report-Only: script-src 'unsafe-inline' 'unsafe-eval' 'self' blob: data: https:; style-src 'self' 'unsafe-inline' blob: data: https:; default-src 'self' https:; img-src https: blob: data: android-webview-video-poster:; frame-src blob: data: https:; worker-src blob: data: https:; child-src blob: data: https:; object-src 'self'; font-src 'self' https: blob: data: safari-extension://*; media-src 'self' blob: data: https:; connect-src wss: blob: data: https:; report-uri /csp_ep" \ -hdr "Content-Security-Policy: upgrade-insecure-requests" \ -hdr "Server: MySecretServerSauce" \ -hdr "Cache-Control: public, max-age=90" \ -hdr "Connection: keep-alive" \ -hdr "Vary: Accept-Encoding, Origin" \ -gzipbody { LAST } } -start varnish v1 -vcl+backend { import std; import debug; import vtc; sub recv0 { call recv1; std.log("STK recv0 " + debug.stk()); } sub recv1 { call recv2; std.log("STK recv1 " + debug.stk()); } sub recv2 { call recv3; std.log("STK recv2 " + debug.stk()); } sub recv3 { call recv4; std.log("STK recv3 " + debug.stk()); } sub recv4 { call recv5; std.log("STK recv4 " + debug.stk()); } sub recv5 { call recv6; std.log("STK recv5 " + debug.stk()); } sub recv6 { call recv7; std.log("STK recv6 " + debug.stk()); } sub recv7 { call recv8; std.log("STK recv7 " + debug.stk()); } sub recv8 { call recv9; std.log("STK recv8 " + debug.stk()); } sub recv9 { std.log("STK recv9 " + debug.stk() + " WS " + vtc.workspace_free(client)); set req.http.regex = regsub(req.http.cookie, "(.*)", "\1\1\1\1\1\1\1\1"); std.log("WS " + vtc.workspace_free(client)); # hey geoff, this is deliberate set req.http.regex = regsub(req.http.regex, "(.*)(.{5})(.{6})(.{7})(.{8})", "/\5\4\3\2\1"); std.log("WS " + vtc.workspace_free(client)); std.log("REGEX recv9 " + 
req.http.regex); } sub vcl_recv { if (req.esi_level > 0) { set req.url = req.url + req.esi_level; } else { set req.http.esi0 = "foo"; } std.log("STK recv " + debug.stk()); call recv0; } sub vcl_backend_response { set beresp.do_esi = true; } sub vcl_deliver { std.log("STK deliver " + debug.stk()); } } -start varnish v1 -cliok "param.set debug +syncvsl" client c1 { txreq \ -hdr "Host: foo" \ -hdr "User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0" \ -hdr "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" \ -hdr "Accept-Language: en-US,en;q=0.5" \ -hdr "Accept-Encoding: gzip, deflate, br" \ -hdr "Cookie: logged_in=yes; sess=vXgKJaOHR1I78w1WeTyH51EZSu1rhrC1aAINn+a4sVk/IMouksSSP0Mg4jzqhTMtdLilDo3t04fxRJP1ywB/loN9674CLOu2yzT996hUbzM8oza68yNzhSkkL4afQYOwLMJbtFvtY+lLHk3TJRHSS243HcYluLoo7qjmpiiUfx6JyIbRtl5xPPgVGkLgSA1Fu/yCXwfVCNhnLWHMSm1zd15CoroUCFDkuO0OponjseGPBzJ7NdFk2Fi5SJFZmhzHcBH/Ri/Uu5UeJwVAcJe9oPNuaWUR/Oy/D3nU81lOels8ypYJRmAAzO5r7RJ7KmIvjZhqxLG7cMViH/roegSgqxHsjXb/kSec2dmq1wQqSPYjxN/pIp8PefyM/IAho2h3WVKRDhYmAokhDIA8/UgMxaIyrWh1Ep6D16IU1uRMgx5Gjr6VJJ42GV23+OhfvlpdYoZxy7b9bwf7T3ABniF+VJOdMO5PTWfuG2Xt515FZ/byNpMYnMvWNGh4Ior8QyV2W0Nz4p0NJ5RWsnHYAoD3ySRC5E/cpu9RQsXdE1sVNDa7uMzgt0Bbnpk1ALeNN9JJ/l6zLATCKcvixty0Aonyi1nyG9LNL6+rtzsDOh7S5uDul67P2lXFUta1eY2Ma0e/JAHJcKgTqgFGCZJvsoFydnyu23AanhaPT4c3w3ZpGs0; evil_tracker=JcDDfXw14Efx4iLycPEDQaF8+Csci+cRHz0pwTm1JW9kvXyKlUcGVlpCw7qYZtORuNnVb3m6HOwJneFhAdDlw5FQbQh1YmX8ZBgKD51Fo8T0R/0a8W0suJ/mJrQ6H6MFjgZc8YE7vx8zt+nUPT0qfZ9TCSndA0EXLerIc6Cdu06wBPF0m2ydkMKIPn/R6pU+mVrn58RZrLdcbsrwm5mhSCM9RjDYqEMye9n7jhTbdyna+X+7S8XubJRXqWa9Zft2UuprU0wnUVUA6eFdqvaiAGoepQFjJjh13g0fp6+GJiNwfSJbjTi3GK2o9E9t8qfLr0Avzjj9rqPG2G5MBxZMjg" \ -hdr "DNT: 1" \ -hdr "Connection: keep-alive" \ -hdr "Upgrade-Insecure-Requests: 1" \ -hdr {If-None-Match: W/"9060a5e7924af13779c0437265ad2f1c"} rxresp expect resp.status == 200 } client c1 -run varnish v1 -expect esi_errors == 0 varnish-7.5.0/bin/varnishtest/tests/v00005.vtc000066400000000000000000000025601457605730600210510ustar00rootroot00000000000000varnishtest "VCL: test backend probe syntax" # Check status definition varnish v1 -vcl { backend b1 { .host = "${localhost}"; .probe = { .expected_response = 204; } } } # Check url definition varnish v1 -vcl { backend b1 { .host = "${localhost}"; .probe = { .url = "/"; } } } # Check request definition varnish v1 -vcl { backend b1 { .host = "${localhost}"; .probe = { .request = "GET / HTTP/1.1" "Host: foo.bar" ; } } } # Check expect_close definition varnish v1 -errvcl {Expected "true" or "false"} { backend b1 { .host = "${localhost}"; .probe = { .url = "/"; .expect_close = faux; } } } varnish v1 -errvcl {Expected "true" or "false"} { backend b1 { .host = "${localhost}"; .probe = { .url = "/"; .expect_close = 1; } } } # Check redefinition varnish v1 -errvcl {Probe request redefinition at:} { backend b1 { .host = "${localhost}"; .probe = { .url = "/"; .request = "GET / HTTP/1.1" "Host: foo.bar" ; } } } # Check redefinition the other way varnish v1 -errvcl {Probe request redefinition at:} { backend b1 { .host = "${localhost}"; .probe = { .request = "GET / HTTP/1.1" "Host: foo.bar" ; .url = "/"; } } } varnish v1 -errvcl {Expected CNUM got '"120s"'} { backend default { .host = "${localhost}"; .probe = { .timeout = "120s"; } } } varnish-7.5.0/bin/varnishtest/tests/v00006.vtc000066400000000000000000000041361457605730600210530ustar00rootroot00000000000000varnishtest "VCL: Test backend retirement" # # This case is quite sensitive to ordering of the worker 
threads because # it has so little actual traffic. In a real world setting, this should # not be an issue. # # First do one request to get a work-thread that holds a VCL reference server s1 { rxreq expect req.url == "/bar" txresp } -start # Only one pool, to avoid getting more than one work thread varnish v1 \ -arg "-p thread_pools=1" \ -arg "-p thread_pool_timeout=10" \ -vcl+backend { } -start # Give the varnishd a chance to start and create workers, delaying for # >thread_pool_timeout to allow for any over-bred workers to be kissed # to death. # NB: This is important for to avoid mis-ordering of the workers. delay 11 # But all in all, the delay does not fully prevent over-breeding. varnish v1 -expect MAIN.threads >= 10 client c1 { txreq -url "/bar" rxresp expect resp.status == 200 } -run server s1 -wait varnish v1 -expect n_backend == 1 varnish v1 -expect n_vcl_avail == 1 varnish v1 -expect n_vcl_discard == 0 # Set up a new VCL and backend server s2 { rxreq expect req.url == "/foo" txresp } -start varnish v1 -vcl { backend b2 { .host = "${s2_addr}"; .port = "${s2_port}"; } } varnish v1 -vsl_catchup varnish v1 -expect n_backend == 2 varnish v1 -expect n_vcl_avail == 2 varnish v1 -expect n_vcl_discard == 0 varnish v1 -cli "vcl.list" # Discard the first VCL varnish v1 -cli "vcl.discard vcl1" varnish v1 -vsl_catchup # Give expiry thread a chance to let go delay 2 varnish v1 -vsl_catchup # It may not go away as long as the workthread holds a VCL reference varnish v1 -expect n_backend >= 1 varnish v1 -expect n_vcl_avail >= 1 varnish v1 -expect n_vcl_discard >= 0 # Do another request through the new VCL to the new backend client c1 { txreq -url "/foo" rxresp expect resp.status == 200 } -run varnish v1 -vsl_catchup server s2 -wait # The workthread should have released its VCL reference now # but we need to tickle the CLI to notice varnish v1 -cli "vcl.list" varnish v1 -expect n_backend == 1 varnish v1 -expect n_vcl_avail == 1 varnish v1 -expect n_vcl_discard == 0 varnish-7.5.0/bin/varnishtest/tests/v00008.vtc000066400000000000000000000012521457605730600210510ustar00rootroot00000000000000varnishtest "Test host header specification" server s1 { rxreq expect req.url == "/foo" expect req.http.host == "snafu" txresp -body "foo1" rxreq expect req.url == "/bar" expect req.http.host == "${localhost}" txresp -body "foo1" } -start varnish v1 -vcl+backend { } -start client c1 { txreq -url "/foo" -hdr "Host: snafu" rxresp txreq -url "/bar" -proto HTTP/1.0 rxresp } -run server s2 { rxreq expect req.url == "/barf" expect req.http.host == "FOObar" txresp -body "foo1" } -start varnish v1 -vcl { backend b1 { .host = "${s2_addr}"; .port = "${s2_port}"; .host_header = "FOObar"; } } client c1 { txreq -url "/barf" -proto HTTP/1.0 rxresp } -run varnish-7.5.0/bin/varnishtest/tests/v00009.vtc000066400000000000000000000063651457605730600210640ustar00rootroot00000000000000varnishtest "Test std.ban()" # see also v00011.vtc server s1 { rxreq txresp -body "foo" rxreq txresp -body "foo" } -start varnish v1 -vcl+backend { import std; sub vcl_synth { set resp.http.b1 = std.ban(req.http.doesntexist); set resp.http.b1e = std.ban_error(); set resp.http.b2 = std.ban(""); set resp.http.b2e = std.ban_error(); set resp.http.b3 = std.ban("req.url"); set resp.http.b3e = std.ban_error(); set resp.http.b4 = std.ban("req.url //"); set resp.http.b4e = std.ban_error(); set resp.http.b5 = std.ban("req.url // bar"); set resp.http.b5e = std.ban_error(); set resp.http.b6 = std.ban("req.url == bar //"); set resp.http.b6e = std.ban_error(); set 
resp.http.b7 = std.ban("foo == bar //"); set resp.http.b7e = std.ban_error(); set resp.http.b8 = std.ban("obj.age == 4"); set resp.http.b8e = std.ban_error(); set resp.http.b9 = std.ban("obj.age // 4d"); set resp.http.b9e = std.ban_error(); set resp.http.b10 = std.ban("obj.http.foo > 4d"); set resp.http.b10e = std.ban_error(); set resp.http.b11 = std.ban("req.url ~ ^/$"); set resp.http.b11e = std.ban_error(); } sub vcl_recv { if (req.method == "BAN") { return (synth(209,"foo")); } } } -start client c1 { txreq rxresp expect resp.http.X-Varnish == "1001" } -run logexpect l1 -v v1 -d 1 -g vxid { expect * 1004 VCL_Error {ban[(][)]: Null argument} expect * 1004 VCL_Error {ban[(][)]: No ban conditions found[.]} expect * 1004 VCL_Error {ban[(][)]: Expected comparison operator[.]} expect * 1004 VCL_Error {ban[(][)]: Expected second operand[.]} expect * 1004 VCL_Error {ban[(][)]: expected conditional [(]==, !=, ~ or !~[)] got "//"} expect * 1004 VCL_Error {ban[(][)]: Expected && between conditions, found "//"} expect * 1004 VCL_Error {ban[(][)]: Unknown or unsupported field "foo"} expect * 1004 VCL_Error {ban[(][)]: expected duration .ms|s|m|h|d|w|y. got "4"} expect * 1004 VCL_Error {ban[(][)]: expected conditional [(]==, !=, >, >=, < or <=[)] got "//"} expect * 1004 VCL_Error {ban[(][)]: expected conditional [(]==, !=, ~ or !~[)] got ">"} } -start client c1 { txreq -req "BAN" rxresp expect resp.http.X-Varnish == "1004" expect resp.status == 209 expect resp.http.b1 == false expect resp.http.b1e == {Null argument} expect resp.http.b2 == false expect resp.http.b2e == {No ban conditions found.} expect resp.http.b3 == false expect resp.http.b3e == {Expected comparison operator.} expect resp.http.b4 == false expect resp.http.b4e == {Expected second operand.} expect resp.http.b5 == false expect resp.http.b5e == {expected conditional (==, !=, ~ or !~) got "//"} expect resp.http.b6 == false expect resp.http.b6e == {Expected && between conditions, found "//"} expect resp.http.b7 == false expect resp.http.b7e == {Unknown or unsupported field "foo"} expect resp.http.b8 == false expect resp.http.b8e == {expected duration [ms|s|m|h|d|w|y] got "4"} expect resp.http.b9 == false expect resp.http.b9e == {expected conditional (==, !=, >, >=, < or <=) got "//"} expect resp.http.b10 == false expect resp.http.b10e == {expected conditional (==, !=, ~ or !~) got ">"} expect resp.http.b11 == true expect resp.http.b11e == {} } -run logexpect l1 -wait client c1 { txreq rxresp expect resp.http.X-Varnish == "1006" } -run varnish-7.5.0/bin/varnishtest/tests/v00010.vtc000066400000000000000000000035631457605730600210510ustar00rootroot00000000000000varnishtest "VCL: check panic and restart" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq txresp -hdr "Foo: bar" -body "abcdef\n" rxreq txresp -hdr "Panic: fetch" -body "012345\n" close barrier b1 sync accept rxreq txresp -hdr "Foo: bar" -body "abcdef\n" rxreq txresp -hdr "Panic: deliver" -body "012345\n" close barrier b2 sync accept rxreq txresp -hdr "Foo: foo" -body "abcdef\n" } -start varnish v1 -arg "-sdefault,1m" varnish v1 -cliok "param.set feature +no_coredump" varnish v1 -vcl+backend { import vtc; import debug; sub vcl_init { new foo = debug.obj("foo"); new bar = debug.concat("bar"); } sub vcl_backend_response { if (beresp.http.panic == "fetch") { vtc.panic("Had Panic header: " + beresp.http.panic); } } sub vcl_deliver { if (resp.http.panic == "deliver") { vtc.panic("Had Panic header: " + resp.http.panic); } } } -start varnish v1 -cliok "stop" varnish v1 -cliok "start" 
varnish v1 -wait-running varnish v1 -expect MGT.child_panic == 0 client c1 { txreq -url "/" rxresp txreq -url "/foo" # Don't expect answer, the server crashed. } -run varnish v1 -wait-stopped varnish v1 -cliok "panic.show" varnish v1 -clijson "panic.show -j" varnish v1 -cliok "panic.clear" varnish v1 -expect MGT.child_panic == 1 varnish v1 -clierr 300 "panic.clear" varnish v1 -cliok "start" varnish v1 -wait-running barrier b1 sync client c1 { txreq -url "/" rxresp txreq -url "/foo" # Don't expect answer, the server crashed. } -run varnish v1 -wait-stopped varnish v1 -cliok "panic.show" varnish v1 -clijson "panic.show -j" varnish v1 -cliok "panic.clear -z" varnish v1 -expect MGT.child_panic == 0 varnish v1 -clierr 300 "panic.clear" varnish v1 -cliok "start" varnish v1 -wait-running barrier b2 sync client c1 { txreq -url "/" rxresp expect resp.http.foo == "foo" } -run varnish v1 -cliok "panic.clear -z" varnish v1 -expectexit 0x40 varnish-7.5.0/bin/varnishtest/tests/v00011.vtc000066400000000000000000000031061457605730600210430ustar00rootroot00000000000000varnishtest "Test vcl ban()" # see also v00009.vtc server s1 { rxreq txresp -body "foo" rxreq txresp -body "foo" } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.method == "BAN") { ban(req.http.doesntexist); ban(""); ban("req.url"); ban("req.url //"); ban("req.url // bar"); ban("req.url == bar //"); ban("foo == bar //"); ban("obj.age == 4"); ban("obj.age // 4d"); ban("obj.http.foo > 4d"); ban("req.url ~ ^/$"); return (synth(209,"foo")); } } } -start client c1 { txreq rxresp expect resp.http.X-Varnish == "1001" } -run logexpect l1 -v v1 -d 1 -g vxid { expect * 1004 VCL_Error {ban[(][)]: Null argument} expect * 1004 VCL_Error {ban[(][)]: No ban conditions found[.]} expect * 1004 VCL_Error {ban[(][)]: Expected comparison operator[.]} expect * 1004 VCL_Error {ban[(][)]: Expected second operand[.]} expect * 1004 VCL_Error {ban[(][)]: expected conditional [(]==, !=, ~ or !~[)] got "//"} expect * 1004 VCL_Error {ban[(][)]: Expected && between conditions, found "//"} expect * 1004 VCL_Error {ban[(][)]: Unknown or unsupported field "foo"} expect * 1004 VCL_Error {ban[(][)]: expected duration .ms|s|m|h|d|w|y. 
got "4"} expect * 1004 VCL_Error {ban[(][)]: expected conditional [(]==, !=, >, >=, < or <=[)] got "//"} expect * 1004 VCL_Error {ban[(][)]: expected conditional [(]==, !=, ~ or !~[)] got ">"} } -start client c1 { txreq -req "BAN" rxresp expect resp.http.X-Varnish == "1004" expect resp.status == 209 } -run logexpect l1 -wait client c1 { txreq rxresp expect resp.http.X-Varnish == "1006" } -run varnish-7.5.0/bin/varnishtest/tests/v00012.vtc000066400000000000000000000010161457605730600210420ustar00rootroot00000000000000varnishtest "Check backend connection limit" barrier b1 cond 2 barrier b2 cond 2 server s1 { rxreq barrier b1 sync barrier b2 sync txresp } -start varnish v1 -vcl { backend default { .host = "${s1_addr}"; .port = "${s1_port}"; .max_connections = 1; } sub vcl_recv { return(pass); } } -start client c1 { txreq rxresp expect resp.status == 200 } -start client c2 { barrier b1 sync txreq rxresp expect resp.status == 503 } -run barrier b2 sync client c1 -wait varnish v1 -expect backend_busy == 1 varnish-7.5.0/bin/varnishtest/tests/v00013.vtc000066400000000000000000000021561457605730600210510ustar00rootroot00000000000000varnishtest "Check obj.hits" server s1 { rxreq expect req.url == "/" txresp -body "slash" rxreq expect req.url == "/foo" txresp -body "foo" rxreq expect req.url == "/pass" txresp -body "pass" } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/pass") { return (pass); } } sub vcl_hit { set req.http.hit-hits = obj.hits; } sub vcl_deliver { set resp.http.deliver-hits = obj.hits; if (req.http.hit-hits) { set resp.http.hit-hits = req.http.hit-hits; } } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.http.deliver-hits == 0 expect resp.http.hit-hits == txreq rxresp expect resp.status == 200 expect resp.http.deliver-hits == 1 expect resp.http.hit-hits == 1 txreq -url /foo rxresp expect resp.status == 200 expect resp.http.deliver-hits == 0 expect resp.http.hit-hits == delay .1 txreq rxresp expect resp.status == 200 expect resp.http.deliver-hits == 2 expect resp.http.hit-hits == 2 txreq -url /pass rxresp expect resp.status == 200 expect resp.http.deliver-hits == 0 expect resp.http.hit-hits == } -run varnish-7.5.0/bin/varnishtest/tests/v00014.vtc000066400000000000000000000020341457605730600210450ustar00rootroot00000000000000varnishtest "Check req.backend.healthy" barrier b1 cond 2 barrier b2 cond 2 barrier b3 cond 2 barrier b4 cond 2 server s1 { rxreq barrier b1 sync expect req.url == "/" txresp -body "slash" accept rxreq barrier b2 sync barrier b3 sync expect req.url == "/" txresp -body "slash" accept barrier b4 sync } -start varnish v1 -vcl { import std; probe foo { .url = "/"; .timeout = 2s; .interval = 2s; .window = 3; .threshold = 2; .initial = 0; } backend default { .host = "${s1_addr}"; .port = "${s1_port}"; .max_connections = 1; .probe = foo; } sub vcl_recv { if (std.healthy(default)) { return(synth(200,"Backend healthy " + req.url)); } else { return(synth(500,"Backend sick " + req.url)); } } } -start varnish v1 -cliok "backend.list -p" varnish v1 -clijson "backend.list -j -p" client c1 { txreq rxresp expect resp.status == 500 barrier b1 sync barrier b2 sync txreq rxresp expect resp.status == 500 barrier b3 sync barrier b4 sync txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/v00015.vtc000066400000000000000000000005471457605730600210550ustar00rootroot00000000000000varnishtest "Check subroutine calls with no action return" server s1 { rxreq expect req.url == "/" expect req.http.foobar == "snafu" txresp 
-body "slash" } -start varnish v1 -vcl+backend { sub vcl_recv { call some_subr; } sub some_subr { set req.http.foobar = "snafu"; } } -start client c1 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/v00016.vtc000066400000000000000000000064411457605730600210550ustar00rootroot00000000000000varnishtest "Various VCL compiler coverage tests" shell "true > ${tmpdir}/_varnishtest_empty_file" varnish v1 -vcl { backend b { .host = "${localhost}"; } include "${tmpdir}/_varnishtest_empty_file" ; } varnish v1 -errvcl {include not followed by semicolon.} { backend b none; include "${tmpdir}/_varnishtest_empty_file" | } shell "rm -f ${tmpdir}/_varnishtest_empty_file" varnish v1 -errvcl {include not followed by string constant.} { backend b none; include << } varnish v1 -errvcl {include not followed by string constant.} { /* token test */ error lookup hash pipe pass fetch deliver discard keep restart include if else elseif elsif ++ -- && || <= == != >= >> << += -= *= /= { } ( ) * + - / % > < = ; ! & . | ~ , } varnish v1 -errvcl {Unknown duration unit 'k'} { backend b none; sub vcl_backend_response { set beresp.ttl = 1. k; } } varnish v1 -errvcl {Operator > not possible on BACKEND} { backend a none; backend b none; sub vcl_recv { if (a > b) { } } } varnish v1 -errvcl {Unknown property 'foo' for type HTTP} { backend b none; sub vcl_hash { if (req.foo != "bar") { } } } varnish v1 -errvcl {Symbol not found: 'foo.bar'} { sub vcl_init { new bar = foo.bar(); } } varnish v1 -errvcl {Cannot be set in subroutine 'vcl_pipe'} { backend b none; sub vcl_pipe { set bereq.first_byte_timeout = 10s; } } varnish v1 -errvcl {Cannot be set in subroutine 'vcl_pipe'.} { backend b none; sub vcl_pipe { set bereq.between_bytes_timeout = 10s; } } varnish v1 -errvcl {Undefined backend c, first reference:} { backend b none; sub vcl_backend_response { if (beresp.backend == c) { set beresp.ttl = 1h; } } } varnish v1 -errvcl {Regexp compilation error:} { backend b none; sub vcl_recv { if (req.url ~ "[a") {} } } varnish v1 -errvcl {Regexp compilation error:} { backend b none; sub vcl_recv { if (req.url ~ "[" + "a") {} } } varnish v1 -errvcl {Expected CSTR got 'req'} { backend b none; sub vcl_recv { if (req.url ~ "a" + req.http.foo) {} } } varnish v1 -errvcl {Expected ')' got '-'} { backend b none; sub vcl_recv { if (req.url ~ "a" - "b") {} } } varnish v1 -errvcl {Expression has type BODY, expected BOOL} { sub vcl_synth { if (resp.body) { } } } varnish v1 -errvcl {Expression has type directors.shard, expected ACL} { import directors; backend b none; sub vcl_init { new foo = directors.shard(); } sub vcl_recv { if (client.ip ~ foo) { return (synth(200)); } } } varnish v1 -syntax 4.0 -errvcl {Expression has type directors.shard, expected ACL} { import directors; backend b none; sub vcl_init { new foo = directors.shard(); } sub vcl_recv { if (client.ip ~ foo) { return (synth(200)); } } } varnish v1 -errvcl {Undefined sub foo} { backend dummy None; sub vcl_recv { call foo; } } # NB: The line break in -errvcl is here on purpose, it prevents # a spurious "Only available when" addition to be missed when the # foo constructor could be confused with the foo instance name. 
varnish v1 -syntax 4.0 -errvcl {Symbol not found: 'directors.foo' At:} { import directors; backend b none; sub vcl_init { new foo = directors.foo(); } } # 'foo' overloaded varnish v1 -syntax 4.0 -errvcl {Symbol not found: 'foo'} { backend b none; acl foo -pedantic { "${localhost}"/32; } sub vcl_init { new bar = foo; } } varnish-7.5.0/bin/varnishtest/tests/v00017.vtc000066400000000000000000000052561457605730600210610ustar00rootroot00000000000000varnishtest "VCL compiler coverage test: vcc_acl.c" varnish v1 -errvcl {Too wide mask (/33) for IPv4 address} { backend b { .host = "${localhost}"; } acl a { "10.1.2.3"/33; } sub vcl_recv { if (client.ip ~ a) { return(pass); } } } varnish v1 -errvcl {Too wide mask (/129) for IPv6 address} { backend b { .host = "${localhost}"; } acl a { "1::2"/129; } sub vcl_recv { if (client.ip ~ a) { return(pass); } } } varnish v1 -vcl { backend b { .host = "${localhost}"; } acl a { "1.2.3.4"/31; "1.2.3.4"/31; } sub vcl_recv { if (client.ip ~ a) { return(pass); } } } varnish v1 -errvcl {Conflicting ACL entries:} { backend b { .host = "${localhost}"; } acl a { "1.2.3.4"; !"1.2.3.4"; } sub vcl_recv { if (client.ip ~ a) { return(pass); } } } varnish v1 -errvcl {DNS lookup(...com): } { backend b { .host = "${localhost}"; } acl a { "...com"; } sub vcl_recv { if (client.ip ~ a) { return(pass); } } } varnish v1 -errvcl {DNS lookup(10.1..2): } { backend b { .host = "${localhost}"; } acl a { "10.1..2"; } sub vcl_recv { if (client.ip ~ a) { return(pass); } } } varnish v1 -errvcl {Expected ')' got ';'} { backend b { .host = "${localhost}"; } acl a { ( "10.1.2"; } sub vcl_recv { if (client.ip ~ a) { return(pass); } } } varnish v1 -errvcl {Expected ';' got ')'} { backend b { .host = "${localhost}"; } acl a { "10.1.2" ); } sub vcl_recv { if (client.ip ~ a) { return(pass); } } } varnish v1 -vcl { backend b { .host = "${localhost}"; } acl a { ! "10.1.3"; ("...com" / 22); (!"...com"); } sub vcl_recv { if (client.ip ~ a) { return(pass); } } } varnish v1 -errvcl {Operator > not possible on IP} { backend b { .host = "${localhost}"; } sub vcl_recv { if (client.ip > "${localhost}") { return(pass); } } } varnish v1 -vcl { backend b { .host = "${localhost}"; } acl a { "10.1.1"/25; "10.1.3"/26; "10.1.3"/25; "10.1.2"/25; "10.1.2"/26; "10.1.4"/25; "10.2.66"/23; ! 
"10.2.64"/23; "10.2.68"/23; } sub vcl_recv { if (client.ip ~ a) { return(pass); } } } varnish v1 -errvcl {.../mask is not numeric.} { backend b { .host = "${localhost}"; } acl a { "10.0.1.0/1bc"; } sub vcl_recv { if (client.ip ~ a) { return(pass); } } } varnish v1 -errvcl {/mask only allowed once} { backend b { .host = "${localhost}"; } acl a { "10.0.1.0/22" / 22; } sub vcl_recv { if (client.ip ~ a) { return(pass); } } } varnish v1 -errvcl {Expected a flag at:} { backend b { .host = "${localhost}"; } acl a + foobar { "10.0.1.0/22" / 22; } sub vcl_recv { if (client.ip ~ a) { return(pass); } } } varnish v1 -errvcl {Unknown ACL flag:} { backend b { .host = "${localhost}"; } acl a +foobar { "10.0.1.0/22" / 22; } sub vcl_recv { if (client.ip ~ a) { return(pass); } } } varnish-7.5.0/bin/varnishtest/tests/v00018.vtc000066400000000000000000000072711457605730600210610ustar00rootroot00000000000000varnishtest "VCL compiler coverage test: vcc_action.c" varnish v1 -vcl { backend b { .host = "${localhost}"; } sub vcl_miss { return(synth(100,req.url)); } sub vcl_hit { return(synth(100,"the butter please")); } sub vcl_deliver { return(synth(resp.status, resp.reason)); } } varnish v1 -errvcl {Variable is read only.} { backend b { .host = "${localhost}"; } sub vcl_miss { set now += 1s; } } varnish v1 -vcl { backend b { .host = "${localhost}"; } sub vcl_backend_response { set beresp.ttl /= 2; } } varnish v1 -errvcl {Expected '=' got '>>'} { backend b { .host = "${localhost}"; } sub vcl_backend_response { set beresp.ttl >>= 2; } } varnish v1 -errvcl {Expected '=' got '+='} { backend b { .host = "${localhost}"; } sub vcl_backend_fetch { set bereq.backend += b; } } varnish v1 -errvcl {Expected ';' got 'if'} { backend b { .host = "${localhost}"; } /* XXX: This should not really be an synth */ sub vcl_recv { set req.url = "foo" if "bar"; } } varnish v1 -errvcl {Unknown property 'foo' for type HTTP} { backend b { .host = "${localhost}"; } sub vcl_hash { hash_data(req.foo); } } varnish v1 -vcl { backend b { .host = "${localhost}"; } sub vcl_recv { set req.url = 1; } } varnish v1 -errvcl {Expected '=' got '+='} { backend b { .host = "${localhost}"; } sub vcl_backend_response { set beresp.do_gzip += 1; } } varnish v1 -vcl { backend b { .host = "${localhost}"; } sub vcl_backend_response { set beresp.do_gzip = true; } } varnish v1 -vcl { backend b { .host = "${localhost}"; } sub vcl_backend_response { set beresp.do_gzip = false; } } varnish v1 -errvcl {Symbol not found: 'mu'} { backend b { .host = "${localhost}"; } sub vcl_backend_response { set beresp.do_gzip = mu; } } varnish v1 -errvcl {Variable cannot be unset} { backend b { .host = "${localhost}"; } sub vcl_backend_response { unset beresp.do_gzip; } } varnish v1 -errvcl {Variable cannot be set.} { backend b { .host = "${localhost}"; } sub vcl_backend_fetch { set bereq.body = "foo"; } } varnish v1 -errvcl {Unknown token '<<' when looking for STRING} { backend b { .host = "${localhost}"; } sub vcl_recv { ban (<<); } } varnish v1 -errvcl {Symbol not found} { backend b { .host = "${localhost}"; } sub vcl_recv { ban_hash (if); } } varnish v1 -vcl { backend b { .host = "${localhost}"; } sub vcl_recv { ban ("req.url ~ foo"); } } varnish v1 -errvcl "Symbol not found" { backend b { .host = "${localhost}"; } sub vcl_recv { kluf ; } } varnish v1 -errvcl {Unknown token '<<' when looking for STRING} { backend b { .host = "${localhost}"; } sub vcl_synth { synthetic( << "foo"; } } varnish v1 -errvcl {Missing argument.} { backend b { .host = "${localhost}"; } sub vcl_recv { 
return(synth); } } varnish v1 -errvcl {Arguments not allowed.} { backend b { .host = "${localhost}"; } sub vcl_recv { return(pipe(XXX); } } varnish v1 -errvcl {Expected return action name.} { backend b { .host = "${localhost}"; } sub vcl_recv { return(foobar); } } # issue #936 varnish v1 -errvcl {Not a valid action in subroutine 'vcl_recv'} { backend foo { .host = "${localhost}"; } sub vcl_recv { synthetic("XXX"); return (synth(503)); } } varnish v1 -errvcl {Symbol 'vcl_recv' has wrong type (sub), expected vcl:} { sub vcl_recv { return (vcl(vcl_recv)); } } varnish v1 -syntax 4.0 -errvcl {Symbol not found:} { sub vcl_recv { return (vcl(vcl_recv)); } } varnish v1 -errvcl {Syntax error} { import directors; sub vcl_recv { set req.backend_hint = directors.round_robin.backend(); } } varnish v1 -errvcl {Syntax error} { import directors; sub vcl_recv { directors.round_robin.backend(); } } varnish v1 -errvcl {Expected '.' got ';'} { import directors; sub vcl_init { new rr = directors.round_robin(); rr; } } varnish-7.5.0/bin/varnishtest/tests/v00019.vtc000066400000000000000000000035541457605730600210620ustar00rootroot00000000000000varnishtest "VCL compiler coverage test: vcc_token.c" varnish v1 -errvcl {Unterminated inline C source, starting at} " C{ " varnish v1 -vcl { backend b { .host = "${localhost}"; } # comment sub vcl_recv { set req.url = "x"; } } varnish v1 -errvcl {Unterminated /* ... */ comment, starting at} { backend b { .host = "${localhost}"; } /* } varnish v1 -errvcl {Unterminated long-string, starting at} { backend b { .host = "${localhost}"; } {" } } varnish v1 -errvcl {Unterminated long-string, starting at} { backend b { .host = "${localhost}"; } """ "" } varnish v1 -errvcl {Unterminated string, starting at} { backend b { .host = "${localhost}"; } " } varnish v1 -cliok "param.set vcc_feature +allow_inline_c" -vcl { backend b { .host = "${localhost}"; } sub vcl_recv { C{ int i; (void)i; }C } } varnish v1 -errvcl {Syntax error at} { backend b { .host = "${localhost}"; } ? 
} varnish v1 -errvcl {Comparison of different types: STRING '==' INT} { backend b { .host = "${localhost}"; } sub vcl_recv { if ("foo" + "bar" == 777) { set req.http.host = 1; } } } varnish v1 -errvcl {Comparison of different types: STRING '==' INT} { backend b { .host = "${localhost}"; } sub vcl_recv { if ("foo" + "bar" == 777) { set req.http.host = 1; } } } varnish v1 -errvcl {Comparison of different types: STRING '==' INT} { backend b { .host = "${localhost}"; } sub vcl_recv { if ("foo" + "bar" == 777) { set req.http.host = 1; } } } varnish v1 -errvcl {Symbol not found: 'req.http.req.http.foo'} { backend b { .host = "${localhost}"; } sub vcl_recv { set req.http.req.http.foo = "bar"; } } varnish v1 -errvcl {Unknown token '--' when looking for INT} { backend be none; sub vcl_synth { set resp.status = --200; } } varnish v1 -errvcl "Syntax error" { import debug; backend be none; sub vcl_init { new _invalid = debug.obj(); } } varnish-7.5.0/bin/varnishtest/tests/v00020.vtc000066400000000000000000000270411457605730600210470ustar00rootroot00000000000000varnishtest "VCL compiler coverage test: vcc_parse.c & vcc_expr.c" varnish v1 -cliok "param.set vcc_feature +allow_inline_c" -vcl { backend b { .host = "${localhost}"; } C{ #include }C } varnish v1 -errvcl {Found: '0' at} { 0; } # The next test issues a quite confusing error message: # Expected an action, 'if', '{' or '}'\n # ('Default' Line 42 Pos 1)\n # sub vcl_recv {\n # ###-----------\n # \n # It's actually complaining about the first token in # the default.vcl which is appended after the proffered # VCLs tokenstream. # XXX: A better error message would be desirable varnish v1 -errvcl {Symbol cannot be used here} " sub vcl_recv { { } { " varnish v1 -errvcl {Comparison of different types: INT '!=' STRING} { sub vcl_recv { if (!req.restarts != req.url) { set req.http.foo = "foo"; } } } varnish v1 -errvcl {Symbol 'vcl_recv' can only be used as a SUB expression} { backend proforma none; sub vcl_recv { set req.http.foo = vcl_recv; } } varnish v1 -errvcl {Symbol 'vcl_recv' can only be used as a SUB expression} { backend proforma none; sub vcl_recv { return (synth(200)); } sub vcl_synth { set req.http.foo = vcl_recv; } } varnish v1 -errvcl {Symbol 'asub' can only be used as a SUB expression} { backend proforma none; sub asub {} sub vcl_recv { set req.http.foo = asub; } } varnish v1 -errvcl {Symbol 'acl' type (reserved) can not be used in expression.} { import std; sub vcl_recv { call acl; } } varnish v1 -errvcl {Symbols named 'vcl_*' are reserved.} { import debug; sub vcl_recv { debug.test_probe(vcl_sub1, vcl_sub2); } } varnish v1 -errvcl {Operator * not possible on type STRING.} { sub vcl_recv { set req.http.foo = "bla" * "foo"; } } varnish v1 -errvcl {DURATION + INT not possible.} { sub vcl_backend_response { set req.ttl = req.ttl + beresp.status; } } varnish v1 -errvcl {BOOL + BOOL not possible.} { sub vcl_backend_response { if (beresp.do_gzip + beresp.do_gunzip) { } } } varnish v1 -errvcl {Operator % only possible on INT} { sub vcl_recv { if (req.ttl % 1000) { } } } varnish v1 -vcl { sub vcl_recv { # 3555 (define after use) set req.backend_hint = b; } backend b { .host = "${localhost}"; } sub vcl_recv { set req.http.foo = "foo" + "bar"; set req.http.foo = "foo" + 1; set req.http.foo = "foo" + 1.5; set req.http.foo = "foo" + now; set req.http.foo = now + 1s; set req.http.foo = now - 1s; set req.http.foo = now - now; set req.http.foo = 1 + 1s; set req.http.foo = 1 + 1; set req.http.foo = 1 - 1; set req.http.foo = 1 + -1; set req.http.foo = 1 - -1; 
set req.http.foo = 3 * 2; set req.http.foo = 3 / 2; set req.http.foo = 3 * -2; set req.http.foo = 3 / -2; set req.http.foo = 3.6 + 1.4; set req.http.foo = 3.6 - 1.4; set req.http.foo = 3.6 + -1.4; set req.http.foo = 3.6 - -1.4; set req.http.foo = 3.6 * 1.4; set req.http.foo = 3.6 / 1.4; set req.http.foo = 3.6 * -1.4; set req.http.foo = 3.6 / -1.4; set req.http.foo = 1.0 + 1; set req.http.foo = 1.0 - 1; set req.http.foo = 1.0 + -1; set req.http.foo = 1.0 - -1; set req.http.foo = 3.0 * 2; set req.http.foo = 3.0 / 2; set req.http.foo = 3.0 * -2; set req.http.foo = 3.0 / -2; set req.http.foo = req.http.foo + "bar" ~ "bar"; set req.http.foo = req.http.foo + "bar" !~ "bar"; set req.http.foo = "foo" + req.ttl; set req.http.foo = client.ip + ", " + server.ip; set req.ttl = -1s; set req.ttl = 1s; set req.ttl *= 1.5; set req.ttl = 1.5 s * 2.5; set req.ttl = 1.5 s / 2.5; set req.ttl = 1.5h + 1.5s; set req.ttl = 1.5h - 1.5s; if (req.ttl) { } if (!req.ttl) { } if (req.ttl > 1d) { } if (req.ttl < 1d) { } if (1) { } if (2 == 3) { } if (2 < 3) { } if (2 > 3) { } if (req.backend_hint == b) { } if (req.backend_hint != b) { } if (req.url) { } if (!req.url) { } if (req.url == "foo") { } if (req.url != "foo") { } if (req.url ~ "foo") { } if (req.url !~ "foo") { } if (1) { } elsif (2) { } elseif (3) { } else if (4) { } else { } # regression test for #2729 if (req.grace < 0s || req.grace < 1s && req.grace < 2s) { return (pass); } } } # XXX: not the most clear error message varnish v1 -errvcl {STRING - STRING not possible.} { sub vcl_recv { set req.http.foo = "foo" - "bar"; } } varnish v1 -errvcl {TIME + STRING not possible.} { sub vcl_recv { set req.ttl = now + "foo"; } } varnish v1 -errvcl {TIME + TIME not possible.} { sub vcl_recv { set req.ttl = now + now; } } varnish v1 -errvcl {INT + STRING not possible.} { sub vcl_backend_response { set beresp.status = 1 + "foo"; } } varnish v1 -errvcl {INT + TIME not possible.} { sub vcl_backend_response { set beresp.status = 1 + now; } } varnish v1 -errvcl {DURATION + INT not possible.} { sub vcl_recv { set req.ttl = 1s + 1; } } varnish v1 -errvcl {DURATION + TIME not possible.} { sub vcl_recv { set req.ttl = 1s + now; } } varnish v1 -errvcl {DURATION + STRING not possible.} { sub vcl_recv { set req.ttl = 1s + "foo"; } } varnish v1 -errvcl {IP + IP not possible.} { sub vcl_recv { set req.ttl = client.ip + server.ip; } } varnish v1 -errvcl {Name of subroutine, 'foo.bar', contains illegal character '.'} { sub foo.bar { } sub vcl_recv { call foo.bar; } } # 3555 (define after use) varnish v1 -vcl { import debug; backend be none; sub caller { debug.call(foo); } sub foo { } sub vcl_recv { call caller; call foo; } } # 3555 (don't launder partial symbols) varnish v1 -errvcl {Symbol not found: 'foo.bar'} { sub vcl_recv { set req.backend_hint = foo.bar; } } varnish v1 -errvcl {The names 'vcl_*' are reserved for subroutines.} { sub vcl_bar { } sub vcl_recv { call vcl_bar; } } varnish v1 -errvcl {Function returns VOID} { import vtc; sub vcl_recv { set req.http.foo = vtc.sleep(1m); } } varnish v1 -errvcl {Not available in subroutine 'vcl_recv'.} { import blob; backend b { .host = "${localhost}"; } sub vcl_recv { blob.encode(HEX, LOWER, req.hash); } } varnish v1 -errvcl {Not available in subroutine 'vcl_hash'.} { import blob; backend b { .host = "${localhost}"; } sub vcl_hash { blob.encode(HEX, LOWER, req.hash); } } varnish v1 -errvcl {Not available in subroutine 'vcl_recv'.} { backend b { .host = "${localhost}"; } sub vcl_recv { set req.http.foo = 100 + beresp.status; } } varnish v1 
-cliok "param.set vcc_feature -err_unref" varnish v1 -errvcl {Impossible Subroutine} { backend b { .host = "127.0.0.1"; } sub foo { set req.http.foo = 100 + beresp.status; } } varnish v1 -cliok "param.set vcc_feature +err_unref" varnish v1 -errvcl { ('' Line 4 Pos 44) -- (Pos 49) sub foo { set req.http.foo = 100 + beresp.status; } -------------------------------------------######---------- Not available from subroutine 'vcl_recv'. ...in subroutine "foo" ('' Line 4 Pos 13) sub foo { set req.http.foo = 100 + beresp.status; } ------------###-------------------------------------------- ...called from "vcl_recv" ('' Line 5 Pos 29) sub vcl_recv { call foo; } ----------------------------###--- } { backend b { .host = "${localhost}"; } sub foo { set req.http.foo = 100 + beresp.status; } sub vcl_recv { call foo; } } varnish v1 -errvcl { ('' Line 4 Pos 44) -- (Pos 49) sub foo { set req.http.foo = 100 + beresp.status; } -------------------------------------------######---------- Not available from subroutine 'vcl_recv'. ...in subroutine "foo" ('' Line 4 Pos 13) sub foo { set req.http.foo = 100 + beresp.status; } ------------###-------------------------------------------- ...called from "bar" ('' Line 5 Pos 24) sub bar { call foo; } -----------------------###--- ...called from "vcl_recv" ('' Line 6 Pos 29) sub vcl_recv { call bar; } ----------------------------###--- } { backend b { .host = "${localhost}"; } sub foo { set req.http.foo = 100 + beresp.status; } sub bar { call foo; } sub vcl_recv { call bar; } } varnish v1 -errvcl {Name of ACL, 'foo.bar', contains illegal character '.'} { acl foo.bar { } sub vcl_recv { if (client.ip ~ foo.bar) { } } } varnish v1 -errvcl {Expected 'from path ...'} { import std to; } varnish v1 -errvcl {INT * BLOB not possible.} { import blob; sub vcl_deliver { set resp.status = 100 * blob.decode(HEX, encoded="a"); } } varnish v1 -errvcl {INT * BLOB not possible.} { import blob; sub vcl_deliver { set resp.status = 100 * :4thASR0O18ZxnoKtc4zd8KuO25rPvwvMQyAvRfilz6o=:); } } # XXX: should spot nonexistent storage varnish v1 -errvcl {Symbol not found: 'storage.foo'} { sub vcl_backend_response { set beresp.storage = storage.foo; } } varnish v1 -errvcl {Comparison of different types: BACKEND '==' STRING} { sub vcl_backend_response { set beresp.http.bereq_backend = bereq.backend; if (bereq.backend == beresp.http.bereq_backend) { set beresp.http.bereq_backend_cmp = "ok"; } } } varnish v1 -errvcl {'!' 
must be followed by BOOL, found REAL.} { import std; sub vcl_recv { if (!std.random(0,1)) { } } } varnish v1 -errvcl {'&&' must be followed by BOOL, found REAL.} { import std; sub vcl_recv { if (0 && std.random(0,1)) { } } } varnish v1 -errvcl {'||' must be followed by BOOL, found REAL.} { import std; sub vcl_recv { if (0 || std.random(0,1)) { } } } varnish v1 -errvcl {'&&' must be preceeded by BOOL, found REAL.} { import std; sub vcl_recv { if (std.random(0,1) && 0) { } } } varnish v1 -errvcl {'||' must be preceeded by BOOL, found REAL.} { import std; sub vcl_recv { if (std.random(0,1) || 0) { } } } varnish v1 -errvcl {Symbol 'acl' type (reserved) can not be used in expression.} { import std; sub vcl_recv { if (client.ip ~ acl) {} } } varnish v1 -errvcl {Symbol 'default' is a reserved word.} { import std; sub vcl_recv { set req.http.foo = default; } } varnish v1 -errvcl {Cannot convert HTTP to STRING} { sub vcl_synth { set resp.http.foo = resp; } } server s1 { rxreq txresp -hdr "bar: X" } -start varnish v1 -vcl+backend { // Ticket 2727 sub vcl_recv { if ((2 - 1) > 0) { # nothing } } } varnish v1 -vcl+backend { import std; import debug; sub vcl_deliver { // Ticket 2745 debug.sethdr(resp.http.rst, req.restarts); set resp.http.foo = (resp.http.foo + resp.http.bar) == ("X" + resp.http.foo); // Ticket 2809 set resp.http.bar = true == false; set resp.http.a = req.http.inexistent; // correct? #3844 set resp.http.a-eq-inexistent = resp.http.a == req.http.inexistent; // discussion in #3844 - and should !!hdr work? set resp.http.is-a = !(!resp.http.a); set resp.http.is-inexistent = !(!req.http.inexistent); } sub vcl_deliver { set resp.http.p = (0 + 999999999999999); set resp.http.n = (0 - 999999999999999); if (resp.status == -(-200)) { set resp.http.o = -std.integer("-200"); } } } -start client c1 { txreq rxresp expect resp.http.rst == "0" expect resp.http.foo == "true" expect resp.http.bar == "false" expect resp.http.p == 999999999999999 expect resp.http.n == -999999999999999 expect resp.http.o == 200 expect resp.http.a == "" # correct? 
#3844 expect resp.http.a-eq-inexistent == "false" # discussion in #3844 expect resp.http.is-a == "true" expect resp.http.is-inexistent == "false" } -run varnish v1 -vcl { import std; backend be none; sub vcl_recv { return (synth(200)); } sub vcl_synth { if (req.url ~ "double-minus") { set resp.status = -(-204); } if (req.url ~ "minus-std") { set resp.status = -std.integer("-204"); } return (deliver); } } client c1 { txreq -url "/double-minus" rxresp expect resp.status == 204 txreq -url "/minus-std" rxresp expect resp.status == 204 } -run varnish-7.5.0/bin/varnishtest/tests/v00021.vtc000066400000000000000000000072671457605730600210600ustar00rootroot00000000000000varnishtest "VCL compiler coverage test: vcc_xref.c vcc_var.c vcc_symb.c" varnish v1 -errvcl {Variable is read only.} { backend b { .host = "${localhost}"; } sub vcl_deliver { set obj.ttl = 1 w; } } varnish v1 -errvcl {Variable is read only.} { backend b { .host = "${localhost}"; } sub foo { set obj.ttl = 1 w; } sub vcl_deliver { call foo; } } varnish v1 -errvcl {Not available in subroutine 'vcl_recv'.} { backend b { .host = "${localhost}"; } sub vcl_recv { set obj.ttl = 1 w; } } varnish v1 -errvcl {Not available from subroutine 'vcl_recv'.} { backend b { .host = "${localhost}"; } sub foo { set obj.ttl = 1 w; } sub vcl_recv { call foo; } } varnish v1 -errvcl "Symbol not found" { backend b { .host = "${localhost}"; } sub vcl_recv { discard; } } varnish v1 -errvcl "Symbol not found" { backend b { .host = "${localhost}"; } sub foo { discard; } sub vcl_recv { call foo; } } varnish v1 -errvcl { Subroutine recurses on ('' Line 5 Pos 13) sub foo { call foo; } ------------###-------------- ...called from "foo" ('' Line 5 Pos 24) sub foo { call foo; } -----------------------###--- } { backend b { .host = "${localhost}"; } sub foo { call foo; } sub vcl_recv { call foo; } } varnish v1 -errvcl { ('' Line 5 Pos 13) sub bar { call foo; } ------------###-------------- ...called from "foo" ('' Line 6 Pos 24) sub foo { call bar; } -----------------------###--- ...called from "bar" ('' Line 5 Pos 24) sub bar { call foo; } -----------------------###--- } { backend b { .host = "${localhost}"; } sub bar { call foo; } sub foo { call bar; } sub vcl_recv { call foo; } } varnish v1 -errvcl {Unused acl foo, defined:} { backend b { .host = "${localhost}"; } acl foo { "${localhost}"; } } varnish v1 -errvcl {Unused sub foo, defined:} { backend b { .host = "${localhost}"; } sub foo { } } # deliberately testing for name "none" varnish v1 -errvcl {Unused sub none, defined:} { backend b { .host = "${localhost}"; } sub none { } } varnish v1 -errvcl {Invalid return "deliver"} { backend b { .host = "${localhost}"; } sub vcl_recv { call foo; } sub foo { return (deliver); } } varnish v1 -errvcl {HTTP header (buckinghambuckingham..) 
is too long.} { backend foo { .host = "${localhost}"; } sub vcl_deliver { set resp.http.buckinghambuckinghambuckinghambuckinghambuckinghambuckinghambuckinghambuckinghambuckinghambuckinghambuckinghambuckinghambucking = "foobar"; } } varnish v1 -vcl { backend foo { .host = "${localhost}"; } sub vcl_deliver { set resp.http.buckinghambuckinghambuckinghambuckinghambuckinghambuckinghambuckinghambuckinghambuckinghambuckinghambuckinghambuckinghambuckin = "foobar"; } } varnish v1 -errvcl {Symbol not found: 'req.foobar'} { backend foo { .host = "${localhost}"; } sub vcl_recv { set req.foobar = 3; } } varnish v1 -errvcl {Symbol 'anacl' has wrong type (acl), expected sub:} { sub vcl_recv { if (client.ip ~ anacl) { } } sub anacl { } } varnish v1 -errvcl {Symbols named 'vcl_*' are reserved.} { sub vcl_recv { if (client.ip ~ vcl_foo) { } } } # 3628 varnish v1 -vcl { backend be none; acl foo { } sub vcl_recv { if (client.ip ~ foo || server.ip ~ foo) { } } } varnish v1 -errvcl {Expression has type BOOL, expected ACL} { sub vcl_recv { if (client.ip ~ true) { } } } varnish v1 -errvcl {Symbol 'default' is a reserved word.} { sub vcl_recv { if (client.ip ~ default) { } } } varnish v1 -syntax 4.1 -errvcl {(Only available when VCL syntax <= 4.0)} { sub vcl_recv { set req.esi = false; } } varnish v1 -syntax 4.0 -errvcl {(Only available when 4.1 <= VCL syntax)} { sub vcl_deliver { set resp.do_esi = false; } } varnish-7.5.0/bin/varnishtest/tests/v00022.vtc000066400000000000000000000003501457605730600210430ustar00rootroot00000000000000varnishtest "Various VCL compiler coverage tests - DNS dependent" feature dns varnish v1 -errvcl {resolves to too many addresses} { backend b none; sub vcl_recv { if (remote.ip == "dns-canary-multi.varnish-cache.org") {} } } varnish-7.5.0/bin/varnishtest/tests/v00024.vtc000066400000000000000000000007271457605730600210550ustar00rootroot00000000000000varnishtest "Test that headers can be compared" server s1 { rxreq expect req.url == "/foo" txresp -status 200 -body "1" } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.http.etag == req.http.if-none-match) { return(synth(400,"FOO")); } } } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 expect resp.bodylen == 1 txreq -url "/foo" -hdr {etag: "foo"} -hdr {if-none-match: "foo"} rxresp expect resp.status == 400 } -run varnish-7.5.0/bin/varnishtest/tests/v00025.vtc000066400000000000000000000131021457605730600210450ustar00rootroot00000000000000varnishtest "More VCL coverage" server s1 { rxreq expect req.http.c_id == "Samuel B. Nobody" txresp rxreq expect req.http.c_id == ${localhost} txresp } -start # Test max_vcl param varnish v1 -arg "-i J.F.Nobody" -vcl+backend { } -start varnish v1 -cliok "param.set max_vcl 2" varnish v1 -cliok "param.set max_vcl_handling 2" varnish v1 -vcl+backend { } varnish v1 -errvcl {Too many (2) VCLs already loaded} {} varnish v1 -cliok "param.set max_vcl_handling 1" varnish v1 -cliexpect "Remember to vcl.discard the old/unused VCLs." \ {vcl.inline foobar "vcl 4.1; backend b1 {.host=\"${s1_addr}\";}"} varnish v1 -cliok "param.set max_vcl 100" varnish v1 -syntax 4.0 -vcl+backend { import std; import directors; sub vcl_init { new rr = directors.round_robin(); rr.add_backend(s1); } sub vcl_recv { set client.identity = "Samuel B. 
Nobody"; set req.backend_hint = rr.backend(); } sub vcl_deliver { set resp.http.server_port = std.port(server.ip); set resp.http.server_port_foo = std.port(server.ip) + "_foo"; set resp.http.ttl1 = (req.ttl + 10s); set resp.http.ttl2 = req.ttl + 10s; set resp.http.id = server.identity; set resp.http.esi = req.esi; set resp.http.be = req.backend_hint; set resp.http.c_id = client.identity; set resp.http.l_ip = local.ip; set resp.http.r_ip = remote.ip; if (obj.uncacheable) { } set resp.http.o_age = obj.age; set resp.http.o_ttl = obj.ttl; set resp.http.o_grace = obj.grace; set resp.http.o_keep = obj.keep; } sub vcl_backend_response { set beresp.http.bereq_backend = bereq.backend; set beresp.http.beresp_backend = beresp.backend; set beresp.http.keep = beresp.keep; set beresp.http.stv = beresp.storage; set beresp.http.be_ip = beresp.backend.ip; set beresp.http.be_nm = beresp.backend.name; set beresp.http.unc = bereq.uncacheable; if (beresp.ttl > 3 s) { set beresp.http.ttl = "long"; } else { set beresp.http.ttl = "short"; } } sub vcl_hit { if (obj.status != 200) { return(synth(700)); } if (obj.proto) { } if (obj.reason) { } if (obj.keep > 1m) { } if (obj.grace < 3m) { return (deliver); } if (obj.ttl < 3m) { return (deliver); } } sub vcl_backend_fetch { set bereq.http.c_id = client.identity; if (bereq.between_bytes_timeout < 10s) { set bereq.http.quick = "please"; } if (bereq.connect_timeout < 10s) { set bereq.http.hello = "please"; } set bereq.connect_timeout = 10s; } } client c1 { txreq rxresp expect resp.status == 200 expect resp.http.bereq_backend == "rr" expect resp.http.beresp_backend == "s1" expect resp.http.be_ip == "${localhost}" expect resp.http.be_nm == "s1" expect resp.http.be == "rr" txreq rxresp } -run varnish v1 -syntax 4.0 -errvcl {Symbol not found: 'sess.xid' (Only available when 4.1 <= VCL syntax)} { sub vcl_recv { set req.http.Sess-XID = sess.xid; } } varnish v1 -syntax 4.0 -errvcl {Symbol not found: 'sess.xid' (Only available when 4.1 <= VCL syntax)} { sub vcl_backend_fetch { set bereq.http.Sess-XID = sess.xid; } } varnish v1 -syntax 4.0 -errvcl {Symbol not found: 'local.endpoint' (Only available when 4.1 <= VCL syntax)} { sub vcl_recv { set req.http.Endpoint = local.endpoint; } } varnish v1 -syntax 4.0 -errvcl {Symbol not found: 'local.endpoint' (Only available when 4.1 <= VCL syntax)} { sub vcl_backend_fetch { set bereq.http.Endpoint = local.endpoint; } } varnish v1 -syntax 4.0 -errvcl {Symbol not found: 'local.socket' (Only available when 4.1 <= VCL syntax)} { sub vcl_recv { set req.http.Socket = local.socket; } } varnish v1 -syntax 4.0 -errvcl {Symbol not found: 'local.socket' (Only available when 4.1 <= VCL syntax)} { sub vcl_backend_fetch { set bereq.http.Socket = local.socket; } } varnish v1 -syntax 4.1 -vcl+backend { sub vcl_backend_fetch { set bereq.http.c_id = client.identity; } sub vcl_backend_response { set beresp.http.B-Sess-XID = sess.xid; set beresp.http.B-Endpoint = local.endpoint; set beresp.http.B-Socket = local.socket; } sub vcl_deliver { set resp.http.C-Sess-XID = sess.xid; set resp.http.C-Endpoint = local.endpoint; set resp.http.C-Socket = local.socket; } } client c1 { txreq -url "/uncached" rxresp expect resp.status == 200 expect resp.http.C-Sess-XID ~ "^[0-9]+$" expect resp.http.B-Sess-XID ~ "^[0-9]+$" expect resp.http.C-Sess-XID == resp.http.B-Sess-XID expect resp.http.C-Endpoint ~ ".?${v1_addr}.?:${v1_port}" expect resp.http.B-Endpoint ~ ".?${v1_addr}.?:${v1_port}" expect resp.http.C-Socket == "a0" expect resp.http.B-Socket == "a0" } -run varnish v1 
-stop server s1 { rxreq txresp rxreq txresp } -start varnish v2 -arg "-a foo=${tmpdir}/foo.sock -a bar=${tmpdir}/bar.sock" \ -syntax 4.1 -vcl+backend { sub vcl_backend_response { set beresp.http.B-Endpoint = local.endpoint; set beresp.http.B-Socket = local.socket; } sub vcl_deliver { set resp.http.C-Endpoint = local.endpoint; set resp.http.C-Socket = local.socket; } } -start client c2 -connect "${tmpdir}/foo.sock" { txreq rxresp expect resp.status == 200 expect resp.http.C-Endpoint == "${tmpdir}/foo.sock" expect resp.http.B-Endpoint == "${tmpdir}/foo.sock" expect resp.http.C-Socket == "foo" expect resp.http.B-Socket == "foo" } -run # The backend endpoint/socket may be either of the two possibilities, # because the busyobj may point to the session started for the first # fetch. client c2 -connect "${tmpdir}/bar.sock" { txreq rxresp expect resp.status == 200 expect resp.http.C-Endpoint == "${tmpdir}/bar.sock" expect resp.http.B-Endpoint ~ "^${tmpdir}/(bar|foo).sock$" expect resp.http.C-Socket == "bar" expect resp.http.B-Socket ~ "^(bar|foo)$" } -run varnish-7.5.0/bin/varnishtest/tests/v00027.vtc000066400000000000000000000007371457605730600210610ustar00rootroot00000000000000varnishtest "Check that backend named 'default' is the default" server s1 { rxreq txresp -bodylen 25 } -start server s2 { rxreq txresp -bodylen 52 } -start varnish v1 -vcl { backend s1 { .host = "${s1_addr}"; .port = "${s1_port}"; } backend default { .host = "${s2_addr}"; .port = "${s2_port}"; } sub vcl_backend_fetch { if (bereq.url != bereq.url) { set bereq.backend = s1; } } } -start client c1 { txreq rxresp expect resp.bodylen == 52 } -run varnish-7.5.0/bin/varnishtest/tests/v00031.vtc000066400000000000000000000010001457605730600210340ustar00rootroot00000000000000varnishtest "param vcc_feature::err_unref" varnish v1 -errvcl {Unused backend c, defined:} { backend b { .host = "${localhost}"; } backend c { .host = "${localhost}"; } } varnish v1 -cliok "param.set vcc_feature -err_unref" varnish v1 -vcl { backend b { .host = "${localhost}"; } backend c { .host = "${localhost}"; } } varnish v1 -cliok "param.set vcc_feature +err_unref" varnish v1 -errvcl {Unused backend c, defined:} { backend b { .host = "${localhost}"; } backend c { .host = "${localhost}"; } } varnish-7.5.0/bin/varnishtest/tests/v00032.vtc000066400000000000000000000006351457605730600210520ustar00rootroot00000000000000varnishtest "Storage related vcl variables" server s1 { rxreq txresp } -start varnish v1 -syntax 4.0 -vcl+backend { sub vcl_backend_response { set beresp.http.has_s0 = storage.s0; set beresp.http.has_Transient = storage.Transient; } } -start varnish v1 -cliok "storage.list" client c1 { txreq rxresp expect resp.http.has_s0 == storage.s0 expect resp.http.has_Transient == storage.Transient } -run varnish-7.5.0/bin/varnishtest/tests/v00033.vtc000066400000000000000000000017371457605730600210570ustar00rootroot00000000000000varnishtest "Stevedore variables and BYTES type test" server s1 { rxreq txresp -bodylen 4 rxreq txresp -bodylen 5 } -start varnish v1 -syntax 4.0 -vcl+backend { sub vcl_backend_response { set beresp.http.foo = storage.Transient.used_space + 1 B + 1 KB + 1 MB + 1GB + 1TB; if (bereq.url == "/foo") { set beresp.storage = storage.Transient; } } sub vcl_deliver { set resp.http.bar = storage.Transient.used_space > 0B; } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.http.foo == 1100586419201 expect resp.http.bar == false txreq -url /foo rxresp expect resp.status == 200 expect resp.http.foo == 1100586419201 expect 
resp.http.bar == true } -run varnish v1 -syntax 4.0 -errvcl {Expected BYTES unit (B, KB, MB...) got '"X"'} { sub vcl_recv { if (storage.Transient.free_space > 4 "X") { } } } varnish v1 -syntax 4.0 -errvcl {Unknown BYTES unit} { sub vcl_recv { if (storage.Transient.free_space > 4 X) { } } } varnish-7.5.0/bin/varnishtest/tests/v00034.vtc000066400000000000000000000015651457605730600210570ustar00rootroot00000000000000varnishtest "Test sub and backend redefinition" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { } -start varnish v1 -errvcl {Subroutine 'c1' redefined} { backend foo { .host = "${localhost}"; } sub c1 { } sub c1 { } sub vcl_recv { call c1; } } varnish v1 -errvcl {Backend 's1' redefined} { backend s1 { .host = "${localhost}"; } backend s1 { .host = "${localhost}"; } } varnish v1 -errvcl {Probe 'p1' redefined} { probe p1 { } probe p1 { } backend s1 { .host = "${localhost}"; .probe = p1;} } varnish v1 -errvcl {Expected '(' got ';'} { backend s1 { .host = "${localhost}"; } sub vcl_recv { return; } } varnish v1 -vcl+backend { sub foobar { set resp.http.foo = "foo"; return; set resp.http.foo = "bar"; } sub vcl_deliver { call foobar; } } client c1 { txreq rxresp expect resp.http.foo == "foo" expect resp.http.bar == } -run varnish-7.5.0/bin/varnishtest/tests/v00037.vtc000066400000000000000000000006441457605730600210570ustar00rootroot00000000000000varnishtest "test restart in miss" server s1 { rxreq txresp -body "FOOBAR" } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.restarts > 0) { unset req.http.foobar; } } sub vcl_miss { if (req.http.foobar) { return (restart); } } sub vcl_deliver { set resp.http.restarts = req.restarts; } } -start client c1 { txreq -hdr "foobar: snafu" rxresp expect resp.http.restarts == 1 } -run varnish-7.5.0/bin/varnishtest/tests/v00038.vtc000066400000000000000000000070331457605730600210570ustar00rootroot00000000000000varnishtest "VCL compiler coverage test: vcc_backend.c" varnish v1 -errvcl "IPv6 address lacks ']'" { backend b1 { .host = "[0:0:0:0"; } } varnish v1 -errvcl "IPv6 address has wrong port separator" { backend b1 { .host = "[0:0:0:0]/0"; } } varnish v1 -errvcl "with exactly three digits" { backend b1 { .host = "${localhost}"; .probe = { .expected_response = 1000; } } } varnish v1 -errvcl "Must specify .threshold with .window" { backend b1 { .host = "${localhost}"; .probe = { .window = 32; } } } varnish v1 -errvcl "Threshold must be 64 or less" { backend b1 { .host = "${localhost}"; .probe = { .threshold = 65; } } } varnish v1 -errvcl "Window must be 64 or less" { backend b1 { .host = "${localhost}"; .probe = { .window = 65; .threshold = 64; } } } varnish v1 -errvcl "Threshold can not be greater than window" { backend b1 { .host = "${localhost}"; .probe = { .window = 63; .threshold = 64; } } } varnish v1 -errvcl "NB: Backend Syntax has changed:" { backend b1 { set .host = "${localhost}"; } } varnish v1 -errvcl "Expected '{' or name of probe, got" { backend b1 { .host = "${localhost}"; .probe = "NONE"; } } varnish v1 -errvcl "Field 'port' redefined at:" { backend b1 { .host = "${localhost}"; .port = "NONE"; .port = "NONE"; } } varnish v1 -errvcl "Unknown field:" { backend b1 { .host = "${localhost}"; .fourscoreandsevenyearsago = "NONE"; } } varnish v1 -errvcl "Expected .host or .path." { backend b1 { .port = "NONE"; } } varnish v1 -errvcl "No default probe defined" { backend b1 { .probe = default; } } varnish v1 -errvcl "Only one default director possible." 
{ backend b1 { .host = "${localhost}"; } backend default { .host = "${localhost}"; } backend default { .host = "${localhost}"; } } varnish v1 -errvcl "Unused backend b1, defined:" { backend b1 { .host = "${localhost}"; } backend default { .host = "${localhost}"; } } varnish v1 -errvcl "Address redefinition at:" { backend b1 { .host = "${localhost}"; .path = "/path/to/uds"; } } varnish v1 -errvcl "Must be a valid path or abstract socket:" { backend b1 { .path = "server.sock"; } } varnish v1 -errvcl "Path too long for a Unix domain socket" { backend b1 { .path = "/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path/this/super/long/path"; } } varnish v1 -errvcl "Not a socket:" { backend b1 { .path = "${tmpdir}"; } } # VCC warns, but does not fail, if stat(UDS) fails with ENOENT. shell { rm -f ${tmpdir}/foo } varnish v1 -vcl { backend b None; } varnish v1 -cliexpect "(?s)Cannot stat:.+That was just a warning" \ {vcl.inline test "vcl 4.1; backend b {.path=\"${tmpdir}/foo\";}"} # VCC also warns but doesn't fail for EACCES. Tested in c00086.vtc. # The following test verifies that Varnish continues connecting to a # socket file, even if the listener at that location changes. server s1 -listen "${tmpdir}/server.sock" { rxreq txresp -hdr "Connection: close" -hdr "Cache-Control: max-age=0" } -start varnish v1 -vcl { backend b {.path = "${s1_sock}"; } } -start client c1 { txreq rxresp expect resp.status == 200 } -run server s1 -start client c1 -run varnish-7.5.0/bin/varnishtest/tests/v00039.vtc000066400000000000000000000021571457605730600210620ustar00rootroot00000000000000varnishtest "obj.hits vs Vary" server s1 { rxreq txresp -hdr "Vary: bar" -body "foobar" rxreq txresp -hdr "Vary: bar" -body "barf" } -start varnish v1 \ -arg "-p ban_lurker_sleep=0.01" \ -arg "-p ban_lurker_age=0.01" \ -vcl+backend { sub vcl_deliver { set resp.http.hits = obj.hits; } } -start client c1 { # This is a miss -> hits == 0 txreq -url "/" -hdr "Bar: 1" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.hits == 0 # This is a hit -> hits == 1 txreq -url "/" -hdr "Bar: 1" rxresp expect resp.status == 200 expect resp.bodylen == 6 expect resp.http.hits == 1 # This is a miss on different vary -> hits == 0 txreq -url "/" -hdr "Bar: 2" rxresp expect resp.status == 200 expect resp.bodylen == 4 expect resp.http.hits == 0 # This is a hit -> hits == 1 txreq -url "/" -hdr "Bar: 2" rxresp expect resp.status == 200 expect resp.bodylen == 4 expect resp.http.hits == 1 } -run # Ban everything on this hash-key varnish v1 -cliok "ban obj.http.vary ~ ." delay 1 # And run the entire test again to see that obj.hits got reset. 
server s1 -start client c1 -run varnish-7.5.0/bin/varnishtest/tests/v00040.vtc000066400000000000000000000007501457605730600210470ustar00rootroot00000000000000varnishtest "test failing in vcl_init{}" server s1 { rxreq txresp } -start varnish v1 -vcl+backend {} -start client c1 { txreq rxresp } -run varnish v1 -errvcl {VCL "vcl2" Failed initialization} { sub vcl_init { return(fail("Do Not Press This Button Again")); } backend b1 { .host = "${s1_addr}"; } } varnish v1 -errvcl {Do Not Press} { sub vcl_init { return(fail("Do Not Press This Button Again")); } backend b1 { .host = "${s1_addr}"; } } varnish v1 -cliok vcl.list varnish-7.5.0/bin/varnishtest/tests/v00041.vtc000066400000000000000000000203251457605730600210500ustar00rootroot00000000000000varnishtest "Test priv_task" feature !sanitizer server s1 { rxreq txresp rxreq txresp rxreq txresp expect_close accept rxreq txresp } -start varnish v1 -arg "-p debug=+vclrel -p workspace_client=1m" -vcl+backend { import debug; import std; sub vcl_init { new objc = debug.obj(); new objb = debug.obj(); } sub log_obj { std.log("objc " + objc.test_priv_task()); std.log("objb " + objb.test_priv_task()); } sub vcl_init { debug.test_priv_task("something"); debug.test_priv_task("to remember"); std.log("func " + debug.test_priv_task()); objc.test_priv_task("initX"); objb.test_priv_task("initY"); call log_obj; } sub vcl_recv { if (req.url == "/perf") { return (synth(200)); } debug.test_priv_task(req.url); set req.http.x0 = debug.test_priv_task(); debug.test_priv_task("bazz"); call log_obj; objc.test_priv_task("c" + req.xid); if (req.url == "/pipe") { return (pipe); } } sub vcl_pipe { call log_obj; objc.test_priv_task("p" + req.xid); debug.test_priv_task(req.url); set req.http.x0 = debug.test_priv_task(); debug.test_priv_task("bazz"); } sub vcl_synth { call log_obj; objc.test_priv_task("s" + req.xid); std.log("discard 1000 " + debug.priv_perf(1000)); std.log("perf 1 " + debug.priv_perf(1)); std.log("perf 10 " + debug.priv_perf(10)); std.log("perf 100 " + debug.priv_perf(100)); // std.log("perf 1000 " + debug.priv_perf(1000)); return (deliver); } sub vcl_deliver { call log_obj; objc.test_priv_task("d" + req.xid); set resp.http.x0 = req.http.x0; set resp.http.x1 = debug.test_priv_task(); set resp.http.objc = objc.test_priv_task(); } sub vcl_backend_fetch { call log_obj; objb.test_priv_task("f" + bereq.xid); debug.test_priv_task("b"); set bereq.http.bx0 = debug.test_priv_task(bereq.url); } sub vcl_backend_response { call log_obj; objb.test_priv_task("r" + bereq.xid); set beresp.http.bx0 = bereq.http.bx0; set beresp.http.bx1 = debug.test_priv_task(""); set beresp.http.objb = objb.test_priv_task(""); } sub vcl_fini { debug.test_priv_task("cleaning"); debug.test_priv_task("up"); std.log("func " + debug.test_priv_task()); std.log("obj " + objc.test_priv_task()); } } -start logexpect l0 -v v1 -g raw -d 1 -m -q "vxid == 0" { expect 0 0 CLI {^Rd vcl.load} expect 0 = Debug {^test_priv_task.*new.$} expect 0 = Debug {^test_priv_task.*update.$} expect 0 = Debug {^test_priv_task.*exists.$} expect 0 = VCL_Log {^func something to remember} expect 0 = Debug {^objc.priv_task.. = .*"initX". .new.} expect 0 = Debug {^objb.priv_task.. = .*"initY". .new.} expect 0 = Debug {^objc.priv_task.. = .*"initX"} expect 0 = VCL_Log {^objc initX} expect 0 = Debug {^objb.priv_task.. = .*"initY"} expect 0 = VCL_Log {^objb initY} expect ? = Debug {^priv_task_fini} expect ? = Debug {^obj_priv_task_fini} expect ? 
= Debug {^obj_priv_task_fini} expect 0 = Debug {^vcl1: VCL_EVENT_WARM} # need an anchor for the ? expects to begin expect * = CLI {^Rd debug.xid} expect 0 = CLI {^Wr 200} expect 0 = CLI {^Rd debug.listen_address} expect 0 = CLI {^Wr 200} # ... # 1006 pipe # vcl_fini expect * = Debug {^vcl1: VCL_EVENT_COLD} expect 0 = CLI {^Wr 200 0 } expect 0 = CLI {^Rd vcl.discard vcl1} expect ? = Debug {^test_priv_task.*new.$} expect ? = Debug {^test_priv_task.*update.$} expect ? = Debug {^test_priv_task.*exists.$} expect ? = Debug {^objc.priv_task.. = NULL} expect ? = Debug {^priv_task_fini} expect ? = VCL_Log {^func cleaning up} expect ? = VCL_Log {^obj } expect 0 = CLI {^Wr 200 0 } } -start logexpect l1001 -v v1 -g vxid -q "vxid == 1001" { expect * 1001 VCL_call {^RECV} expect 0 = Debug {^test_priv_task.*new.$} expect 0 = Debug {^test_priv_task.*exists.$} expect 0 = ReqHeader {^x0: /foobar} expect 0 = Debug {^test_priv_task.*update.$} expect 0 = Debug {^objc.priv_task.. = NULL} expect 0 = VCL_Log {^objc } expect 0 = Debug {^objb.priv_task.. = NULL} expect 0 = VCL_Log {^objb } expect 0 = Debug {^objc.priv_task.. =.*"c1001". .new.} expect * = VCL_call {^DELIVER} expect 0 = Debug {^objc.priv_task.. =.*"c1001".} expect 0 = VCL_Log {^objc c1001} expect 0 = Debug {^objb.priv_task.. = NULL} expect 0 = VCL_Log {^objb } expect 0 = Debug {^objc.priv_task.. =.*"d1001". .update.} expect 0 = RespHeader {^x0: /foobar} expect 0 = Debug {^test_priv_task.*exists.$} expect 0 = RespHeader {^x1: /foobar bazz} expect 0 = Debug {^objc.priv_task.. =.*"d1001".} expect 0 = RespHeader {^objc: d1001} expect 0 = VCL_return {^deliver} expect 9 = Timestamp {^Resp} expect ? = Debug {^priv_task_fini} expect ? = Debug {^obj_priv_task_fini} } -start logexpect l1002 -v v1 -g vxid -q "vxid == 1002" { expect * 1002 VCL_call {^BACKEND_FETCH} expect 0 = Debug {^objc.priv_task.. = NULL} expect 0 = VCL_Log {^objc } expect 0 = Debug {^objb.priv_task.. = NULL} expect 0 = VCL_Log {^objb } expect 0 = Debug {^objb.priv_task.. =.*"f1002". .new.} expect 0 = Debug {^test_priv_task.*new.$} expect 0 = Debug {^test_priv_task.*update.$} expect 0 = BereqHeader {^bx0: b /foobar} expect 0 = VCL_return {^fetch} expect * = VCL_call {^BACKEND_RESPONSE} expect 0 = Debug {^objc.priv_task.. = NULL} expect 0 = VCL_Log {^objc } expect 0 = Debug {^objb.priv_task.. =.*"f1002".} expect 0 = VCL_Log {^objb f1002} expect 0 = Debug {^objb.priv_task.. =.*"r1002". .update.} expect 0 = BerespHeader {^bx0: b /foobar} expect 0 = Debug {^test_priv_task.*exists.$} expect 0 = BerespHeader {^bx1: b /foobar} expect 0 = Debug {^objb.priv_task.. =.*"r1002".} expect 0 = BerespHeader {^objb: r1002} expect 0 = VCL_return {^deliver} expect 9 = Timestamp {^BerespBody} expect ? = Debug {^priv_task_fini} expect ? = Debug {^obj_priv_task_fini} } -start logexpect l1006 -v v1 -g vxid -q "vxid == 1006" { expect * 1006 VCL_call {^RECV} expect 0 = Debug {^test_priv_task.*new.$} expect 0 = Debug {^test_priv_task.*exists.$} expect 0 = ReqHeader {^x0: /pipe} expect 0 = Debug {^test_priv_task.*update.$} expect 0 = Debug {^objc.priv_task.. = NULL} expect 0 = VCL_Log {^objc } expect 0 = Debug {^objb.priv_task.. = NULL} expect 0 = VCL_Log {^objb } expect 0 = Debug {^objc.priv_task.. =.*"c1006". .new.} expect 0 = VCL_return {^pipe} expect 0 = VCL_call {^HASH} expect 0 = VCL_return {^lookup} expect 0 = Link {^bereq 1007 pipe} expect 0 = VCL_call {^PIPE} expect 0 = Debug {^objc.priv_task.. =.*"c1006".} expect 0 = VCL_Log {^objc c1006} expect 0 = Debug {^objb.priv_task.. 
= NULL} expect 0 = VCL_Log {^objb } expect 0 = Debug {^objc.priv_task.. =.*"p1006". .update.} expect 0 = Debug {^test_priv_task.*update.$} expect 0 = Debug {^test_priv_task.*exists.$} expect 0 = ReqUnset {^x0: /pipe} expect 0 = ReqHeader {^x0: /pipe bazz /pipe} expect 0 = Debug {^test_priv_task.*update.$} expect 0 = VCL_return {^pipe} expect 4 = PipeAcct expect ? = Debug {^priv_task_fini} expect ? = Debug {^obj_priv_task_fini} } -start client c1 { txreq -url /foobar rxresp expect resp.http.x0 == /foobar expect resp.http.x1 == "/foobar bazz" expect resp.http.objc == "d1001" expect resp.http.bx0 == "b /foobar" expect resp.http.bx1 == "b /foobar" expect resp.http.objb == "r1002" txreq -url /snafu rxresp expect resp.http.x0 == /snafu expect resp.http.x1 == "/snafu bazz" expect resp.http.objc == "d1003" expect resp.http.bx0 == "b /snafu" expect resp.http.bx1 == "b /snafu" expect resp.http.objb == "r1004" txreq -url /perf rxresp txreq -url /pipe rxresp } -run shell "echo 'vcl 4.0; backend foo { .host = \"${s1_addr}\"; .port = \"${s1_port}\"; }' > ${tmpdir}/_b00014.vcl" varnish v1 -cliok "vcl.load foo ${tmpdir}/_b00014.vcl" \ -cliok "vcl.use foo" \ -cliok "vcl.list" \ -cliok "vcl.discard vcl1" \ -cliok "vcl.list" client c1 { txreq -url /foo rxresp } -run varnish v1 -cliok "vcl.list" logexpect l0 -wait logexpect l1001 -wait logexpect l1002 -wait logexpect l1006 -wait varnish-7.5.0/bin/varnishtest/tests/v00042.vtc000066400000000000000000000025521457605730600210530ustar00rootroot00000000000000varnishtest "Make sure we also get separate PRIV_TASK contexts in ESI subrequests." server s1 { rxreq expect req.url == "/a" expect req.http.x0 == "/a0" expect req.http.x1 == "/a0" txresp -body { } rxreq expect req.url == "/foo" expect req.http.x0 == "/foo1" expect req.http.x1 == "/foo1" txresp -body { } rxreq expect req.url == "/bar" expect req.http.x0 == "/bar2" expect req.http.x1 == "/bar2" txresp rxreq expect req.url == "/b" expect req.http.x0 == "/b0" expect req.http.x1 == "/b0" txresp } -start # give enough stack to 32bit systems varnish v1 -cliok "param.set thread_pool_stack 80k" varnish v1 -vcl+backend { import debug; sub vcl_init { new o = debug.obj(); } sub vcl_recv { set req.http.x0 = debug.test_priv_task(req.url + req.esi_level); o.test_priv_task(req.url + req.esi_level); } sub vcl_miss { set req.http.x1 = debug.test_priv_task(""); } sub vcl_backend_response { set beresp.do_esi = true; } sub vcl_deliver { set resp.http.x1 = debug.test_priv_task(""); set resp.http.o1 = o.test_priv_task(""); } } -start client c1 { txreq -url /a rxresp expect resp.http.x1 == "/a0" expect resp.http.o1 == "/a0" txreq -url /b rxresp expect resp.http.x1 == "/b0" expect resp.http.o1 == "/b0" } -run varnish v1 -expect client_req == 2 varnish-7.5.0/bin/varnishtest/tests/v00043.vtc000066400000000000000000000046711457605730600210600ustar00rootroot00000000000000varnishtest "Test PRIV_TOP" # same as v00042.vtc, but the priv remains the same across esi includes server s1 { rxreq expect req.url == "/a" expect req.http.x0 == "/a0" expect req.http.x1 == "/a0" expect req.http.o1 == "" expect req.http.o2 == "/a0" txresp -body { } rxreq expect req.url == "/foo1" expect req.http.x0 == "/a0" expect req.http.x1 == "/a0" expect req.http.o1 == "/foo11" expect req.http.o2 == "/a0" txresp -body { } rxreq expect req.url == "/bar" expect req.http.x0 == "/a0" expect req.http.x1 == "/a0" expect req.http.o1 == "/foo11" expect req.http.o2 == "/a0" txresp rxreq expect req.url == "/foo2" expect req.http.x0 == "/a0" expect req.http.x1 == "/a0" expect 
req.http.o1 == "/foo11" expect req.http.o2 == "/a0" txresp -body { } rxreq expect req.url == "/b" expect req.http.x0 == "/b0" expect req.http.x1 == "/b0" expect req.http.o1 == "" expect req.http.o2 == "/b0" txresp } -start varnish v1 -errvcl "Not available in subroutine 'vcl_backend_fetch'" { import debug; backend be none; sub vcl_backend_fetch { debug.test_priv_top("only works on client side"); } } varnish v1 -errvcl "Not available in subroutine 'vcl_init'" { import debug; backend be none; sub vcl_init { debug.test_priv_top("only works on client side"); } } # give enough stack to 32bit systems varnish v1 -cliok "param.set thread_pool_stack 80k" varnish v1 -cliok "param.set debug +syncvsl" varnish v1 -vcl+backend { import debug; sub vcl_init { new o = debug.obj(); new o2 = debug.obj(); } sub vcl_recv { set req.http.x0 = debug.test_priv_top(req.url + req.esi_level); # coverage set req.http.x0 = debug.test_priv_top(req.url + req.esi_level); if (req.url == "/foo1") { o.test_priv_top(req.url + req.esi_level); } if (req.esi_level == 0) { o2.test_priv_top(req.url + req.esi_level); } } sub vcl_miss { set req.http.x1 = debug.test_priv_top(""); set req.http.o1 = o.test_priv_top(""); set req.http.o2 = o2.test_priv_top(""); } sub vcl_backend_fetch { } sub vcl_backend_response { set beresp.do_esi = true; } sub vcl_deliver { set resp.http.x1 = debug.test_priv_top(""); } } -start client c1 { txreq -url /a rxresp expect resp.http.x1 == "/a0" txreq -url /b rxresp expect resp.http.x1 == "/b0" } -run varnish v1 -expect client_req == 2 varnish-7.5.0/bin/varnishtest/tests/v00045.vtc000066400000000000000000000014741457605730600210600ustar00rootroot00000000000000varnishtest "Hold a reference to a VCL after a COLD event" server s1 -start # Load and use a VCL that will hold a reference varnish v1 -vcl+backend { import debug; sub vcl_init { debug.vcl_discard_delay(3s); } } -start # Load and use a new VCL, freeze the first varnish v1 -vcl+backend {} varnish v1 -cliok "vcl.state vcl1 cold" # We should now see it as cooling delay 1 varnish v1 -cliexpect "available cold cooling 0 vcl1" vcl.list varnish v1 -clijson "vcl.list -j" # It can't be warmed up yet delay 1 varnish v1 -cliexpect "vmod-debug ref on vcl1" "vcl.state vcl1 warm" # It will eventually cool down delay 2 varnish v1 -cliexpect "available cold cold 0 vcl1" vcl.list varnish v1 -clijson "vcl.list -j" # At this point it becomes possible to warm up again varnish v1 -cliok "vcl.state vcl1 warm" varnish-7.5.0/bin/varnishtest/tests/v00046.vtc000066400000000000000000000033721457605730600210600ustar00rootroot00000000000000varnishtest "Test relative to vcl_path, dot-include and absolute includes" # relative plain shell "true > ${tmpdir}/_start.vcl" varnish v1 -arg "-p vcl_path=${tmpdir}" -vcl { backend b { .host = "${localhost}"; } include "_start.vcl" ; } # absolute include varnish v1 -vcl { backend b { .host = "${localhost}"; } include "${tmpdir}/_start.vcl" ; } # absolute -> relative include shell "mkdir -p ${tmpdir}/1/2/3" shell "true > ${tmpdir}/1/2/b.vcl" shell "echo 'include \"./2/b.vcl\";' > ${tmpdir}/1/a.vcl" varnish v1 -vcl { backend b { .host = "${localhost}"; } include "${tmpdir}/1/a.vcl" ; } # same but relative to vcl_path shell "echo 'include \"1/2/b.vcl\";' > ${tmpdir}/1/ab.vcl" varnish v1 -vcl { backend b { .host = "${localhost}"; } include "1/ab.vcl" ; } # dot-relative -> relative varnish v1 -vcl { backend b { .host = "${localhost}"; } include "1/a.vcl" ; } # relative -> relative -> relative shell "echo 'include \"./3/c.vcl\";' > 
${tmpdir}/1/2/b.vcl" shell "true > ${tmpdir}/1/2/3/c.vcl" varnish v1 -vcl { backend b { .host = "${localhost}"; } include "1/a.vcl" ; } # relative -> absolute shell "echo 'include \"${tmpdir}/1/2/3/c.vcl\";' > ${tmpdir}/1/aa.vcl" varnish v1 -vcl { backend b { .host = "${localhost}"; } include "1/aa.vcl" ; } # relative -> absolute -> relative shell "echo 'include \"${tmpdir}/1/2/b.vcl\";' > ${tmpdir}/1/aaa.vcl" varnish v1 -vcl { backend b { .host = "${localhost}"; } include "1/aaa.vcl" ; } # includes and parses out shell "echo 'zool' > ${tmpdir}/1/2/3/c.vcl" varnish v1 -errvcl {Found: 'zool' at} { backend b { .host = "${localhost}"; } include "1/a.vcl"; } shell "rm -f ${tmpdir}/a" shell "rm -f ${tmpdir}/_start.vcl" varnish v1 -errvcl {needs absolute filename of including file.} { include "./foobar"; } varnish-7.5.0/bin/varnishtest/tests/v00047.vtc000066400000000000000000000016451457605730600210620ustar00rootroot00000000000000varnishtest "Changing Last Modified in vcl_deliver" server s1 { rxreq txresp -hdr "Last-Modified: Wed, 27 Apr 2016 14:00:00 GMT" } -start varnish v1 -vcl+backend { sub vcl_deliver { if (req.http.deliver == "modify") { set resp.http.Last-Modified = "Wed, 27 Apr 2016 16:00:00 GMT"; } } } -start client c1 { txreq rxresp expect resp.status == 200 txreq -hdr "If-Modified-Since: Wed, 27 Apr 2016 14:00:00 GMT" rxresp expect resp.status == 304 txreq -hdr "If-Modified-Since: Wed, 27 Apr 2016 12:00:00 GMT" rxresp expect resp.status == 200 txreq -hdr "If-Modified-Since: Wed, 27 Apr 2016 14:00:00 GMT" -hdr "deliver: modify" rxresp expect resp.status == 200 txreq -hdr "If-Modified-Since: Wed, 27 Apr 2016 16:00:00 GMT" -hdr "deliver: modify" rxresp expect resp.status == 304 txreq -hdr "If-Modified-Since: Wed, 27 Apr 2016 18:00:00 GMT" -hdr "deliver: modify" rxresp expect resp.status == 304 } -run varnish-7.5.0/bin/varnishtest/tests/v00049.vtc000066400000000000000000000013701457605730600210570ustar00rootroot00000000000000varnishtest "VCL syntax numbers" varnish v1 -vcl {backend b1 None;} -start varnish v1 -syntax 3.9 -errvcl "VCL version 3.9 not supported." { backend b1 None; } varnish v1 -syntax 4.0 -errvcl "silly buggers" { vcl 4.01; backend b1 None; } varnish v1 -syntax 4.0 -errvcl "VCL version 9.9 not supported" { vcl 9.9; backend b1 None; } varnish v1 -cliexpect {Don't play silly buggers with VCL version numbers} \ {vcl.inline t0 "vcl 4.00 ; backend b { .host = \"localhost\";} "} varnish v1 -cliexpect {Don't play silly buggers with VCL version numbers} \ {vcl.inline t1 "vcl 04.0 ; backend b { .host = \"localhost\";} "} varnish v1 -cliexpect {Expected 'vcl N.N;' found no semi-colon} \ {vcl.inline t2 "vcl 4.0 backend b { .host = \"localhost\";} "} varnish-7.5.0/bin/varnishtest/tests/v00050.vtc000066400000000000000000000073761457605730600210630ustar00rootroot00000000000000varnishtest "VCL/VRT: status >1000 handling" server s1 { rxreq txresp -status 200 rxreq txresp -status 301 rxreq txresp -status 998 } server s1 -start # - internal status code visible ? # - reason phrase set for status %1000 ? 
# - status < 1000 stored/delivered varnish v1 -vcl+backend { import std; probe sick_probe { .initial = 0; } backend sick { .host = "${bad_backend}"; .probe = sick_probe; } sub vcl_backend_fetch { if (bereq.retries == 0) { set bereq.backend = s1; } } sub vcl_backend_response { set beresp.status = beresp.status + 1000; std.log(beresp.status); if (beresp.status == 1200) { if (beresp.reason != "OK") { std.log("!OK"); return (abandon); } return (deliver); } else if (beresp.status == 1301) { if (beresp.reason != "Moved Permanently") { std.log("!Moved Permanently"); return (abandon); } return (deliver); } else { if (beresp.status != 1998) { std.log("!1998"); return (abandon); } set beresp.status = 1999; set bereq.backend = sick; # to get to vcl_backend_error return (retry); } # UNREACHED std.log("impossible"); return (abandon); } sub vcl_backend_error { # getting here via FetchError "no backend connection" if (beresp.status != 503) { return (abandon); } set beresp.status = 1200; set beresp.reason = "OK from v_b_e"; if (beresp.status != 1200) { return (abandon); } set beresp.ttl = 1m; } sub vcl_hash { if (req.url == "/deliver") { hash_data("/a"); } else { hash_data(req.url); } return(lookup); } sub vcl_deliver { if (req.url == "/deliver") { set resp.status = 40404; if (resp.status != 40404) { return (synth(400)); } } if (req.http.huge == "yes") { set resp.status = 65536; } if (req.http.small == "yes") { set resp.status = 99; } if (req.http.negative == "yes") { set resp.status = -200; } } sub vcl_recv { if (req.url == "/synth") { return (synth(22301, "Hamburg")); } } sub vcl_synth { if (resp.reason == "VCL failed") { return (deliver); } std.log("synth " + resp.status + " " + resp.reason); if (resp.status != 22301) { set resp.status = 501; return (deliver); } if (resp.reason != "Hamburg") { set resp.status = 502; return (deliver); } set resp.status = 22302; set resp.reason = "Wrong Postcode"; set resp.http.residual = resp.status % 10000; return (deliver); } } -start client c1 { txreq -url "/a" rxresp expect resp.status == 200 txreq -url "/a" rxresp expect resp.status == 200 txreq -url "/b" rxresp expect resp.status == 301 txreq -url "/b" rxresp expect resp.status == 301 txreq -url "/c" rxresp expect resp.status == 200 expect resp.reason == "OK from v_b_e" txreq -url "/c" rxresp expect resp.status == 200 expect resp.reason == "OK from v_b_e" txreq -url "/deliver" rxresp expect resp.status == 404 expect resp.reason == "Not Found" txreq -url "/synth" rxresp expect resp.status == 302 expect resp.http.residual == 2302 expect resp.reason == "Wrong Postcode" } -run logexpect l1 -v v1 { expect * * VCL_Error "illegal resp.status .99. ...0##." expect * * VCL_Error "resp.status .65536. > 65535" expect * * VCL_Error "resp.status .-200. 
is negative" } -start client c1 { txreq -url "/a" -hdr "Small: yes" rxresp expect resp.status == 503 expect resp.reason == "VCL failed" expect_close } -run client c1 { txreq -url "/a" -hdr "Huge: yes" rxresp expect resp.status == 503 expect resp.reason == "VCL failed" } -run client c1 { txreq -url "/a" -hdr "Negative: yes" rxresp expect resp.status == 503 expect resp.reason == "VCL failed" } -run logexpect l1 -wait varnish-7.5.0/bin/varnishtest/tests/v00051.vtc000066400000000000000000000307171457605730600210570ustar00rootroot00000000000000varnishtest "Test VCL failures" server s1 { rxreq expect req.url == /hit txresp } -start varnish v1 -vcl+backend { import debug; sub vcl_recv { # vxid 1009, 1018, 1020, 1022 if (req.http.foo ~ "^(deliver|hit|miss|hash)") { return(hash); } # vxid 1016 if (req.http.foo == "purge") { return(purge); } # vxid 1014 if (req.http.foo == "pass") { return(pass); } # vxid 1012 if (req.http.foo == "pipe") { return(pipe); } # vxid 1007 if (req.http.foo == "synth") { return(synth(748)); } # vxid 1001, 1003 if (req.restarts == 0) { debug.fail(); set req.http.not = "Should not happen"; } } sub vcl_hash { # vxid 1009 if (req.http.foo == "hash") { debug.fail(); set req.http.not = "Should not happen"; } # vxid 1018, 1020, 1022 default lookup } sub vcl_hit { # vxid 1020 if (req.http.foo == "hit") { debug.fail(); set req.http.not = "Should not happen"; } } sub vcl_miss { # vxid 1018 if (req.http.foo == "miss") { debug.fail(); set req.http.not = "Should not happen"; } } sub vcl_pass { # vxid 1014 if (req.http.foo == "pass") { debug.fail(); set req.http.not = "Should not happen"; } } sub vcl_pipe { # vxid 1012 if (req.http.foo == "pipe") { debug.fail(); set req.http.not = "Should not happen"; } } sub vcl_purge { # vxid 1016 if (req.http.foo == "purge") { debug.fail(); set req.http.not = "Should not happen"; } } sub vcl_deliver { # vxid 1022 if (req.http.foo == "deliver") { debug.fail(); set req.http.not = "Should not happen"; } } sub vcl_synth { # vxid 1007 if (resp.status == 748) { debug.fail(); set req.http.not = "Should not happen"; } # vxid 1001, 1003 if (req.restarts == 0 && req.http.foo == "restart") { return (restart); } } } -start ####################################################################### # Fail in vcl_recv logexpect l1001 -v v1 -g raw { expect * 1001 VCL_call "RECV" expect 0 1001 VCL_Error "Forced failure" expect 0 1001 VCL_return "fail" } -start client c1 { txreq rxresp expect resp.status == 503 expect resp.reason == "VCL failed" } -run varnish v1 -expect vcl_fail == 1 varnish v1 -expect sc_vcl_failure == 1 ####################################################################### # Fail in vcl_recv, vcl_synth restarts successfully logexpect l1003 -v v1 -g raw { expect * 1003 VCL_call "RECV" expect 0 1003 VCL_Error "Forced failure" expect 0 1003 VCL_return "fail" expect * 1003 VCL_call "SYNTH" expect 0 1003 VCL_return "restart" } -start client c1 { txreq -url /hit -hdr "foo: restart" rxresp expect resp.status == 200 expect resp.reason == "OK" } -run varnish v1 -expect vcl_fail == 2 # NB: This is correct, req->doclose = SC_VCL_FAILURE latches varnish v1 -expect sc_vcl_failure == 2 ####################################################################### # Fail in vcl_synth logexpect l1007 -v v1 -g raw { expect * 1007 VCL_call "SYNTH" expect * 1007 VCL_Error "Forced failure" expect 0 1007 VCL_return "fail" } -start client c1 { txreq -hdr "foo: synth" rxresp expect resp.status == 500 expect_close } -run varnish v1 -expect vcl_fail == 3 varnish v1 -expect 
sc_vcl_failure == 3 ####################################################################### # Fail in vcl_hash logexpect l1009 -v v1 -g raw { expect * 1009 VCL_call "HASH" expect 0 1009 VCL_Error "Forced failure" expect 0 1009 VCL_return "fail" } -start client c1 { txreq -hdr "foo: hash" rxresp expect resp.status == 503 expect resp.reason == "VCL failed" } -run varnish v1 -expect vcl_fail == 4 varnish v1 -expect sc_vcl_failure == 4 ####################################################################### # Fail in vcl_pipe logexpect l1012 -v v1 -g vxid -q "vxid == 1012" { expect 0 1012 Begin {^bereq 1011 pipe} expect 0 = Timestamp {^Start:} expect 0 = BereqMethod {^GET} expect 0 = BereqURL {^/} expect 0 = BereqProtocol {^HTTP/1.1} expect 0 = BereqHeader {^foo: pipe} expect 0 = BereqHeader {^Host: } expect 0 = BereqHeader {^User-Agent: c1} expect 0 = BereqHeader {^X-Forwarded-For: } expect 0 = BereqHeader {^Via: } expect 0 = BereqHeader {^X-Varnish: 1011} expect 0 = BereqHeader {^Connection: close} expect 0 = BereqAcct {^0 0 0 0 0 0} expect 0 = End } -start logexpect l1011 -v v1 -g vxid -q "vxid == 1011" { expect 0 1011 Begin {^req 1010 rxreq} expect 0 = Timestamp {^Start: } expect 0 = Timestamp {^Req: } expect 0 = VCL_use {^vcl1} expect 0 = ReqStart expect 0 = ReqMethod {^GET} expect 0 = ReqURL {^/} expect 0 = ReqProtocol {^HTTP/1.1} expect 0 = ReqHeader {^foo: pipe} expect 0 = ReqHeader {^Host: } expect 0 = reqHeader {^User-Agent: c1} expect 0 = ReqHeader {^X-Forwarded-For: } expect 0 = ReqHeader {^Via: } expect 0 = VCL_call {^RECV} expect 0 = VCL_return {^pipe} expect 0 = VCL_call {^HASH} expect 0 = VCL_return {^lookup} expect 0 = Link {^bereq 1012 pipe} expect 0 = VCL_call {^PIPE} expect 0 = VCL_Error {^Forced failure} expect 0 = VCL_return {^fail} expect 0 = RespProtocol {^HTTP/1.1} expect 0 = RespStatus {^503} expect 0 = RespReason {^VCL failed} } -start client c1 { txreq -hdr "foo: pipe" rxresp expect resp.status == 503 expect resp.reason == "VCL failed" } -run varnish v1 -expect vcl_fail == 5 varnish v1 -expect sc_vcl_failure == 5 ####################################################################### # Fail in vcl_pass, no handling in vcl_synth logexpect l1014 -v v1 -g raw { expect * 1014 VCL_call "PASS" expect 0 1014 VCL_Error "Forced failure" expect 0 1014 VCL_return "fail" } -start client c1 { txreq -hdr "foo: pass" rxresp expect resp.status == 503 expect resp.reason == "VCL failed" } -run varnish v1 -expect vcl_fail == 6 varnish v1 -expect sc_vcl_failure == 6 ####################################################################### # Fail in vcl_purge logexpect l1016 -v v1 -g raw { expect * 1016 VCL_call "PURGE" expect 0 1016 VCL_Error "Forced failure" expect 0 1016 VCL_return "fail" } -start client c1 { txreq -hdr "foo: purge" rxresp expect resp.status == 503 expect resp.reason == "VCL failed" } -run varnish v1 -expect vcl_fail == 7 varnish v1 -expect sc_vcl_failure == 7 ####################################################################### # Fail in vcl_miss logexpect l1018 -v v1 -g raw { expect * 1018 VCL_call "MISS" expect 0 1018 VCL_Error "Forced failure" expect 0 1018 VCL_return "fail" } -start client c1 { txreq -url /miss -hdr "foo: miss" rxresp expect resp.status == 503 expect resp.reason == "VCL failed" } -run varnish v1 -expect vcl_fail == 8 varnish v1 -expect sc_vcl_failure == 8 ####################################################################### # Fail in vcl_hit logexpect l1020 -v v1 -g raw { expect * 1020 VCL_call "HIT" expect 0 1020 VCL_Error "Forced failure" 
expect 0 1020 VCL_return "fail" } -start client c1 { txreq -url /hit -hdr "foo: hit" rxresp expect resp.status == 503 expect resp.reason == "VCL failed" } -run varnish v1 -expect vcl_fail == 9 varnish v1 -expect sc_vcl_failure == 9 ####################################################################### # Fail in vcl_deliver logexpect l1022 -v v1 -g raw { expect * 1022 VCL_call "DELIVER" expect 0 1022 VCL_Error "Forced failure" expect 0 1022 VCL_return "fail" } -start client c1 { txreq -url /hit -hdr "foo: deliver" rxresp expect resp.status == 503 expect resp.reason == "VCL failed" } -run varnish v1 -expect vcl_fail == 10 varnish v1 -expect sc_vcl_failure == 10 ####################################################################### #wait for all client side logexpects ####################################################################### logexpect l1001 -wait logexpect l1003 -wait logexpect l1007 -wait logexpect l1009 -wait logexpect l1012 -wait logexpect l1011 -wait logexpect l1014 -wait logexpect l1016 -wait logexpect l1018 -wait logexpect l1020 -wait logexpect l1022 -wait ####################################################################### # Fail in vcl_backend_fetch varnish v1 -vcl+backend { import debug; sub vcl_backend_fetch { debug.fail(); set bereq.http.not = "Should not happen"; } } logexpect l1 -v v1 -g raw { expect * 1025 VCL_call "BACKEND_FETCH" expect 0 1025 VCL_Error "Forced failure" expect 0 1025 VCL_return "fail" } -start client c1 { txreq -url /backend_fetch rxresp expect resp.status == 503 expect resp.reason == "Service Unavailable" } -run varnish v1 -expect vcl_fail == 11 varnish v1 -expect sc_vcl_failure == 10 logexpect l1 -wait ####################################################################### # Fail in vcl_backend_error server s1 { rxreq expect req.url == /backend_error } -start varnish v1 -vcl+backend { import debug; sub vcl_backend_error { debug.fail(); set bereq.http.not = "Should not happen"; } } logexpect l1 -v v1 -g raw { expect * 1028 VCL_call "BACKEND_ERROR" expect 0 1028 VCL_Error "Forced failure" expect 0 1028 VCL_return "fail" } -start client c1 { txreq -url /backend_error rxresp expect resp.status == 503 expect resp.reason == "Service Unavailable" } -run varnish v1 -expect vcl_fail == 12 varnish v1 -expect sc_vcl_failure == 10 logexpect l1 -wait ####################################################################### # Fail in vcl_backend_response server s1 { rxreq expect req.url == /backend_response txresp } -start varnish v1 -vcl+backend { import debug; sub vcl_backend_response { debug.fail(); set bereq.http.not = "Should not happen"; } } logexpect l1 -v v1 -g raw { expect * 1031 VCL_call "BACKEND_RESPONSE" expect 0 1031 VCL_Error "Forced failure" expect 0 1031 VCL_return "fail" } -start client c1 { txreq -url /backend_response rxresp expect resp.status == 503 expect resp.reason == "Service Unavailable" } -run varnish v1 -expect vcl_fail == 13 varnish v1 -expect sc_vcl_failure == 10 logexpect l1 -wait ####################################################################### # Fail in vmod call used in an if test varnish v1 -vcl+backend { import debug; sub vcl_recv { if (debug.fail2()) { return (hash); } } } logexpect l1 -v v1 -g raw { expect * 1033 VCL_call "RECV" expect 0 1033 VCL_Error "Forced failure" expect 0 1033 VCL_return "fail" } -start client c1 { txreq rxresp expect resp.status == 503 expect resp.reason == "VCL failed" } -run varnish v1 -expect vcl_fail == 14 varnish v1 -expect sc_vcl_failure == 11 logexpect l1 -wait 
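# The remaining cases exercise VRT_fail() outside of request handling:
# a failure in vcl_init makes the VCL load fail, while a failure in
# vcl_fini is logged but has no effect.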
####################################################################### # Fail in vcl_init with direct VRT_fail() varnish v1 -errvcl {Forced failure} { import debug; backend proforma none; sub vcl_init { debug.fail(); } } ####################################################################### # Fail in vcl_init via fini callback varnish v1 -errvcl {thou shalt not fini} { import debug; backend proforma none; sub vcl_init { debug.fail_task_fini(); } } ####################################################################### # Fail in vcl_fini with direct VRT_fail() varnish v1 -vcl { import debug; backend proforma none; sub vcl_fini { debug.fail(); } } varnish v1 -vcl+backend { } logexpect l1 -v v1 -g raw { expect * 0 CLI "^Rd vcl.state vcl8 0cold" expect 3 0 VCL_Error "^Forced failure" expect 0 0 VCL_Error {^\QVRT_fail() from vcl_fini{} has no effect\E$} } -start varnish v1 -cliok "vcl.discard vcl8" logexpect l1 -wait ####################################################################### # Fail in vcl_fini via fini callback - ignored but logged as VMOD BUG varnish v1 -vcl { import debug; backend proforma none; sub vcl_fini { debug.fail_task_fini(); } } varnish v1 -vcl+backend { } logexpect l1 -v v1 -g raw { expect * 0 CLI "^Rd vcl.state vcl10 0cold" expect 3 0 VCL_Error "^thou shalt not fini" expect 0 0 VCL_Error {^\QVRT_fail() from vcl_fini{} has no effect\E$} } -start varnish v1 -cliok "vcl.discard vcl10" logexpect l1 -wait ####################################################################### # Fail vcl - debug.client_ip function restricted to client and backend varnish v1 -errvcl {Not available in subroutine 'vcl_init'} { import debug; sub vcl_init { debug.client_ip(); } } ####################################################################### # Fail vcl - obj.test_priv_top method restricted to client varnish v1 -errvcl {Not available in subroutine 'vcl_backend_response'} { import debug; sub vcl_init { new test_obj = debug.obj("bar"); } sub vcl_backend_response { set beresp.http.foo = test_obj.test_priv_top(); } } varnish-7.5.0/bin/varnishtest/tests/v00052.vtc000066400000000000000000000020211457605730600210430ustar00rootroot00000000000000varnishtest "obj.storage coverage" server s1 { rxreq txresp rxreq txresp } -start varnish v1 -syntax 4.0 -vcl+backend { sub vcl_hit { set req.http.Hit-Storage = obj.storage; } sub vcl_backend_response { set beresp.http.Default-Storage = beresp.storage; if (bereq.method == "GET") { set beresp.storage = storage.Transient; } } sub vcl_deliver { set resp.http.Deliver-Storage = obj.storage; if (req.http.Hit-Storage) { set resp.http.Hit-Storage = req.http.Hit-Storage; } } } -start client c1 { txreq rxresp expect resp.http.Default-Storage == storage.s0 expect resp.http.Deliver-Storage == storage.Transient expect resp.http.Hit-Storage == txreq rxresp expect resp.http.Default-Storage == storage.s0 expect resp.http.Deliver-Storage == storage.Transient expect resp.http.Hit-Storage == storage.Transient txreq -req POST rxresp expect resp.http.Default-Storage == storage.Transient expect resp.http.Deliver-Storage == storage.Transient expect resp.http.Hit-Storage == } -run varnish-7.5.0/bin/varnishtest/tests/v00054.vtc000066400000000000000000000023131457605730600210510ustar00rootroot00000000000000varnishtest "client.identity is 0.0.0.0 if unset & client addr is UDS" # varnishtest "vtc remote.ip, remote.port and remote.path" (a00020) server s1 { rxreq expect remote.ip == "${localhost}" expect remote.port > 0 expect remote.path == txresp } -start varnish v1 -vcl+backend 
{} -start client c1 { txreq rxresp expect remote.ip == "${v1_addr}" expect remote.port == "${v1_port}" expect remote.path == } -run varnish v1 -stop server s1 -wait server s1 -start varnish v2 -arg "-a ${tmpdir}/v2.sock" -vcl+backend {} -start client c1 -connect "${tmpdir}/v2.sock" { txreq rxresp expect remote.ip == "0.0.0.0" expect remote.port == 0 expect remote.path == "${tmpdir}/v2.sock" } -run varnish v3 -arg "-a ${tmpdir}/v3.sock" -vcl { backend b None; sub vcl_recv { if (req.url == "/nobody") { set client.identity = "Samuel B. Nobody"; } set req.http.id = client.identity; return(synth(200)); } sub vcl_synth { set resp.http.id = req.http.id; } } -start client c2 -connect "${tmpdir}/v3.sock" { txreq -url "/nobody" rxresp expect resp.status == 200 expect resp.http.id == "Samuel B. Nobody" txreq rxresp expect resp.status == 200 expect resp.http.id == "0.0.0.0" } -run varnish-7.5.0/bin/varnishtest/tests/v00055.vtc000066400000000000000000000010441457605730600210520ustar00rootroot00000000000000varnishtest "Check backend connection limit with UDS backends" barrier b1 cond 2 barrier b2 cond 2 server s1 -listen "${tmpdir}/s1.sock" { rxreq barrier b1 sync barrier b2 sync txresp } -start varnish v1 -vcl { backend default { .path = "${s1_sock}"; .max_connections = 1; } sub vcl_recv { return(pass); } } -start client c1 { txreq rxresp expect resp.status == 200 } -start client c2 { barrier b1 sync txreq rxresp expect resp.status == 503 } -run barrier b2 sync client c1 -wait varnish v1 -expect backend_busy == 1 varnish-7.5.0/bin/varnishtest/tests/v00056.vtc000066400000000000000000000024201457605730600210520ustar00rootroot00000000000000varnishtest "Check req.backend.healthy with UDS backends" barrier b1 cond 2 barrier b2 cond 2 barrier b3 cond 2 barrier b4 cond 2 server s1 -listen "${tmpdir}/s1.sock" { rxreq barrier b1 sync expect req.url == "/" txresp -body "slash" accept rxreq barrier b2 sync barrier b3 sync expect req.url == "/" txresp -body "slash" accept barrier b4 sync } -start varnish v1 -vcl { import std; probe foo { .url = "/"; .timeout = 1m; .interval = 1s; .window = 3; .threshold = 2; .initial = 0; } backend default { .path = "${s1_sock}"; .max_connections = 1; .probe = foo; } sub vcl_recv { if (std.healthy(default)) { return(synth(200,"Backend healthy")); } else { return(synth(500,"Backend sick")); } } } -start varnish v1 -cliok "backend.list -p" client c1 { txreq rxresp expect resp.status == 500 } -run logexpect l1 -v v1 -g raw -q Backend_health { expect 0 0 Backend_health "default Still sick" } -start barrier b1 sync barrier b2 sync logexpect l1 -wait client c2 { txreq rxresp expect resp.status == 500 } -run logexpect l2 -v v1 -g raw -q Backend_health { expect 0 0 Backend_health "default Went healthy" } -start barrier b3 sync barrier b4 sync logexpect l2 -wait client c3 { txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/v00057.vtc000066400000000000000000000013271457605730600210600ustar00rootroot00000000000000varnishtest "Test host header specification with UDS backends" server s1 -listen "${tmpdir}/s1.sock" { rxreq expect req.url == "/foo" expect req.http.host == "snafu" txresp -body "foo1" rxreq expect req.url == "/bar" expect req.http.host == "0.0.0.0" txresp -body "foo1" } -start varnish v1 -vcl+backend { } -start client c1 { txreq -url "/foo" -hdr "Host: snafu" rxresp txreq -url "/bar" -proto HTTP/1.0 rxresp } -run server s2 -listen "${tmpdir}/s2.sock" { rxreq expect req.url == "/barf" expect req.http.host == "FOObar" txresp -body "foo1" } -start 
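# With .host_header set on a UDS backend, a request lacking a Host header
# gets that value instead of the 0.0.0.0 default seen above.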
varnish v1 -vcl { backend b1 { .path = "${s2_sock}"; .host_header = "FOObar"; } } client c1 { txreq -url "/barf" -proto HTTP/1.0 rxresp } -run varnish-7.5.0/bin/varnishtest/tests/v00058.vtc000066400000000000000000000130451457605730600210610ustar00rootroot00000000000000varnishtest "Test VRT STRANDS functions" varnish v1 -arg "-i foobar" -vcl { import debug; backend b None; sub vcl_init { # tests VRT_Strands() new c = debug.concat(server.identity + server.hostname + now); new e = debug.concat("" + server.identity + ""); } sub vcl_recv { return (synth(200)); } sub vcl_synth { set resp.http.C = c.get(); set resp.http.E = e.get(); set req.http.Foo = "foo"; set req.http.Bar = "bar"; set req.http.Baz = "baz"; # test VRT_StrandsWS() set resp.http.Concat-1 = debug.concatenate(req.http.Foo + req.http.Bar + req.http.Baz); set resp.http.Concat-2 = debug.concatenate("" + req.http.Unset + req.http.Foo + req.http.Unset + "" + req.http.Bar + "" + req.http.Unset + "" + req.http.Baz + "" + req.http.Unset); set resp.http.Concat-3 = debug.concatenate(req.http.Foo + req.http.Unset + ""); set resp.http.Concat-4 = debug.concatenate(req.http.Unset + "" + req.http.Foo); set resp.http.Concat-5 = debug.concatenate(req.http.Foo + req.http.Unset + req.http.Bar); set resp.http.Concat-6 = debug.concatenate(req.http.Foo); set resp.http.Concat-7 = debug.concatenate(req.http.Unset); # test VRT_StrandsCollect() set resp.http.Collect-1 = debug.collect(req.http.Foo + req.http.Bar + req.http.Baz); set resp.http.Collect-2 = debug.collect("" + req.http.Unset + req.http.Foo + req.http.Unset + "" + req.http.Bar + "" + req.http.Unset + "" + req.http.Baz + "" + req.http.Unset); set resp.http.Collect-3 = debug.collect(req.http.Foo + req.http.Unset + ""); set resp.http.Collect-4 = debug.collect(req.http.Unset + "" + req.http.Foo); set resp.http.Collect-5 = debug.collect(req.http.Foo + req.http.Unset + req.http.Bar); set resp.http.Collect-6 = debug.collect(req.http.Foo); set resp.http.Collect-7 = debug.collect(req.http.Unset); # test a STRANDS version of VRT_SetHdr() debug.sethdr(resp.http.Hdr-1, req.http.Foo + req.http.Bar + req.http.Baz); debug.sethdr(resp.http.Hdr-2, "" + req.http.Unset + req.http.Foo + req.http.Unset + "" + req.http.Bar + "" + req.http.Unset + "" + req.http.Baz + "" + req.http.Unset); debug.sethdr(resp.http.Hdr-3, req.http.Foo + req.http.Unset + ""); debug.sethdr(resp.http.Hdr-4, req.http.Unset + "" + req.http.Foo); debug.sethdr(resp.http.Hdr-5, req.http.Foo + req.http.Unset + req.http.Bar); debug.sethdr(resp.http.Hdr-6, req.http.Foo); debug.sethdr(resp.http.Hdr-7, req.http.Unset); set resp.http.Hdr-8 = debug.return_strands( debug.return_strands( req.url + "<-->" + req.url ) ); } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.http.C ~ "^foobar" expect resp.http.E == "foobar" expect resp.http.Concat-1 == "foobarbaz" expect resp.http.Concat-2 == "foobarbaz" expect resp.http.Concat-3 == "foo" expect resp.http.Concat-4 == "foo" expect resp.http.Concat-5 == "foobar" expect resp.http.Concat-6 == "foo" expect resp.http.Concat-7 == "" expect resp.http.Collect-1 == "foobarbaz" expect resp.http.Collect-2 == "foobarbaz" expect resp.http.Collect-3 == "foo" expect resp.http.Collect-4 == "foo" expect resp.http.Collect-5 == "foobar" expect resp.http.Collect-6 == "foo" expect resp.http.Collect-7 == "" expect resp.http.Hdr-1 == "foobarbaz" expect resp.http.Hdr-2 == "foobarbaz" expect resp.http.Hdr-3 == "foo" expect resp.http.Hdr-4 == "foo" expect resp.http.Hdr-5 == "foobar" expect resp.http.Hdr-6 == "foo" 
expect resp.http.Hdr-7 == "" expect resp.http.Hdr-8 == "/<-->/" } -run # out of workspace server s1 { rxreq expect req.http.Foo == "foo" expect req.http.Bar == "bar" expect req.http.Baz == "baz" expect req.http.Quux == "quux" txresp } -start varnish v1 -vcl+backend { import debug; import vtc; sub vcl_recv { set req.http.Foo = "foo"; set req.http.Bar = "bar"; set req.http.Baz = "baz"; set req.http.Quux = "quux"; vtc.workspace_alloc(client, -11); if (req.url == "/1") { # VRT_StrandsWS() marks the WS as overflowed, # returns NULL, but does not invoke VCL failure. # Out-of-workspace doesn't happen until delivery. set req.http.Result = debug.concatenate(req.http.Foo + req.http.Bar + req.http.Baz + req.http.Quux); } elsif (req.url == "/2") { # VRT_CollectStrands() invokes VCL failure. set req.http.Result = debug.collect(req.http.Foo + req.http.Bar + req.http.Baz + req.http.Quux); } } } client c1 { txreq -url "/1" rxresp expect resp.status >= 500 expect resp.status <= 503 } -run client c1 { txreq -url "/2" rxresp expect resp.status >= 500 expect resp.status <= 503 } -run varnish v1 -expect client_resp_500 == 1 varnish v1 -expect ws_client_overflow == 2 # Test $STRINGS.{upper|lower} server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_deliver { set resp.http.l-proto = resp.proto.lower(); set resp.http.u-proto = resp.proto.upper(); set resp.http.l-req = req.url.lower(); set resp.http.u-req = req.url.upper(); set resp.http.l-mod = (req.url + "bar").lower(); set resp.http.u-mod = (req.url + "bar").upper(); set resp.http.uu-mod = (req.url + "bar").upper().upper(); set resp.http.ul-mod = (req.url + "bar").upper().lower(); } } client c1 { txreq -url /foo rxresp expect resp.http.l-proto == "http/1.1" expect resp.http.u-proto == "HTTP/1.1" expect resp.http.l-req == "/foo" expect resp.http.u-req == "/FOO" expect resp.http.l-mod == "/foobar" expect resp.http.u-mod == "/FOOBAR" expect resp.http.uu-mod == "/FOOBAR" expect resp.http.ul-mod == "/foobar" } -run varnish-7.5.0/bin/varnishtest/tests/v00059.vtc000066400000000000000000000017441457605730600210650ustar00rootroot00000000000000varnishtest "concurrent pipe limit" barrier b1 cond 2 barrier b2 cond 2 barrier b3 cond 2 barrier b4 cond 2 server s1 { rxreq barrier b1 sync barrier b2 sync txresp } -start server s2 { rxreq barrier b3 sync barrier b4 sync txresp } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.url == "/c1") { set req.backend_hint = s1; } elsif (req.url == "/c2") { set req.backend_hint = s2; } return (pipe); } } -start varnish v1 -expect MAIN.n_pipe == 0 client c1 { txreq -url "/c1" rxresp expect resp.status == 200 } -start barrier b1 sync varnish v1 -expect MAIN.n_pipe == 1 client c2 { txreq -url "/c2" rxresp expect resp.status == 200 } -start barrier b3 sync varnish v1 -expect MAIN.n_pipe == 2 varnish v1 -cliok "param.set pipe_sess_max 2" client c3 { txreq rxresp expect resp.status == 503 } -run barrier b2 sync varnish v1 -expect MAIN.n_pipe == 1 barrier b4 sync varnish v1 -expect MAIN.n_pipe == 0 varnish v1 -expect MAIN.pipe_limited == 1 varnish-7.5.0/bin/varnishtest/tests/v00060.vtc000066400000000000000000000015361457605730600210540ustar00rootroot00000000000000varnishtest "None backend allowed" server s1 { rxreq txresp } -start varnish v1 -vcl { backend default none; } -start client c1 { txreq rxresp expect resp.status == 503 } -run # Test NULL none default backend varnish v1 -vcl+backend { backend null_backend none; backend null_backend_uppercase None; sub vcl_recv { if (req.url ~ "/no_backend_lowercase") { set 
req.backend_hint = null_backend; } else if (req.url ~ "no_backend_uppercase") { set req.backend_hint = null_backend_uppercase; } } } client c1 { txreq rxresp expect resp.status == 200 txreq -url "/no_backend_lowercase" rxresp expect resp.status == 503 txreq -url "/no_backend_uppercase" rxresp expect resp.status == 503 } -run varnish v1 -cliok "param.set vcc_feature -err_unref" varnish v1 -vcl { backend bad { .host = "${bad_backend}"; } backend nil none; } varnish-7.5.0/bin/varnishtest/tests/v00062.vtc000066400000000000000000000027501457605730600210550ustar00rootroot00000000000000varnishtest "backend_(fetch|response) error state with custom status/reason" server s1 -repeat 3 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_backend_fetch { if (bereq.url == "/errorsynth") { return (error(403)); } if (bereq.url == "/errorsynthbody") { return (error(403, "Steven has no cookies")); } if (bereq.url == "/error") { return (error); } } sub vcl_backend_response { if (bereq.url == "/resperrorsynth") { return (error(403)); } if (bereq.url == "/resperrorsynthbody") { return (error(403, "Steven has no cookies")); } if (bereq.url == "/resperror") { return (error); } } sub vcl_backend_error { set beresp.http.error = "visited"; } } -start client c1 { txreq -url /error rxresp expect resp.status == 503 expect resp.http.error == visited txreq -url /errorsynth rxresp expect resp.status == 403 expect resp.http.error == visited txreq -url /errorsynthbody rxresp expect resp.status == 403 expect resp.reason == "Steven has no cookies" expect resp.http.error == visited txreq -url /resperror rxresp expect resp.status == 503 expect resp.http.error == visited txreq -url /resperrorsynth rxresp expect resp.status == 403 expect resp.http.error == visited txreq -url /resperrorsynthbody rxresp expect resp.status == 403 expect resp.reason == "Steven has no cookies" expect resp.http.error == visited } -run # Make sure we don't increment the failed fetch counter varnish v1 -expect fetch_failed == 0 varnish-7.5.0/bin/varnishtest/tests/v00063.vtc000066400000000000000000000013341457605730600210530ustar00rootroot00000000000000varnishtest "Create a backend after a COLD event" server s1 -start varnish v1 -cliok "param.set feature +no_coredump" varnish v1 -expectexit 0x40 varnish v1 -vcl+backend { import debug; sub vcl_init { debug.cold_backend(); } } -start # load and use a new VCL varnish v1 -vcl+backend {} # expect a panic during the COLD event, COLD state varnish v1 -clierr 400 "vcl.state vcl1 cold" delay 3 varnish v1 -cliexpect "Dynamic Backends can only be added to warm VCLs" panic.show varnish v1 -cliok panic.clear varnish v1 -vcl+backend { import debug; sub vcl_init { debug.cooling_backend(); } } varnish v1 -cliok "vcl.use vcl2" # expect no panic during the COLD event, COOLING state varnish v1 -cliok "vcl.state vcl3 cold" varnish-7.5.0/bin/varnishtest/tests/v00064.vtc000066400000000000000000000027751457605730600210660ustar00rootroot00000000000000varnishtest "vbf_stp_condfetch could not get storage #3273" server s1 { rxreq expect req.url == "/transient" txresp -noserver -bodylen 1048400 rxreq expect req.url == "/malloc" txresp -noserver -hdr "Cache-Control: max-age=2" -hdr "Last-Modified: Fri, 03 Apr 2020 13:00:01 GMT" -bodylen 1048292 rxreq expect req.http.If-Modified-Since == "Fri, 03 Apr 2020 13:00:01 GMT" expect req.url == "/malloc" txresp -noserver -status 304 } -start varnish v1 \ -arg "-s Transient=default,1m" \ -arg "-s malloc,1m" \ -arg "-p nuke_limit=0" \ -syntax 4.0 \ -vcl+backend { sub vcl_backend_response 
{ if (bereq.url == "/transient") { set beresp.storage = storage.Transient; # Unset Date header to not change the object sizes unset beresp.http.Date; } } } -start varnish v1 -cliok "param.set debug +syncvsl" delay .1 client c1 { # Fill transient txreq -url "/transient" rxresp expect resp.status == 200 } -run delay .1 varnish v1 -expect SM?.Transient.g_bytes > 1048000 varnish v1 -expect SM?.Transient.g_space < 50 client c1 { # Fill malloc txreq -url "/malloc" -hdr "If-Modified-Since: Fri, 03 Apr 2020 12:00:01 GMT" rxresp expect resp.status == 200 delay 3 } -run varnish v1 -expect SM?.s0.g_bytes > 1048000 varnish v1 -expect SM?.s0.g_space < 50 client c1 { # Check that Varnish is still alive txreq -url "/malloc" -hdr "If-Modified-Since: Fri, 03 Apr 2020 12:00:01 GMT" rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/v00065.vtc000066400000000000000000000014061457605730600210550ustar00rootroot00000000000000varnishtest "VRE_quote() coverage" varnish v1 -vcl { import debug; backend be none; sub vcl_recv { return (synth(200)); } sub vcl_synth { set resp.http.sanity = regsub("\Q", "\Q\Q\E", "sane"); set resp.http.q0 = debug.re_quote(""); set resp.http.q1 = debug.re_quote("hello"); set resp.http.q2 = debug.re_quote("hello\E"); set resp.http.q3 = debug.re_quote("hello\Eworld"); set resp.http.q4 = debug.re_quote("\E"); set resp.http.q5 = debug.re_quote("\Q"); } } -start client c1 { txreq rxresp expect resp.http.sanity == sane expect resp.http.q0 == {} expect resp.http.q1 == {\Qhello\E} expect resp.http.q2 == {\Qhello\\EE} expect resp.http.q3 == {\Qhello\\EE\Qworld\E} expect resp.http.q4 == {\Q\\EE} expect resp.http.q5 == {\Q\Q\E} } -run varnish-7.5.0/bin/varnishtest/tests/v00066.vtc000066400000000000000000000005661457605730600210640ustar00rootroot00000000000000varnishtest "long string coverage" varnish v1 -vcl { backend default none; sub vcl_recv { return (synth(200)); } sub vcl_synth { set resp.body = """{"key":"value"}"""; return (deliver); } } -start client c1 { txreq rxresp expect resp.status == 200 expect resp.bodylen == 15 expect resp.body == {{"key":"value"}} } -run varnish-7.5.0/bin/varnishtest/tests/v00067.vtc000066400000000000000000000027331457605730600210630ustar00rootroot00000000000000varnishtest "The active VCL must always be warm" shell { cat >${tmpdir}/f1 <<-EOF vcl 4.1; backend default none; EOF } # Load a cold VCL. This should not become the active VCL. varnish v1 -cliok "vcl.load vcl_cold ${tmpdir}/f1 cold" varnish v1 -cliexpect "available *cold *cold *- *vcl_cold" "vcl.list" # The cache should not start without a warm VCL. varnish v1 -clierr 300 "start" # Load a warm VCL and make it the active VCL. varnish v1 -cliok "vcl.load vcl_warm ${tmpdir}/f1 warm" varnish v1 -cliok "vcl.use vcl_warm" varnish v1 -cliexpect "active *warm *warm *- *vcl_warm" "vcl.list" # The cache now starts. varnish v1 -cliok "start" varnish v1 -cliexpect "available *cold *cold *0 *vcl_cold" "vcl.list" varnish v1 -cliexpect "active *warm *warm *0 *vcl_warm" "vcl.list" # Load an automatically warming VCL, and set it as the active VCL. varnish v1 -cliok "vcl.load vcl_auto ${tmpdir}/f1 warm" varnish v1 -cliok "vcl.use vcl_auto" varnish v1 -cliexpect "available *warm *warm *0 *vcl_warm" "vcl.list" varnish v1 -cliexpect "active *warm *warm *0 *vcl_auto" "vcl.list" # Cool the previous active VCL. varnish v1 -cliok "vcl.state vcl_warm cold" varnish v1 -cliexpect "available *cold *cold *0 *vcl_warm" "vcl.list" # Restart the cache. 
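# Only the active VCL (vcl_auto) is expected to come back warm after the
# restart; the other VCLs stay cold.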
varnish v1 -cliok "stop" -cliok "start" varnish v1 -cliexpect "available *cold *cold *0 *vcl_cold" "vcl.list" varnish v1 -cliexpect "available *cold *cold *0 *vcl_warm" "vcl.list" varnish v1 -cliexpect "active *warm *warm *0 *vcl_auto" "vcl.list" varnish-7.5.0/bin/varnishtest/tests/v00068.vtc000066400000000000000000000007221457605730600210600ustar00rootroot00000000000000varnishtest "unset bereq.body with cached req body" server s1 { rxreq expect req.method == "GET" expect req.http.Content-Length == txresp rxreq expect req.method == "GET" txresp } -start varnish v1 -vcl+backend { import std; sub vcl_recv { std.cache_req_body(2KB); } sub vcl_backend_fetch { unset bereq.body; } } -start client c1 { txreq -body "fine" rxresp expect resp.status == 200 txreq rxresp expect resp.status == 200 } -run varnish-7.5.0/bin/varnishtest/tests/v00069.vtc000066400000000000000000000035361457605730600210670ustar00rootroot00000000000000varnishtest "Unset session timeouts" varnish v1 -cliok "param.set timeout_idle 42" varnish v1 -cliok "param.set timeout_linger 42" varnish v1 -cliok "param.set idle_send_timeout 42" varnish v1 -cliok "param.set send_timeout 42" varnish v1 -vcl { backend be none; sub vcl_recv { return (synth(200)); } sub vcl_synth { set resp.http.def_timeout_idle = sess.timeout_idle; set resp.http.def_timeout_linger = sess.timeout_linger; set resp.http.def_idle_send_timeout = sess.idle_send_timeout; set resp.http.def_send_timeout = sess.send_timeout; set sess.timeout_idle = 0s; set sess.timeout_linger = 0s; set sess.idle_send_timeout = 0s; set sess.send_timeout = 0s; set resp.http.set_timeout_idle = sess.timeout_idle; set resp.http.set_timeout_linger = sess.timeout_linger; set resp.http.set_idle_send_timeout = sess.idle_send_timeout; set resp.http.set_send_timeout = sess.send_timeout; unset sess.timeout_idle; unset sess.timeout_linger; unset sess.idle_send_timeout; unset sess.send_timeout; set resp.http.unset_timeout_idle = sess.timeout_idle; set resp.http.unset_timeout_linger = sess.timeout_linger; set resp.http.unset_idle_send_timeout = sess.idle_send_timeout; set resp.http.unset_send_timeout = sess.send_timeout; } } -start client c1 { txreq rxresp expect resp.http.def_timeout_idle == 42.000 expect resp.http.def_timeout_linger == 42.000 expect resp.http.def_idle_send_timeout == 42.000 expect resp.http.def_send_timeout == 42.000 expect resp.http.set_timeout_idle == 0.000 expect resp.http.set_timeout_linger == 0.000 expect resp.http.set_idle_send_timeout == 0.000 expect resp.http.set_send_timeout == 0.000 expect resp.http.unset_timeout_idle == 42.000 expect resp.http.unset_timeout_linger == 42.000 expect resp.http.unset_idle_send_timeout == 42.000 expect resp.http.unset_send_timeout == 42.000 } -run varnish-7.5.0/bin/varnishtest/tests/v00070.vtc000066400000000000000000000031771457605730600210600ustar00rootroot00000000000000varnishtest "Unset bereq timeouts" varnish v1 -cliok "param.set connect_timeout 42" varnish v1 -cliok "param.set first_byte_timeout 42" varnish v1 -cliok "param.set between_bytes_timeout 42" varnish v1 -vcl { backend be none; sub vcl_backend_fetch { return (error(200)); } sub vcl_backend_error { set beresp.http.def_connect_timeout = bereq.connect_timeout; set beresp.http.def_first_byte_timeout = bereq.first_byte_timeout; set beresp.http.def_between_bytes_timeout = bereq.between_bytes_timeout; set bereq.connect_timeout = 0s; set bereq.first_byte_timeout = 0s; set bereq.between_bytes_timeout = 0s; set beresp.http.set_connect_timeout = bereq.connect_timeout; set 
beresp.http.set_first_byte_timeout = bereq.first_byte_timeout; set beresp.http.set_between_bytes_timeout = bereq.between_bytes_timeout; unset bereq.connect_timeout; unset bereq.first_byte_timeout; unset bereq.between_bytes_timeout; set beresp.http.unset_connect_timeout = bereq.connect_timeout; set beresp.http.unset_first_byte_timeout = bereq.first_byte_timeout; set beresp.http.unset_between_bytes_timeout = bereq.between_bytes_timeout; } } -start client c1 { txreq rxresp expect resp.http.def_connect_timeout == 42.000 expect resp.http.def_first_byte_timeout == 42.000 expect resp.http.def_between_bytes_timeout == 42.000 expect resp.http.set_connect_timeout == 0.000 expect resp.http.set_first_byte_timeout == 0.000 expect resp.http.set_between_bytes_timeout == 0.000 expect resp.http.unset_connect_timeout == 42.000 expect resp.http.unset_first_byte_timeout == 42.000 expect resp.http.unset_between_bytes_timeout == 42.000 } -run varnish-7.5.0/bin/varnishtest/tests/v00071.vtc000066400000000000000000000005401457605730600210500ustar00rootroot00000000000000varnishtest "Disabled timeout in VCL" varnish v1 -cliok "param.set idle_send_timeout never" varnish v1 -vcl { backend be none; sub vcl_recv { return (synth(200)); } sub vcl_synth { set resp.http.idle_send_timeout = sess.idle_send_timeout; } } -start client c1 { txreq rxresp expect resp.http.idle_send_timeout == 999999999999.999 } -run varnish-7.5.0/bin/varnishtest/tests/x00000.vtc000066400000000000000000000005651457605730600210510ustar00rootroot00000000000000varnishtest "Test VMOD import from VEXT" feature topbuild server s1 { rxreq txresp } -start varnish v1 \ -arg "-pvmod_path=/nonexistent" \ -arg "-E${topbuild}/vmod/.libs/libvmod_std.so" \ -vcl+backend { import std; sub vcl_deliver { set resp.http.foobar = std.random(10,99); } } -start client c1 { txreq rxresp expect resp.http.foobar ~ [0-9] } -run varnish-7.5.0/bin/varnishtest/vtc.c000066400000000000000000000316131457605730600173000ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include "config.h" #include #include #include #include #include #include #include #include "vtc.h" #include "vtc_log.h" #include "vav.h" #include "vrnd.h" #define MAX_TOKENS 200 volatile sig_atomic_t vtc_error; /* Error encountered */ int vtc_stop; /* Stops current test without error */ pthread_t vtc_thread; int ign_unknown_macro = 0; static struct vtclog *vltop; static pthread_mutex_t vtc_vrnd_mtx; static void vtc_vrnd_lock(void) { PTOK(pthread_mutex_lock(&vtc_vrnd_mtx)); } static void vtc_vrnd_unlock(void) { PTOK(pthread_mutex_unlock(&vtc_vrnd_mtx)); } static const char *tfn; /********************************************************************** * Macro facility */ struct macro { unsigned magic; #define MACRO_MAGIC 0x803423e3 VTAILQ_ENTRY(macro) list; char *name; char *val; macro_f *func; }; static VTAILQ_HEAD(,macro) macro_list = VTAILQ_HEAD_INITIALIZER(macro_list); static const struct cmds global_cmds[] = { #define CMD_GLOBAL(n) { #n, cmd_##n }, #include "cmds.h" { NULL, NULL } }; static const struct cmds top_cmds[] = { #define CMD_TOP(n) { #n, cmd_##n }, #include "cmds.h" { NULL, NULL } }; /**********************************************************************/ static struct macro * macro_def_int(const char *name, macro_f *func, const char *fmt, va_list ap) { struct macro *m; char buf[2048]; VTAILQ_FOREACH(m, ¯o_list, list) if (!strcmp(name, m->name)) break; if (m == NULL) { ALLOC_OBJ(m, MACRO_MAGIC); AN(m); REPLACE(m->name, name); AN(m->name); VTAILQ_INSERT_TAIL(¯o_list, m, list); } AN(m); if (func != NULL) { AZ(fmt); m->func = func; } else { AN(fmt); vbprintf(buf, fmt, ap); REPLACE(m->val, buf); AN(m->val); } return (m); } /********************************************************************** * This is for defining macros before we fork the child process which * runs the test-case. */ void extmacro_def(const char *name, macro_f *func, const char *fmt, ...) { va_list ap; va_start(ap, fmt); (void)macro_def_int(name, func, fmt, ap); va_end(ap); } /********************************************************************** * Below this point is run inside the testing child-process. */ static pthread_mutex_t macro_mtx; static void init_macro(void) { struct macro *m; /* Dump the extmacros for completeness */ VTAILQ_FOREACH(m, ¯o_list, list) { if (m->val != NULL) vtc_log(vltop, 4, "extmacro def %s=%s", m->name, m->val); else vtc_log(vltop, 4, "extmacro def %s(...)", m->name); } PTOK(pthread_mutex_init(¯o_mtx, NULL)); } void macro_def(struct vtclog *vl, const char *instance, const char *name, const char *fmt, ...) 
{ char buf1[256]; struct macro *m; va_list ap; AN(fmt); if (instance != NULL) { bprintf(buf1, "%s_%s", instance, name); name = buf1; } PTOK(pthread_mutex_lock(¯o_mtx)); va_start(ap, fmt); m = macro_def_int(name, NULL, fmt, ap); va_end(ap); vtc_log(vl, 4, "macro def %s=%s", name, m->val); PTOK(pthread_mutex_unlock(¯o_mtx)); } void macro_undef(struct vtclog *vl, const char *instance, const char *name) { char buf1[256]; struct macro *m; if (instance != NULL) { bprintf(buf1, "%s_%s", instance, name); name = buf1; } PTOK(pthread_mutex_lock(¯o_mtx)); VTAILQ_FOREACH(m, ¯o_list, list) if (!strcmp(name, m->name)) break; if (m != NULL) { if (!vtc_stop) vtc_log(vl, 4, "macro undef %s", name); CHECK_OBJ(m, MACRO_MAGIC); VTAILQ_REMOVE(¯o_list, m, list); free(m->name); free(m->val); FREE_OBJ(m); } PTOK(pthread_mutex_unlock(¯o_mtx)); } unsigned macro_isdef(const char *instance, const char *name) { char buf1[256]; struct macro *m; if (instance != NULL) { bprintf(buf1, "%s_%s", instance, name); name = buf1; } PTOK(pthread_mutex_lock(¯o_mtx)); VTAILQ_FOREACH(m, ¯o_list, list) if (!strcmp(name, m->name)) break; PTOK(pthread_mutex_unlock(¯o_mtx)); return (m != NULL); } void macro_cat(struct vtclog *vl, struct vsb *vsb, const char *b, const char *e) { struct macro *m; char **argv, *retval = NULL; const char *err = NULL; int argc; AN(b); if (e == NULL) e = strchr(b, '\0'); AN(e); argv = VAV_ParseTxt(b, e, &argc, ARGV_COMMA); AN(argv); if (*argv != NULL) vtc_fatal(vl, "Macro ${%.*s} parsing failed: %s", (int)(e - b), b, *argv); assert(argc >= 2); PTOK(pthread_mutex_lock(¯o_mtx)); VTAILQ_FOREACH(m, ¯o_list, list) { CHECK_OBJ_NOTNULL(m, MACRO_MAGIC); if (!strcmp(argv[1], m->name)) break; } if (m != NULL) { if (m->func != NULL) { AZ(m->val); retval = m->func(argc, argv, &err); if (err == NULL) AN(retval); } else { AN(m->val); if (argc == 2) REPLACE(retval, m->val); else err = "macro does not take arguments"; } } PTOK(pthread_mutex_unlock(¯o_mtx)); VAV_Free(argv); if (err != NULL) vtc_fatal(vl, "Macro ${%.*s} failed: %s", (int)(e - b), b, err); if (retval == NULL) { if (!ign_unknown_macro) vtc_fatal(vl, "Macro ${%.*s} not found", (int)(e - b), b); VSB_printf(vsb, "${%.*s}", (int)(e - b), b); return; } VSB_cat(vsb, retval); free(retval); } struct vsb * macro_expandf(struct vtclog *vl, const char *fmt, ...) { va_list ap; struct vsb *vsb1, *vsb2; vsb1 = VSB_new_auto(); AN(vsb1); va_start(ap, fmt); VSB_vprintf(vsb1, fmt, ap); va_end(ap); AZ(VSB_finish(vsb1)); vsb2 = macro_expand(vl, VSB_data(vsb1)); VSB_destroy(&vsb1); return (vsb2); } struct vsb * macro_expand(struct vtclog *vl, const char *text) { struct vsb *vsb; const char *p, *q; vsb = VSB_new_auto(); AN(vsb); while (*text != '\0') { p = strstr(text, "${"); if (p == NULL) { VSB_cat(vsb, text); break; } VSB_bcat(vsb, text, p - text); q = strchr(p, '}'); if (q == NULL) { VSB_cat(vsb, text); break; } assert(p[0] == '$'); assert(p[1] == '{'); assert(q[0] == '}'); p += 2; macro_cat(vl, vsb, p, q); text = q + 1; } AZ(VSB_finish(vsb)); return (vsb); } /********************************************************************** * Parse a string * * We make a copy of the string and deliberately leak it, so that all * the cmd functions we call don't have to strdup(3) all over the place. * * Static checkers like Coverity may bitch about this, but we don't care. 
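 *
 * Each test case runs in its own short-lived child process (see the
 * "testing child-process" note above), so the leaked copies are
 * reclaimed when that process exits.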
*/ void parse_string(struct vtclog *vl, void *priv, const char *spec) { char *token_s[MAX_TOKENS], *token_e[MAX_TOKENS]; struct vsb *token_exp; char *e, *p, *q, *f, *buf; int nest_brace; int tn; unsigned n, m; const struct cmds *cp; AN(spec); buf = strdup(spec); AN(buf); e = strchr(buf, '\0'); AN(e); for (p = buf; p < e; p++) { if (vtc_error || vtc_stop) break; /* Start of line */ if (isspace(*p)) continue; if (*p == '\n') continue; if (*p == '#') { for (; *p != '\0' && *p != '\n'; p++) ; if (*p == '\0') break; continue; } q = strchr(p, '\n'); if (q == NULL) q = strchr(p, '\0'); if (q - p > 60) vtc_log(vl, 2, "=== %.60s...", p); else vtc_log(vl, 2, "=== %.*s", (int)(q - p), p); /* First content on line, collect tokens */ memset(token_s, 0, sizeof token_s); memset(token_e, 0, sizeof token_e); tn = 0; f = p; while (p < e) { assert(tn < MAX_TOKENS); assert(p < e); if (*p == '\n') { /* End on NL */ break; } if (isspace(*p)) { /* Inter-token whitespace */ p++; continue; } if (*p == '\\' && p[1] == '\n') { /* line-cont */ p += 2; continue; } if (*p == '"') { /* quotes */ token_s[tn] = ++p; q = p; for (; *p != '\0'; p++) { assert(p < e); if (*p == '"') break; if (*p == '\\') { p += VAV_BackSlash(p, q) - 1; q++; } else { if (*p == '\n') vtc_fatal(vl, "Unterminated quoted string in line: %*.*s", (int)(p - f), (int)(p - f), f); assert(*p != '\n'); *q++ = *p; } } token_e[tn++] = q; p++; } else if (*p == '{') { /* Braces */ nest_brace = 0; token_s[tn] = p + 1; for (; p < e; p++) { if (*p == '{') nest_brace++; else if (*p == '}') { if (--nest_brace == 0) break; } } assert(*p == '}'); token_e[tn++] = p++; } else { /* other tokens */ token_s[tn] = p; for (; p < e && !isspace(*p); p++) continue; token_e[tn++] = p; } } assert(p <= e); assert(tn < MAX_TOKENS); token_s[tn] = NULL; for (tn = 0; token_s[tn] != NULL; tn++) { AN(token_e[tn]); /*lint !e771 */ *token_e[tn] = '\0'; /*lint !e771 */ if (NULL != strstr(token_s[tn], "${")) { token_exp = macro_expand(vl, token_s[tn]); if (vtc_error) return; token_s[tn] = VSB_data(token_exp); token_e[tn] = strchr(token_s[tn], '\0'); } } /* SECTION: loop loop * * loop NUMBER STRING * Process STRING as a specification, NUMBER times. 
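		 *
		 * For example, a hypothetical fragment such as::
		 *
		 *         loop 3 {
		 *                 delay .1
		 *         }
		 *
		 * runs the enclosed specification three times.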
* * This works inside all specification strings */ if (!strcmp(token_s[0], "loop")) { n = strtoul(token_s[1], NULL, 0); for (m = 0; m < n; m++) { vtc_log(vl, 4, "Loop #%u", m); parse_string(vl, priv, token_s[2]); } continue; } AN(vl->cmds); for (cp = vl->cmds; cp->name != NULL; cp++) if (!strcmp(token_s[0], cp->name)) break; if (cp->name == NULL) { for (cp = global_cmds; cp->name != NULL; cp++) if (!strcmp(token_s[0], cp->name)) break; } if (cp->name == NULL) vtc_fatal(vl, "Unknown command: \"%s\"", token_s[0]); assert(cp->cmd != NULL); cp->cmd(token_s, priv, vl); } } /********************************************************************** * Reset commands (between tests) */ static void reset_cmds(const struct cmds *cmd) { for (; cmd->name != NULL; cmd++) cmd->cmd(NULL, NULL, NULL); } /********************************************************************** * Execute a file */ int fail_out(void) { unsigned old_err; static int once = 0; if (once++) { vtc_log(vltop, 1, "failure during reset"); return (vtc_error); } old_err = vtc_error; if (!vtc_stop) vtc_stop = 1; vtc_log(vltop, 1, "RESETTING after %s", tfn); reset_cmds(global_cmds); reset_cmds(top_cmds); vtc_error |= old_err; if (vtc_error) vtc_log(vltop, 1, "TEST %s FAILED", tfn); else vtc_log(vltop, 1, "TEST %s completed", tfn); if (vtc_stop > 1) return (1); return (vtc_error); } int exec_file(const char *fn, const char *script, const char *tmpdir, char *logbuf, unsigned loglen) { FILE *f; struct vsb *vsb; const char *p; (void)signal(SIGPIPE, SIG_IGN); PTOK(pthread_mutex_init(&vtc_vrnd_mtx, NULL)); VRND_Lock = vtc_vrnd_lock; VRND_Unlock = vtc_vrnd_unlock; VRND_SeedAll(); tfn = fn; vtc_loginit(logbuf, loglen); vltop = vtc_logopen("top"); AN(vltop); vtc_log_set_cmd(vltop, top_cmds); vtc_log(vltop, 1, "TEST %s starting", fn); init_macro(); init_server(); init_syslog(); init_tunnel(); vsb = VSB_new_auto(); AN(vsb); if (*fn != '/') macro_cat(vltop, vsb, "pwd", NULL); p = strrchr(fn, '/'); if (p != NULL) { VSB_putc(vsb, '/'); VSB_bcat(vsb, fn, p - fn); } if (VSB_len(vsb) == 0) VSB_putc(vsb, '/'); AZ(VSB_finish(vsb)); macro_def(vltop, NULL, "testdir", "%s", VSB_data(vsb)); VSB_destroy(&vsb); /* Move into our tmpdir */ AZ(chdir(tmpdir)); macro_def(vltop, NULL, "tmpdir", "%s", tmpdir); p = strrchr(tmpdir, '/'); AN(p); p++; AN(*p); macro_def(vltop, NULL, "vtcid", "%s", p); /* Drop file to tell what was going on here */ f = fopen("INFO", "w"); AN(f); fprintf(f, "Test case: %s\n", fn); AZ(fclose(f)); vtc_stop = 0; vtc_thread = pthread_self(); parse_string(vltop, NULL, script); return (fail_out()); } varnish-7.5.0/bin/varnishtest/vtc.h000066400000000000000000000125111457605730600173010ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include #include #include #include #ifdef HAVE_PTHREAD_NP_H #include #endif #include "vdef.h" #include "miniobj.h" #include "vas.h" #include "vqueue.h" #include "vsb.h" #define VTC_CHECK_NAME(vl, nm, type, chr) \ do { \ AN(nm); \ if (*(nm) != chr) \ vtc_fatal(vl, \ type " name must start with '%c' (got %s)", \ chr, nm); \ } while (0) struct vtclog; struct suckaddr; #define CMD_ARGS char * const *av, void *priv, struct vtclog *vl typedef void cmd_f(CMD_ARGS); struct cmds { const char *name; cmd_f *cmd; }; void parse_string(struct vtclog *vl, void *priv, const char *spec); int fail_out(void); #define CMD_GLOBAL(n) cmd_f cmd_##n; #define CMD_TOP(n) cmd_f cmd_##n; #include "cmds.h" extern volatile sig_atomic_t vtc_error; /* Error, bail out */ extern int vtc_stop; /* Abandon current test, no error */ extern pthread_t vtc_thread; extern int iflg; extern vtim_dur vtc_maxdur; extern char *vmod_path; extern struct vsb *params_vsb; extern int leave_temp; extern int ign_unknown_macro; extern const char *default_listen_addr; void init_server(void); void init_syslog(void); void init_tunnel(void); /* Sessions */ struct vtc_sess *Sess_New(struct vtclog *vl, const char *name); void Sess_Destroy(struct vtc_sess **spp); int Sess_GetOpt(struct vtc_sess *, char * const **); int sess_process(struct vtclog *vl, struct vtc_sess *, const char *spec, int sock, int *sfd, const char *addr); typedef int sess_conn_f(void *priv, struct vtclog *); typedef void sess_disc_f(void *priv, struct vtclog *, int *fd); pthread_t Sess_Start_Thread( void *priv, struct vtc_sess *vsp, sess_conn_f *conn, sess_disc_f *disc, const char *listen_addr, int *asocket, const char *spec ); char * synth_body(const char *len, int rnd); void cmd_server_gen_vcl(struct vsb *vsb); void cmd_server_gen_haproxy_conf(struct vsb *vsb); void vtc_log_set_cmd(struct vtclog *vl, const struct cmds *cmds); void vtc_loginit(char *buf, unsigned buflen); struct vtclog *vtc_logopen(const char *id, ...) v_printflike_(1, 2); void vtc_logclose(void *arg); void vtc_log(struct vtclog *vl, int lvl, const char *fmt, ...) v_printflike_(3, 4); void vtc_fatal(struct vtclog *vl, const char *, ...) v_noreturn_ v_printflike_(2,3); void vtc_dump(struct vtclog *vl, int lvl, const char *pfx, const char *str, int len); void vtc_hexdump(struct vtclog *, int , const char *, const void *, unsigned); int vtc_send_proxy(int fd, int version, const struct suckaddr *sac, const struct suckaddr *sas); int exec_file(const char *fn, const char *script, const char *tmpdir, char *logbuf, unsigned loglen); void macro_undef(struct vtclog *vl, const char *instance, const char *name); void macro_def(struct vtclog *vl, const char *instance, const char *name, const char *fmt, ...) 
v_printflike_(4, 5); unsigned macro_isdef(const char *instance, const char *name); void macro_cat(struct vtclog *, struct vsb *, const char *, const char *); struct vsb *macro_expand(struct vtclog *vl, const char *text); struct vsb *macro_expandf(struct vtclog *vl, const char *, ...) v_printflike_(2, 3); typedef char* macro_f(int, char *const *, const char **); void extmacro_def(const char *name, macro_f *func, const char *fmt, ...) v_printflike_(3, 4); struct http; void cmd_stream(CMD_ARGS); void start_h2(struct http *hp); void stop_h2(struct http *hp); void b64_settings(const struct http *hp, const char *s); /* vtc_gzip.c */ void vtc_gunzip(struct http *, char *, long *); int vtc_gzip_cmd(struct http *hp, char * const *argv, char **body, long *bodylen); /* vtc_subr.c */ struct vsb *vtc_hex_to_bin(struct vtclog *vl, const char *arg); void vtc_expect(struct vtclog *, const char *, const char *, const char *, const char *, const char *); void vtc_wait4(struct vtclog *, long, int, int, int); void *vtc_record(struct vtclog *, int, struct vsb *); varnish-7.5.0/bin/varnishtest/vtc_barrier.c000066400000000000000000000277411457605730600210150ustar00rootroot00000000000000/*- * Copyright (c) 2005 Varnish Software AS * All rights reserved. * * Author: Dridi Boukelmoune * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include "config.h" #include #include #include #include #include #include #include /* for MUSL */ #include "vtc.h" #include "vtcp.h" #include "vtim.h" #include "vsa.h" enum barrier_e { BARRIER_NONE = 0, BARRIER_COND, BARRIER_SOCK, }; struct barrier { unsigned magic; #define BARRIER_MAGIC 0x7b54c275 char *name; VTAILQ_ENTRY(barrier) list; pthread_mutex_t mtx; pthread_cond_t cond; int waiters; int expected; int cyclic; enum barrier_e type; union { int cond_cycle; pthread_t sock_thread; }; }; static VTAILQ_HEAD(, barrier) barriers = VTAILQ_HEAD_INITIALIZER(barriers); static struct barrier * barrier_new(const char *name, struct vtclog *vl) { struct barrier *b; ALLOC_OBJ(b, BARRIER_MAGIC); AN(b); if (!pthread_equal(pthread_self(), vtc_thread)) vtc_fatal(vl, "Barrier %s can only be created on the top thread", name); REPLACE(b->name, name); PTOK(pthread_mutex_init(&b->mtx, NULL)); PTOK(pthread_cond_init(&b->cond, NULL)); b->waiters = 0; b->expected = 0; VTAILQ_INSERT_TAIL(&barriers, b, list); return (b); } /********************************************************************** * Init a barrier */ static void barrier_expect(struct barrier *b, const char *av, struct vtclog *vl) { unsigned expected; if (b->type != BARRIER_NONE) vtc_fatal(vl, "Barrier(%s) use error: already initialized", b->name); AZ(b->expected); AZ(b->waiters); expected = strtoul(av, NULL, 0); if (expected < 2) vtc_fatal(vl, "Barrier(%s) use error: wrong expectation (%u)", b->name, expected); b->expected = expected; } static void barrier_cond(struct barrier *b, const char *av, struct vtclog *vl) { CHECK_OBJ_NOTNULL(b, BARRIER_MAGIC); PTOK(pthread_mutex_lock(&b->mtx)); barrier_expect(b, av, vl); b->type = BARRIER_COND; PTOK(pthread_mutex_unlock(&b->mtx)); } static void * barrier_sock_thread(void *priv) { struct barrier *b; struct vtclog *vl; const char *err; char buf[vsa_suckaddr_len]; const struct suckaddr *sua; char abuf[VTCP_ADDRBUFSIZE], pbuf[VTCP_PORTBUFSIZE]; int i, sock, *conns; struct pollfd pfd[1]; CAST_OBJ_NOTNULL(b, priv, BARRIER_MAGIC); assert(b->type == BARRIER_SOCK); PTOK(pthread_mutex_lock(&b->mtx)); vl = vtc_logopen("%s", b->name); pthread_cleanup_push(vtc_logclose, vl); sock = VTCP_listen_on(default_listen_addr, NULL, b->expected, &err); if (sock < 0) { PTOK(pthread_cond_signal(&b->cond)); PTOK(pthread_mutex_unlock(&b->mtx)); vtc_fatal(vl, "Barrier(%s) %s fails: %s (errno=%d)", b->name, err, strerror(errno), errno); } assert(sock > 0); VTCP_nonblocking(sock); sua = VSA_getsockname(sock, buf, sizeof buf); AN(sua); VTCP_name(sua, abuf, sizeof abuf, pbuf, sizeof pbuf); macro_def(vl, b->name, "addr", "%s", abuf); macro_def(vl, b->name, "port", "%s", pbuf); if (VSA_Get_Proto(sua) == AF_INET) macro_def(vl, b->name, "sock", "%s:%s", abuf, pbuf); else macro_def(vl, b->name, "sock", "[%s]:%s", abuf, pbuf); PTOK(pthread_cond_signal(&b->cond)); PTOK(pthread_mutex_unlock(&b->mtx)); conns = calloc(b->expected, sizeof *conns); AN(conns); while (!vtc_stop && !vtc_error) { pfd[0].fd = sock; pfd[0].events = POLLIN; i = poll(pfd, 1, 100); if (i == 0) continue; if (i < 0) { if (errno == EINTR) continue; closefd(&sock); vtc_fatal(vl, "Barrier(%s) select fails: %s (errno=%d)", b->name, strerror(errno), errno); } assert(i == 1); assert(b->waiters <= b->expected); if (b->waiters == b->expected) vtc_fatal(vl, "Barrier(%s) use error: " "more waiters than the %u expected", b->name, b->expected); i = accept(sock, NULL, NULL); if (i < 0) { closefd(&sock); vtc_fatal(vl, "Barrier(%s) accept fails: %s (errno=%d)", b->name, strerror(errno), 
errno); } /* NB. We don't keep track of the established connections, only * that connections were made to the barrier's socket. */ conns[b->waiters] = i; if (++b->waiters < b->expected) { vtc_log(vl, 4, "Barrier(%s) wait %u of %u", b->name, b->waiters, b->expected); continue; } vtc_log(vl, 4, "Barrier(%s) wake %u", b->name, b->expected); for (i = 0; i < b->expected; i++) closefd(&conns[i]); if (b->cyclic) b->waiters = 0; else break; } if (b->waiters % b->expected > 0) { /* wake up outstanding waiters */ for (i = 0; i < b->waiters; i++) closefd(&conns[i]); if (!vtc_error) vtc_fatal(vl, "Barrier(%s) has %u outstanding waiters", b->name, b->waiters); } macro_undef(vl, b->name, "addr"); macro_undef(vl, b->name, "port"); macro_undef(vl, b->name, "sock"); closefd(&sock); free(conns); pthread_cleanup_pop(0); vtc_logclose(vl); return (NULL); } static void barrier_sock(struct barrier *b, const char *av, struct vtclog *vl) { CHECK_OBJ_NOTNULL(b, BARRIER_MAGIC); PTOK(pthread_mutex_lock(&b->mtx)); barrier_expect(b, av, vl); b->type = BARRIER_SOCK; /* NB. We can use the BARRIER_COND's pthread_cond_t to wait until the * socket is ready for convenience. */ PTOK(pthread_create(&b->sock_thread, NULL, barrier_sock_thread, b)); PTOK(pthread_cond_wait(&b->cond, &b->mtx)); PTOK(pthread_mutex_unlock(&b->mtx)); } static void barrier_cyclic(struct barrier *b, struct vtclog *vl) { enum barrier_e t; int w; CHECK_OBJ_NOTNULL(b, BARRIER_MAGIC); PTOK(pthread_mutex_lock(&b->mtx)); t = b->type; w = b->waiters; PTOK(pthread_mutex_unlock(&b->mtx)); if (t == BARRIER_NONE) vtc_fatal(vl, "Barrier(%s) use error: not initialized", b->name); if (w != 0) vtc_fatal(vl, "Barrier(%s) use error: already in use", b->name); PTOK(pthread_mutex_lock(&b->mtx)); b->cyclic = 1; PTOK(pthread_mutex_unlock(&b->mtx)); } /********************************************************************** * Sync a barrier */ static void barrier_cond_sync(struct barrier *b, struct vtclog *vl) { struct timespec ts; int r, w, c; CHECK_OBJ_NOTNULL(b, BARRIER_MAGIC); assert(b->type == BARRIER_COND); PTOK(pthread_mutex_lock(&b->mtx)); w = b->waiters; assert(w <= b->expected); if (w == b->expected) w = -1; else b->waiters = ++w; c = b->cond_cycle; PTOK(pthread_mutex_unlock(&b->mtx)); if (w < 0) vtc_fatal(vl, "Barrier(%s) use error: more waiters than the %u expected", b->name, b->expected); PTOK(pthread_mutex_lock(&b->mtx)); if (w == b->expected) { vtc_log(vl, 4, "Barrier(%s) wake %u", b->name, b->expected); b->cond_cycle++; if (b->cyclic) b->waiters = 0; PTOK(pthread_cond_broadcast(&b->cond)); } else { vtc_log(vl, 4, "Barrier(%s) wait %u of %u", b->name, b->waiters, b->expected); do { ts = VTIM_timespec(VTIM_real() + .1); r = pthread_cond_timedwait(&b->cond, &b->mtx, &ts); assert(r == 0 || r == ETIMEDOUT); } while (!vtc_stop && !vtc_error && r == ETIMEDOUT && c == b->cond_cycle); } PTOK(pthread_mutex_unlock(&b->mtx)); } static void barrier_sock_sync(const struct barrier *b, struct vtclog *vl) { struct vsb *vsb; const char *err; char buf[32]; int i, sock; ssize_t sz; CHECK_OBJ_NOTNULL(b, BARRIER_MAGIC); assert(b->type == BARRIER_SOCK); vsb = macro_expandf(vl, "${%s_sock}", b->name); vtc_log(vl, 4, "Barrier(%s) sync with socket", b->name); sock = VTCP_open(VSB_data(vsb), NULL, 0., &err); if (sock < 0) vtc_fatal(vl, "Barrier(%s) connection failed: %s", b->name, err); VSB_destroy(&vsb); sz = read(sock, buf, sizeof buf); /* XXX loop with timeout? 
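* (A zero-length read is the expected way out: the barrier thread
* signals the sync point by closing every accepted connection once all
* expected parties have connected, so this read blocks until the
* barrier is reached.)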
*/ i = errno; closefd(&sock); if (sz < 0) vtc_fatal(vl, "Barrier(%s) read failed: %s (errno=%d)", b->name, strerror(i), i); if (sz > 0) vtc_fatal(vl, "Barrier(%s) unexpected data (%zdB)", b->name, sz); } static void barrier_sync(struct barrier *b, struct vtclog *vl) { CHECK_OBJ_NOTNULL(b, BARRIER_MAGIC); switch (b->type) { case BARRIER_NONE: vtc_fatal(vl, "Barrier(%s) use error: not initialized", b->name); break; case BARRIER_COND: barrier_cond_sync(b, vl); break; case BARRIER_SOCK: barrier_sock_sync(b, vl); break; default: WRONG("Wrong barrier type"); } } /* SECTION: barrier barrier * * NOTE: This command is available everywhere commands are given. * * Barriers allows you to synchronize different threads to make sure events * occur in the right order. It's even possible to use them in VCL. * * First, it's necessary to declare the barrier:: * * barrier bNAME TYPE NUMBER [-cyclic] * * With the arguments being: * * bNAME * this is the name of the barrier, used to identify it when you'll * create sync points. It must start with 'b'. * * TYPE * it can be "cond" (mutex) or "sock" (socket) and sets internal * behavior. If you don't need VCL synchronization, use cond. * * NUMBER * number of sync point needed to go through the barrier. * * \-cyclic * if present, the barrier will reset itself and be ready for another * round once gotten through. * * Then, to add a sync point:: * * barrier bNAME sync * * This will block the parent thread until the number of sync points for bNAME * reaches the NUMBER given in the barrier declaration. * * If you wish to synchronize the VCL, you need to declare a "sock" barrier. * This will emit a macro definition named "bNAME_sock" that you can use in * VCL (after importing the vtc vmod):: * * vtc.barrier_sync("${bNAME_sock}"); * * This function returns 0 if everything went well and is the equivalent of * ``barrier bNAME sync`` at the VTC top-level. * * */ void cmd_barrier(CMD_ARGS) { struct barrier *b, *b2; int r; (void)priv; if (av == NULL) { /* Reset and free */ VTAILQ_FOREACH_SAFE(b, &barriers, list, b2) { r = pthread_mutex_trylock(&b->mtx); assert(r == 0 || r == EBUSY); switch (b->type) { case BARRIER_COND: break; case BARRIER_SOCK: PTOK(pthread_join(b->sock_thread, NULL)); break; default: WRONG("Wrong barrier type"); } if (r == 0) PTOK(pthread_mutex_unlock(&b->mtx)); } return; } AZ(strcmp(av[0], "barrier")); av++; VTC_CHECK_NAME(vl, av[0], "Barrier", 'b'); VTAILQ_FOREACH(b, &barriers, list) if (!strcmp(b->name, av[0])) break; if (b == NULL) b = barrier_new(av[0], vl); av++; for (; *av != NULL; av++) { if (!strcmp(*av, "cond")) { av++; AN(*av); barrier_cond(b, *av, vl); continue; } if (!strcmp(*av, "sock")) { av++; AN(*av); barrier_sock(b, *av, vl); continue; } if (!strcmp(*av, "sync")) { barrier_sync(b, vl); continue; } if (!strcmp(*av, "-cyclic")) { barrier_cyclic(b, vl); continue; } vtc_fatal(vl, "Unknown barrier argument: %s", *av); } } varnish-7.5.0/bin/varnishtest/vtc_client.c000066400000000000000000000205451457605730600206400ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include #include #include #include #include #include #include #include #include "vtc.h" #include "vsa.h" #include "vss.h" #include "vtcp.h" #include "vus.h" struct client { unsigned magic; #define CLIENT_MAGIC 0x6242397c char *name; struct vtclog *vl; VTAILQ_ENTRY(client) list; struct vtc_sess *vsp; char *spec; char connect[256]; char *addr; char *proxy_spec; int proxy_version; unsigned running; pthread_t tp; }; static VTAILQ_HEAD(, client) clients = VTAILQ_HEAD_INITIALIZER(clients); /********************************************************************** * Send the proxy header */ static void client_proxy(struct vtclog *vl, int fd, int version, const char *spec) { const struct suckaddr *sac, *sas; char *p, *p2; p = strdup(spec); AN(p); p2 = strchr(p, ' '); AN(p2); *p2++ = '\0'; sac = VSS_ResolveOne(NULL, p, NULL, 0, SOCK_STREAM, AI_PASSIVE); if (sac == NULL) vtc_fatal(vl, "Could not resolve client address"); sas = VSS_ResolveOne(NULL, p2, NULL, 0, SOCK_STREAM, AI_PASSIVE); if (sas == NULL) vtc_fatal(vl, "Could not resolve server address"); if (vtc_send_proxy(fd, version, sac, sas)) vtc_fatal(vl, "Write failed: %s", strerror(errno)); free(p); VSA_free(&sac); VSA_free(&sas); } /********************************************************************** * Socket connect. */ static int client_tcp_connect(struct vtclog *vl, const char *addr, double tmo, const char **errp) { int fd; char mabuf[VTCP_ADDRBUFSIZE], mpbuf[VTCP_PORTBUFSIZE]; AN(addr); AN(errp); fd = VTCP_open(addr, NULL, tmo, errp); if (fd < 0) return (fd); VTCP_myname(fd, mabuf, sizeof mabuf, mpbuf, sizeof mpbuf); vtc_log(vl, 3, "connected fd %d from %s %s to %s", fd, mabuf, mpbuf, addr); return (fd); } /* cf. 
VTCP_Open() */ static int v_matchproto_(vus_resolved_f) uds_open(void *priv, const struct sockaddr_un *uds) { double *p; int s, i, tmo; struct pollfd fds[1]; socklen_t sl; sl = VUS_socklen(uds); AN(priv); AN(uds); p = priv; assert(*p > 0.); tmo = (int)(*p * 1e3); s = socket(uds->sun_family, SOCK_STREAM, 0); if (s < 0) return (s); VTCP_nonblocking(s); i = connect(s, (const void*)uds, sl); if (i == 0) return (s); if (errno != EINPROGRESS) { closefd(&s); return (-1); } fds[0].fd = s; fds[0].events = POLLWRNORM; fds[0].revents = 0; i = poll(fds, 1, tmo); if (i == 0) { closefd(&s); errno = ETIMEDOUT; return (-1); } return (VTCP_connected(s)); } static int client_uds_connect(struct vtclog *vl, const char *path, double tmo, const char **errp) { int fd; assert(tmo >= 0); errno = 0; fd = VUS_resolver(path, uds_open, &tmo, errp); if (fd < 0) { *errp = strerror(errno); return (fd); } vtc_log(vl, 3, "connected fd %d to %s", fd, path); return (fd); } static int client_connect(struct vtclog *vl, struct client *c) { const char *err; int fd; vtc_log(vl, 3, "Connect to %s", c->addr); if (VUS_is(c->addr)) fd = client_uds_connect(vl, c->addr, 10., &err); else fd = client_tcp_connect(vl, c->addr, 10., &err); if (fd < 0) vtc_fatal(c->vl, "Failed to open %s: %s", c->addr, err); /* VTCP_blocking does its own checks, trust it */ VTCP_blocking(fd); if (c->proxy_spec != NULL) client_proxy(vl, fd, c->proxy_version, c->proxy_spec); return (fd); } /********************************************************************** * Client thread */ static int client_conn(void *priv, struct vtclog *vl) { struct client *c; CAST_OBJ_NOTNULL(c, priv, CLIENT_MAGIC); return (client_connect(vl, c)); } static void client_disc(void *priv, struct vtclog *vl, int *fdp) { (void)priv; vtc_log(vl, 3, "closing fd %d", *fdp); VTCP_close(fdp); } /********************************************************************** * Allocate and initialize a client */ static struct client * client_new(const char *name) { struct client *c; ALLOC_OBJ(c, CLIENT_MAGIC); AN(c); REPLACE(c->name, name); c->vl = vtc_logopen("%s", name); AN(c->vl); c->vsp = Sess_New(c->vl, name); AN(c->vsp); bprintf(c->connect, "%s", "${v1_sock}"); VTAILQ_INSERT_TAIL(&clients, c, list); return (c); } /********************************************************************** * Clean up client */ static void client_delete(struct client *c) { CHECK_OBJ_NOTNULL(c, CLIENT_MAGIC); Sess_Destroy(&c->vsp); vtc_logclose(c->vl); free(c->spec); free(c->name); free(c->addr); free(c->proxy_spec); /* XXX: MEMLEAK (?)*/ FREE_OBJ(c); } /********************************************************************** * Start the client thread */ static void client_start(struct client *c) { struct vsb *vsb; CHECK_OBJ_NOTNULL(c, CLIENT_MAGIC); vtc_log(c->vl, 2, "Starting client"); c->running = 1; vsb = macro_expand(c->vl, c->connect); AN(vsb); REPLACE(c->addr, VSB_data(vsb)); VSB_destroy(&vsb); c->tp = Sess_Start_Thread( c, c->vsp, client_conn, client_disc, c->addr, NULL, c->spec ); } /********************************************************************** * Wait for client thread to stop */ static void client_wait(struct client *c) { void *res; CHECK_OBJ_NOTNULL(c, CLIENT_MAGIC); vtc_log(c->vl, 2, "Waiting for client"); PTOK(pthread_join(c->tp, &res)); if (res != NULL) vtc_fatal(c->vl, "Client returned \"%s\"", (char *)res); c->tp = 0; c->running = 0; } /********************************************************************** * Run the client thread */ static void client_run(struct client *c) { client_start(c); 
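	/* "-run" is shorthand for "-start" immediately followed by "-wait":
	 * the client thread executes its spec once, and the calling thread
	 * blocks in client_wait() until it finishes (a failure aborts the
	 * test). */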
client_wait(c); } /********************************************************************** * Client command dispatch */ void cmd_client(CMD_ARGS) { struct client *c, *c2; (void)priv; if (av == NULL) { /* Reset and free */ VTAILQ_FOREACH_SAFE(c, &clients, list, c2) { VTAILQ_REMOVE(&clients, c, list); if (c->tp != 0) client_wait(c); client_delete(c); } return; } AZ(strcmp(av[0], "client")); av++; VTC_CHECK_NAME(vl, av[0], "Client", 'c'); VTAILQ_FOREACH(c, &clients, list) if (!strcmp(c->name, av[0])) break; if (c == NULL) c = client_new(av[0]); av++; for (; *av != NULL; av++) { if (vtc_error) break; if (!strcmp(*av, "-wait")) { client_wait(c); continue; } /* Don't muck about with a running client */ if (c->running) client_wait(c); AZ(c->running); if (Sess_GetOpt(c->vsp, &av)) continue; if (!strcmp(*av, "-connect")) { bprintf(c->connect, "%s", av[1]); av++; continue; } if (!strcmp(*av, "-proxy1")) { REPLACE(c->proxy_spec, av[1]); c->proxy_version = 1; av++; continue; } if (!strcmp(*av, "-proxy2")) { REPLACE(c->proxy_spec, av[1]); c->proxy_version = 2; av++; continue; } if (!strcmp(*av, "-start")) { client_start(c); continue; } if (!strcmp(*av, "-run")) { client_run(c); continue; } if (**av == '-') vtc_fatal(c->vl, "Unknown client argument: %s", *av); REPLACE(c->spec, *av); } } varnish-7.5.0/bin/varnishtest/vtc_gzip.c000066400000000000000000000150371457605730600203330ustar00rootroot00000000000000/*- * Copyright (c) 2008-2019 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include "config.h" #include #include #include #include "vtc.h" #include "vtc_http.h" #include "vgz.h" #ifdef VGZ_EXTENSIONS static void vtc_report_gz_bits(struct vtclog *vl, const z_stream *vz) { vtc_log(vl, 4, "startbit = %ju %ju/%ju", (uintmax_t)vz->start_bit, (uintmax_t)vz->start_bit >> 3, (uintmax_t)vz->start_bit & 7); vtc_log(vl, 4, "lastbit = %ju %ju/%ju", (uintmax_t)vz->last_bit, (uintmax_t)vz->last_bit >> 3, (uintmax_t)vz->last_bit & 7); vtc_log(vl, 4, "stopbit = %ju %ju/%ju", (uintmax_t)vz->stop_bit, (uintmax_t)vz->stop_bit >> 3, (uintmax_t)vz->stop_bit & 7); } #endif static size_t APOS(ssize_t sz) { assert(sz >= 0); return (sz); } /********************************************************************** * GUNZIPery */ static struct vsb * vtc_gunzip_vsb(struct vtclog *vl, int fatal, const struct vsb *vin) { z_stream vz; struct vsb *vout; int i; char buf[BUFSIZ]; memset(&vz, 0, sizeof vz); vout = VSB_new_auto(); AN(vout); vz.next_in = (void*)VSB_data(vin); vz.avail_in = APOS(VSB_len(vin)); assert(Z_OK == inflateInit2(&vz, 31)); do { vz.next_out = (void*)buf; vz.avail_out = sizeof buf; i = inflate(&vz, Z_FINISH); if (vz.avail_out != sizeof buf) VSB_bcat(vout, buf, sizeof buf - vz.avail_out); } while (i == Z_OK || i == Z_BUF_ERROR); if (i != Z_STREAM_END) vtc_log(vl, fatal, "Gunzip error = %d (%s) in:%jd out:%jd len:%zd", i, vz.msg, (intmax_t)vz.total_in, (intmax_t)vz.total_out, VSB_len(vin)); AZ(VSB_finish(vout)); #ifdef VGZ_EXTENSIONS vtc_report_gz_bits(vl, &vz); #endif assert(Z_OK == inflateEnd(&vz)); return (vout); } void vtc_gunzip(struct http *hp, char *body, long *bodylen) { struct vsb *vin, *vout; AN(body); if (body[0] != (char)0x1f || body[1] != (char)0x8b) vtc_log(hp->vl, hp->fatal, "Gunzip error: body lacks gzip magic"); vin = VSB_new_auto(); AN(vin); VSB_bcat(vin, body, *bodylen); AZ(VSB_finish(vin)); vout = vtc_gunzip_vsb(hp->vl, hp->fatal, vin); VSB_destroy(&vin); memcpy(body, VSB_data(vout), APOS(VSB_len(vout) + 1)); *bodylen = APOS(VSB_len(vout)); VSB_destroy(&vout); vtc_log(hp->vl, 3, "new bodylen %ld", *bodylen); vtc_dump(hp->vl, 4, "body", body, *bodylen); bprintf(hp->bodylen, "%ld", *bodylen); } /********************************************************************** * GZIPery */ static int vtc_gzip_chunk(z_stream *vz, struct vsb *vout, const char *in, size_t inlen, int flush) { int i; char buf[BUFSIZ]; vz->next_in = TRUST_ME(in); vz->avail_in = APOS(inlen); do { vz->next_out = (void*)buf; vz->avail_out = sizeof buf; i = deflate(vz, flush); if (vz->avail_out != sizeof buf) VSB_bcat(vout, buf, sizeof buf - vz->avail_out); } while (i == Z_OK || vz->avail_in > 0); vz->next_out = NULL; vz->avail_out = 0; vz->next_in = NULL; AZ(vz->avail_in); vz->avail_in = 0; return (i); } static void vtc_gzip(struct http *hp, const char *input, char **body, long *bodylen, int fragment) { struct vsb *vout; int i, res; size_t inlen = strlen(input); z_stream vz; memset(&vz, 0, sizeof vz); vout = VSB_new_auto(); AN(vout); assert(Z_OK == deflateInit2(&vz, hp->gziplevel, Z_DEFLATED, 31, 9, Z_DEFAULT_STRATEGY)); while (fragment && inlen > 3) { res = inlen / 3; i = vtc_gzip_chunk(&vz, vout, input, res, Z_BLOCK); if (i != Z_OK && i != Z_BUF_ERROR) { vtc_log(hp->vl, hp->fatal, "Gzip error = %d (%s) in:%jd out:%jd len:%zd", i, vz.msg, (intmax_t)vz.total_in, (intmax_t)vz.total_out, strlen(input)); } input += res; inlen -= res; } i = vtc_gzip_chunk(&vz, vout, input, inlen, Z_FINISH); if (i != Z_STREAM_END) { vtc_log(hp->vl, hp->fatal, "Gzip error = %d (%s) in:%jd out:%jd len:%zd", i, vz.msg, 
(intmax_t)vz.total_in, (intmax_t)vz.total_out, strlen(input)); } AZ(VSB_finish(vout)); #ifdef VGZ_EXTENSIONS res = vz.stop_bit & 7; vtc_report_gz_bits(hp->vl, &vz); #else res = 0; #endif assert(Z_OK == deflateEnd(&vz)); #ifdef VGZ_EXTENSIONS if (hp->gzipresidual >= 0 && hp->gzipresidual != res) vtc_log(hp->vl, hp->fatal, "Wrong gzip residual got %d wanted %d", res, hp->gzipresidual); #endif *body = malloc(APOS(VSB_len(vout) + 1)); AN(*body); memcpy(*body, VSB_data(vout), APOS(VSB_len(vout) + 1)); *bodylen = APOS(VSB_len(vout)); VSB_destroy(&vout); vtc_log(hp->vl, 3, "new bodylen %ld", *bodylen); vtc_dump(hp->vl, 4, "body", *body, *bodylen); bprintf(hp->bodylen, "%ld", *bodylen); } int vtc_gzip_cmd(struct http *hp, char * const *av, char **body, long *bodylen) { char *b; AN(hp); AN(av); AN(body); AN(bodylen); if (!strcmp(*av, "-gzipresidual")) { hp->gzipresidual = strtoul(av[1], NULL, 0); return (1); } if (!strcmp(*av, "-gziplevel")) { hp->gziplevel = strtoul(av[1], NULL, 0); return (1); } if (!strcmp(*av, "-gzipbody")) { if (*body != NULL) free(*body); *body = NULL; vtc_gzip(hp, av[1], body, bodylen, 0); AN(*body); return (2); } if (!strcmp(*av, "-gziplen")) { if (*body != NULL) free(*body); *body = NULL; b = synth_body(av[1], 1); vtc_gzip(hp, b, body, bodylen, 1); AN(*body); free(b); return (2); } return (0); } varnish-7.5.0/bin/varnishtest/vtc_h2_enctbl.h000066400000000000000000000210511457605730600212200ustar00rootroot00000000000000/*- * For Copyright information see RFC7541 [BSD3] */ HPACK(0, 0x1ff8, 13) HPACK(1, 0x7fffd8, 23) HPACK(2, 0xfffffe2, 28) HPACK(3, 0xfffffe3, 28) HPACK(4, 0xfffffe4, 28) HPACK(5, 0xfffffe5, 28) HPACK(6, 0xfffffe6, 28) HPACK(7, 0xfffffe7, 28) HPACK(8, 0xfffffe8, 28) HPACK(9, 0xffffea, 24) HPACK(10, 0x3ffffffc, 30) HPACK(11, 0xfffffe9, 28) HPACK(12, 0xfffffea, 28) HPACK(13, 0x3ffffffd, 30) HPACK(14, 0xfffffeb, 28) HPACK(15, 0xfffffec, 28) HPACK(16, 0xfffffed, 28) HPACK(17, 0xfffffee, 28) HPACK(18, 0xfffffef, 28) HPACK(19, 0xffffff0, 28) HPACK(20, 0xffffff1, 28) HPACK(21, 0xffffff2, 28) HPACK(22, 0x3ffffffe, 30) HPACK(23, 0xffffff3, 28) HPACK(24, 0xffffff4, 28) HPACK(25, 0xffffff5, 28) HPACK(26, 0xffffff6, 28) HPACK(27, 0xffffff7, 28) HPACK(28, 0xffffff8, 28) HPACK(29, 0xffffff9, 28) HPACK(30, 0xffffffa, 28) HPACK(31, 0xffffffb, 28) HPACK(32, 0x14, 6) /* ' ' */ HPACK(33, 0x3f8, 10) /* '!' */ HPACK(34, 0x3f9, 10) /* '"' */ HPACK(35, 0xffa, 12) /* '#' */ HPACK(36, 0x1ff9, 13) /* '$' */ HPACK(37, 0x15, 6) /* '%' */ HPACK(38, 0xf8, 8) /* '&' */ HPACK(39, 0x7fa, 11) /* ''' */ HPACK(40, 0x3fa, 10) /* '(' */ HPACK(41, 0x3fb, 10) /* ')' */ HPACK(42, 0xf9, 8) /* '*' */ HPACK(43, 0x7fb, 11) /* '+' */ HPACK(44, 0xfa, 8) /* ',' */ HPACK(45, 0x16, 6) /* '-' */ HPACK(46, 0x17, 6) /* '.' */ HPACK(47, 0x18, 6) /* '/' */ HPACK(48, 0x0, 5) /* '0' */ HPACK(49, 0x1, 5) /* '1' */ HPACK(50, 0x2, 5) /* '2' */ HPACK(51, 0x19, 6) /* '3' */ HPACK(52, 0x1a, 6) /* '4' */ HPACK(53, 0x1b, 6) /* '5' */ HPACK(54, 0x1c, 6) /* '6' */ HPACK(55, 0x1d, 6) /* '7' */ HPACK(56, 0x1e, 6) /* '8' */ HPACK(57, 0x1f, 6) /* '9' */ HPACK(58, 0x5c, 7) /* ':' */ HPACK(59, 0xfb, 8) /* ';' */ HPACK(60, 0x7ffc, 15) /* '<' */ HPACK(61, 0x20, 6) /* '=' */ HPACK(62, 0xffb, 12) /* '>' */ HPACK(63, 0x3fc, 10) /* '?' 
*/ HPACK(64, 0x1ffa, 13) /* '@' */ HPACK(65, 0x21, 6) /* 'A' */ HPACK(66, 0x5d, 7) /* 'B' */ HPACK(67, 0x5e, 7) /* 'C' */ HPACK(68, 0x5f, 7) /* 'D' */ HPACK(69, 0x60, 7) /* 'E' */ HPACK(70, 0x61, 7) /* 'F' */ HPACK(71, 0x62, 7) /* 'G' */ HPACK(72, 0x63, 7) /* 'H' */ HPACK(73, 0x64, 7) /* 'I' */ HPACK(74, 0x65, 7) /* 'J' */ HPACK(75, 0x66, 7) /* 'K' */ HPACK(76, 0x67, 7) /* 'L' */ HPACK(77, 0x68, 7) /* 'M' */ HPACK(78, 0x69, 7) /* 'N' */ HPACK(79, 0x6a, 7) /* 'O' */ HPACK(80, 0x6b, 7) /* 'P' */ HPACK(81, 0x6c, 7) /* 'Q' */ HPACK(82, 0x6d, 7) /* 'R' */ HPACK(83, 0x6e, 7) /* 'S' */ HPACK(84, 0x6f, 7) /* 'T' */ HPACK(85, 0x70, 7) /* 'U' */ HPACK(86, 0x71, 7) /* 'V' */ HPACK(87, 0x72, 7) /* 'W' */ HPACK(88, 0xfc, 8) /* 'X' */ HPACK(89, 0x73, 7) /* 'Y' */ HPACK(90, 0xfd, 8) /* 'Z' */ HPACK(91, 0x1ffb, 13) /* '[' */ HPACK(92, 0x7fff0, 19) /* '\' */ HPACK(93, 0x1ffc, 13) /* ']' */ HPACK(94, 0x3ffc, 14) /* '^' */ HPACK(95, 0x22, 6) /* '_' */ HPACK(96, 0x7ffd, 15) /* '`' */ HPACK(97, 0x3, 5) /* 'a' */ HPACK(98, 0x23, 6) /* 'b' */ HPACK(99, 0x4, 5) /* 'c' */ HPACK(100, 0x24, 6) /* 'd' */ HPACK(101, 0x5, 5) /* 'e' */ HPACK(102, 0x25, 6) /* 'f' */ HPACK(103, 0x26, 6) /* 'g' */ HPACK(104, 0x27, 6) /* 'h' */ HPACK(105, 0x6, 5) /* 'i' */ HPACK(106, 0x74, 7) /* 'j' */ HPACK(107, 0x75, 7) /* 'k' */ HPACK(108, 0x28, 6) /* 'l' */ HPACK(109, 0x29, 6) /* 'm' */ HPACK(110, 0x2a, 6) /* 'n' */ HPACK(111, 0x7, 5) /* 'o' */ HPACK(112, 0x2b, 6) /* 'p' */ HPACK(113, 0x76, 7) /* 'q' */ HPACK(114, 0x2c, 6) /* 'r' */ HPACK(115, 0x8, 5) /* 's' */ HPACK(116, 0x9, 5) /* 't' */ HPACK(117, 0x2d, 6) /* 'u' */ HPACK(118, 0x77, 7) /* 'v' */ HPACK(119, 0x78, 7) /* 'w' */ HPACK(120, 0x79, 7) /* 'x' */ HPACK(121, 0x7a, 7) /* 'y' */ HPACK(122, 0x7b, 7) /* 'z' */ HPACK(123, 0x7ffe, 15) /* '{' */ HPACK(124, 0x7fc, 11) /* '|' */ HPACK(125, 0x3ffd, 14) /* '}' */ HPACK(126, 0x1ffd, 13) /* '~' */ HPACK(127, 0xffffffc, 28) HPACK(128, 0xfffe6, 20) HPACK(129, 0x3fffd2, 22) HPACK(130, 0xfffe7, 20) HPACK(131, 0xfffe8, 20) HPACK(132, 0x3fffd3, 22) HPACK(133, 0x3fffd4, 22) HPACK(134, 0x3fffd5, 22) HPACK(135, 0x7fffd9, 23) HPACK(136, 0x3fffd6, 22) HPACK(137, 0x7fffda, 23) HPACK(138, 0x7fffdb, 23) HPACK(139, 0x7fffdc, 23) HPACK(140, 0x7fffdd, 23) HPACK(141, 0x7fffde, 23) HPACK(142, 0xffffeb, 24) HPACK(143, 0x7fffdf, 23) HPACK(144, 0xffffec, 24) HPACK(145, 0xffffed, 24) HPACK(146, 0x3fffd7, 22) HPACK(147, 0x7fffe0, 23) HPACK(148, 0xffffee, 24) HPACK(149, 0x7fffe1, 23) HPACK(150, 0x7fffe2, 23) HPACK(151, 0x7fffe3, 23) HPACK(152, 0x7fffe4, 23) HPACK(153, 0x1fffdc, 21) HPACK(154, 0x3fffd8, 22) HPACK(155, 0x7fffe5, 23) HPACK(156, 0x3fffd9, 22) HPACK(157, 0x7fffe6, 23) HPACK(158, 0x7fffe7, 23) HPACK(159, 0xffffef, 24) HPACK(160, 0x3fffda, 22) HPACK(161, 0x1fffdd, 21) HPACK(162, 0xfffe9, 20) HPACK(163, 0x3fffdb, 22) HPACK(164, 0x3fffdc, 22) HPACK(165, 0x7fffe8, 23) HPACK(166, 0x7fffe9, 23) HPACK(167, 0x1fffde, 21) HPACK(168, 0x7fffea, 23) HPACK(169, 0x3fffdd, 22) HPACK(170, 0x3fffde, 22) HPACK(171, 0xfffff0, 24) HPACK(172, 0x1fffdf, 21) HPACK(173, 0x3fffdf, 22) HPACK(174, 0x7fffeb, 23) HPACK(175, 0x7fffec, 23) HPACK(176, 0x1fffe0, 21) HPACK(177, 0x1fffe1, 21) HPACK(178, 0x3fffe0, 22) HPACK(179, 0x1fffe2, 21) HPACK(180, 0x7fffed, 23) HPACK(181, 0x3fffe1, 22) HPACK(182, 0x7fffee, 23) HPACK(183, 0x7fffef, 23) HPACK(184, 0xfffea, 20) HPACK(185, 0x3fffe2, 22) HPACK(186, 0x3fffe3, 22) HPACK(187, 0x3fffe4, 22) HPACK(188, 0x7ffff0, 23) HPACK(189, 0x3fffe5, 22) HPACK(190, 0x3fffe6, 22) HPACK(191, 0x7ffff1, 23) HPACK(192, 0x3ffffe0, 26) HPACK(193, 0x3ffffe1, 26) 
HPACK(194, 0xfffeb, 20) HPACK(195, 0x7fff1, 19) HPACK(196, 0x3fffe7, 22) HPACK(197, 0x7ffff2, 23) HPACK(198, 0x3fffe8, 22) HPACK(199, 0x1ffffec, 25) HPACK(200, 0x3ffffe2, 26) HPACK(201, 0x3ffffe3, 26) HPACK(202, 0x3ffffe4, 26) HPACK(203, 0x7ffffde, 27) HPACK(204, 0x7ffffdf, 27) HPACK(205, 0x3ffffe5, 26) HPACK(206, 0xfffff1, 24) HPACK(207, 0x1ffffed, 25) HPACK(208, 0x7fff2, 19) HPACK(209, 0x1fffe3, 21) HPACK(210, 0x3ffffe6, 26) HPACK(211, 0x7ffffe0, 27) HPACK(212, 0x7ffffe1, 27) HPACK(213, 0x3ffffe7, 26) HPACK(214, 0x7ffffe2, 27) HPACK(215, 0xfffff2, 24) HPACK(216, 0x1fffe4, 21) HPACK(217, 0x1fffe5, 21) HPACK(218, 0x3ffffe8, 26) HPACK(219, 0x3ffffe9, 26) HPACK(220, 0xffffffd, 28) HPACK(221, 0x7ffffe3, 27) HPACK(222, 0x7ffffe4, 27) HPACK(223, 0x7ffffe5, 27) HPACK(224, 0xfffec, 20) HPACK(225, 0xfffff3, 24) HPACK(226, 0xfffed, 20) HPACK(227, 0x1fffe6, 21) HPACK(228, 0x3fffe9, 22) HPACK(229, 0x1fffe7, 21) HPACK(230, 0x1fffe8, 21) HPACK(231, 0x7ffff3, 23) HPACK(232, 0x3fffea, 22) HPACK(233, 0x3fffeb, 22) HPACK(234, 0x1ffffee, 25) HPACK(235, 0x1ffffef, 25) HPACK(236, 0xfffff4, 24) HPACK(237, 0xfffff5, 24) HPACK(238, 0x3ffffea, 26) HPACK(239, 0x7ffff4, 23) HPACK(240, 0x3ffffeb, 26) HPACK(241, 0x7ffffe6, 27) HPACK(242, 0x3ffffec, 26) HPACK(243, 0x3ffffed, 26) HPACK(244, 0x7ffffe7, 27) HPACK(245, 0x7ffffe8, 27) HPACK(246, 0x7ffffe9, 27) HPACK(247, 0x7ffffea, 27) HPACK(248, 0x7ffffeb, 27) HPACK(249, 0xffffffe, 28) HPACK(250, 0x7ffffec, 27) HPACK(251, 0x7ffffed, 27) HPACK(252, 0x7ffffee, 27) HPACK(253, 0x7ffffef, 27) HPACK(254, 0x7fffff0, 27) HPACK(255, 0x3ffffee, 26) HPACK(0, 0x3fffffff, 30) varnish-7.5.0/bin/varnishtest/vtc_h2_hpack.c000066400000000000000000000224471457605730600210440ustar00rootroot00000000000000/*- * Copyright (c) 2008-2016 Varnish Software AS * All rights reserved. * * Author: Guillaume Quintard * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include #include #include #include #include "vdef.h" #include "vas.h" #include "vqueue.h" #include "hpack.h" #include "vtc_h2_priv.h" struct symbol { uint32_t val; uint8_t size; }; static const struct symbol coding_table[] = { #define HPACK(i, v, l) {v, l}, #include "vtc_h2_enctbl.h" #undef HPACK {0, 0} }; #include "vtc_h2_dectbl.h" #define MASK(pack, n) (pack >> (64 - n)) static int huff_decode(char *str, int nm, struct hpk_iter *iter, int ilen) { int l = 0; uint64_t pack = 0; unsigned pl = 0; /* pack length*/ struct stbl *tbl = &tbl_0; struct ssym *sym; (void)nm; while (ilen > 0 || pl != 0) { /* make sure we have enough data*/ if (pl < tbl->msk) { if (ilen == 0) { if (pl == 0 || (MASK(pack, pl) == (unsigned)((1U << pl) - 1U))) { assert(tbl == &tbl_0); return (l); } } /* fit as many bytes as we can in pack */ while (pl <= 56 && ilen > 0) { pack |= (uint64_t)(*iter->buf & 0xff) << (56 - pl); pl += 8; iter->buf++; ilen--; } } assert(tbl); assert(tbl->msk); sym = &tbl->syms[MASK(pack, tbl->msk)]; assert(sym->csm <= tbl->msk); if (sym->csm == 0 || pl < sym->csm) return (0); assert(sym->csm <= 8); pack <<= sym->csm; assert(sym->csm <= pl); pl -= sym->csm; if (sym->nxt) { tbl = sym->nxt; continue; } str[l++] = sym->chr; tbl = &tbl_0; } return (l); } /* inspired from Dridi Boukelmoune's cashpack. */ static enum hpk_result huff_encode(struct hpk_iter *iter, const char *str, int len) { uint64_t pack = 0; int pl = 0; /* pack length*/ uint32_t v; uint8_t s; assert(iter->buf < iter->end); while (len--) { v = coding_table[(uint8_t)*str].val; s = coding_table[(uint8_t)*str].size; pl += s; pack |= (uint64_t)v << (64 - pl); while (pl >= 8) { if (iter->buf == iter->end) return (hpk_done); *iter->buf = (char)(pack >> 56); iter->buf++; pack <<= 8; pl -= 8; } str++; } /* padding */ if (pl) { assert(pl < 8); if (iter->buf == iter->end) return (hpk_done); pl += 8; pack |= (uint64_t)0xff << (64 - pl); *iter->buf = (char)(pack >> 56); iter->buf++; } return (hpk_more); } static int huff_simulate(const char *str, int ilen, int huff) { int olen = 0; if (!huff || !ilen) return (ilen); while (ilen--) { olen += coding_table[(unsigned char)*str].size; str++; } return ((olen + 7) / 8); } static enum hpk_result num_decode(uint32_t *result, struct hpk_iter *iter, uint8_t prefix) { uint8_t shift = 0; assert(iter->buf < iter->end); assert(prefix); assert(prefix <= 8); *result = 0; *result = *iter->buf & (0xff >> (8-prefix)); if (*result < (1U << prefix) - 1U) { iter->buf++; return (ITER_DONE(iter)); } do { iter->buf++; if (iter->end == iter->buf) return (hpk_err); /* check for overflow */ if ((UINT32_MAX - *result) >> shift < (*iter->buf & 0x7f)) return (hpk_err); *result += (uint32_t)(*iter->buf & 0x7f) << shift; shift += 7; } while (*iter->buf & 0x80); iter->buf++; return (ITER_DONE(iter)); } static enum hpk_result num_encode(struct hpk_iter *iter, uint8_t prefix, uint32_t num) { assert(prefix); assert(prefix <= 8); assert(iter->buf < iter->end); uint8_t pmax = (1U << prefix) - 1U; *iter->buf &= 0xffU << prefix; if (num <= pmax) { *iter->buf++ |= num; return (ITER_DONE(iter)); } else if (iter->end - iter->buf < 2) return (hpk_err); iter->buf[0] |= pmax; num -= pmax; do { iter->buf++; if (iter->end == iter->buf) return (hpk_err); *iter->buf = num % 128; *iter->buf |= 0x80; num /= 128; } while (num); *iter->buf++ &= 127; return (ITER_DONE(iter)); } static enum hpk_result str_encode(struct hpk_iter *iter, const struct hpk_txt *t) { int slen = huff_simulate(t->ptr, t->len, t->huff); assert(iter->buf < iter->end); if 
(t->huff) *iter->buf = 0x80; else *iter->buf = 0; if (hpk_err == num_encode(iter, 7, slen)) return (hpk_err); if (slen > iter->end - iter->buf) return (hpk_err); if (t->huff) { return (huff_encode(iter, t->ptr, t->len)); } else { memcpy(iter->buf, t->ptr, slen); iter->buf += slen; return (ITER_DONE(iter)); } } static enum hpk_result str_decode(struct hpk_iter *iter, struct hpk_txt *t) { uint32_t num; int huff; assert(iter->buf < iter->end); huff = (*iter->buf & 0x80); if (hpk_more != num_decode(&num, iter, 7)) return (hpk_err); assert(iter->buf < iter->end); if (num > (unsigned)(iter->end - iter->buf)) return (hpk_err); if (huff) { /*Huffman encoding */ t->ptr = malloc((num * 8L) / 5L + 1L); AN(t->ptr); num = huff_decode(t->ptr, (num * 8) / 5, iter, num); if (!num) { free(t->ptr); return (hpk_err); } t->huff = 1; /* XXX: do we care? */ t->ptr = realloc(t->ptr, num + 1L); AN(t->ptr); } else { /* literal string */ t->huff = 0; t->ptr = malloc(num + 1L); AN(t->ptr); memcpy(t->ptr, iter->buf, num); iter->buf += num; } t->ptr[num] = '\0'; t->len = num; return (ITER_DONE(iter)); } static inline void txtcpy(struct hpk_txt *to, const struct hpk_txt *from) { //AZ(to->ptr); to->ptr = malloc(from->len + 1L); AN(to->ptr); memcpy(to->ptr, from->ptr, from->len + 1L); to->len = from->len; } int gethpk_iterLen(const struct hpk_iter *iter) { return (iter->buf - iter->orig); } enum hpk_result HPK_DecHdr(struct hpk_iter *iter, struct hpk_hdr *header) { int pref = 0; const struct hpk_txt *t; uint32_t num; int must_index = 0; assert(iter); assert(iter->buf < iter->end); /* Indexed Header Field */ if (*iter->buf & 128) { header->t = hpk_idx; if (hpk_err == num_decode(&num, iter, 7)) return (hpk_err); if (num) { /* indexed key and value*/ t = tbl_get_key(iter->ctx, num); if (!t) return (hpk_err); txtcpy(&header->key, t); t = tbl_get_value(iter->ctx, num); if (!t) { free(header->key.ptr); return (hpk_err); } txtcpy(&header->value, t); if (iter->buf < iter->end) return (hpk_more); else return (hpk_done); } else return (hpk_err); } /* Literal Header Field with Incremental Indexing */ else if (*iter->buf >> 6 == 1) { header->t = hpk_inc; pref = 6; must_index = 1; } /* Literal Header Field without Indexing */ else if (*iter->buf >> 4 == 0) { header->t = hpk_not; pref = 4; } /* Literal Header Field never Indexed */ else if (*iter->buf >> 4 == 1) { header->t = hpk_never; pref = 4; } /* Dynamic Table Size Update */ /* XXX if under max allowed value */ else if (*iter->buf >> 5 == 1) { if (hpk_done != num_decode(&num, iter, 5)) return (hpk_err); return (HPK_ResizeTbl(iter->ctx, num)); } else { return (hpk_err); } assert(pref); if (hpk_more != num_decode(&num, iter, pref)) return (hpk_err); header->i = num; if (num) { /* indexed key */ t = tbl_get_key(iter->ctx, num); if (!t) return (hpk_err); txtcpy(&header->key, t); } else { if (hpk_more != str_decode(iter, &header->key)) { free(header->key.ptr); return (hpk_err); } } if (hpk_err == str_decode(iter, &header->value)) { free(header->key.ptr); return (hpk_err); } if (must_index) push_header(iter->ctx, header); return (ITER_DONE(iter)); } enum hpk_result HPK_EncHdr(struct hpk_iter *iter, const struct hpk_hdr *h) { int pref; int must_index = 0; enum hpk_result ret; switch (h->t) { case hpk_idx: *iter->buf = 0x80; assert(num_encode(iter, 7, h->i) != hpk_err); return (ITER_DONE(iter)); case hpk_inc: *iter->buf = 0x40; pref = 6; must_index = 1; break; case hpk_not: *iter->buf = 0x00; pref = 4; break; case hpk_never: *iter->buf = 0x10; pref = 4; break; default: INCOMPL(); } if 
(h->i) { if (hpk_more != num_encode(iter, pref, h->i)) return (hpk_err); } else { iter->buf++; if (hpk_more != str_encode(iter, &h->key)) return (hpk_err); } ret = str_encode(iter, &h->value); if (ret == hpk_err) return (hpk_err); if (must_index) push_header(iter->ctx, h); return (ret); } varnish-7.5.0/bin/varnishtest/vtc_h2_priv.h000066400000000000000000000036671457605730600207460ustar00rootroot00000000000000/*- * Copyright (c) 2008-2016 Varnish Software AS * All rights reserved. * * Author: Guillaume Quintard * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #define ITER_DONE(iter) (iter->buf == iter->end ? 
hpk_done : hpk_more) struct dynhdr { struct hpk_hdr header; VTAILQ_ENTRY(dynhdr) list; }; VTAILQ_HEAD(dynamic_table,dynhdr); struct hpk_iter { struct hpk_ctx *ctx; uint8_t *orig; uint8_t *buf; uint8_t *end; }; const struct hpk_txt * tbl_get_key(const struct hpk_ctx *ctx, uint32_t index); const struct hpk_txt * tbl_get_value(const struct hpk_ctx *ctx, uint32_t index); void push_header (struct hpk_ctx *ctx, const struct hpk_hdr *h); varnish-7.5.0/bin/varnishtest/vtc_h2_stattbl.h000066400000000000000000000062351457605730600214350ustar00rootroot00000000000000/*- * For Copyright information see RFC7541 [BSD3] */ STAT_HDRS(1, ":authority", "") STAT_HDRS(2, ":method", "GET") STAT_HDRS(3, ":method", "POST") STAT_HDRS(4, ":path", "/") STAT_HDRS(5, ":path", "/index.html") STAT_HDRS(6, ":scheme", "http") STAT_HDRS(7, ":scheme", "https") STAT_HDRS(8, ":status", "200") STAT_HDRS(9, ":status", "204") STAT_HDRS(10, ":status", "206") STAT_HDRS(11, ":status", "304") STAT_HDRS(12, ":status", "400") STAT_HDRS(13, ":status", "404") STAT_HDRS(14, ":status", "500") STAT_HDRS(15, "accept-charset", "") STAT_HDRS(16, "accept-encoding", "gzip,deflate") STAT_HDRS(17, "accept-language", "") STAT_HDRS(18, "accept-ranges", "") STAT_HDRS(19, "accept", "") STAT_HDRS(20, "access-control-allow-origin", "") STAT_HDRS(21, "age", "") STAT_HDRS(22, "allow", "") STAT_HDRS(23, "authorization", "") STAT_HDRS(24, "cache-control", "") STAT_HDRS(25, "content-disposition", "") STAT_HDRS(26, "content-encoding", "") STAT_HDRS(27, "content-language", "") STAT_HDRS(28, "content-length", "") STAT_HDRS(29, "content-location", "") STAT_HDRS(30, "content-range", "") STAT_HDRS(31, "content-type", "") STAT_HDRS(32, "cookie", "") STAT_HDRS(33, "date", "") STAT_HDRS(34, "etag", "") STAT_HDRS(35, "expect", "") STAT_HDRS(36, "expires", "") STAT_HDRS(37, "from", "") STAT_HDRS(38, "host", "") STAT_HDRS(39, "if-match", "") STAT_HDRS(40, "if-modified-since", "") STAT_HDRS(41, "if-none-match", "") STAT_HDRS(42, "if-range", "") STAT_HDRS(43, "if-unmodified-since", "") STAT_HDRS(44, "last-modified", "") STAT_HDRS(45, "link", "") STAT_HDRS(46, "location", "") STAT_HDRS(47, "max-forwards", "") STAT_HDRS(48, "proxy-authenticate", "") STAT_HDRS(49, "proxy-authorization", "") STAT_HDRS(50, "range", "") STAT_HDRS(51, "referer", "") STAT_HDRS(52, "refresh", "") STAT_HDRS(53, "retry-after", "") STAT_HDRS(54, "server", "") STAT_HDRS(55, "set-cookie", "") STAT_HDRS(56, "strict-transport-security", "") STAT_HDRS(57, "transfer-encoding", "") STAT_HDRS(58, "user-agent", "") STAT_HDRS(59, "vary", "") STAT_HDRS(60, "via", "") STAT_HDRS(61, "www-authenticate", "") varnish-7.5.0/bin/varnishtest/vtc_h2_tbl.c000066400000000000000000000145401457605730600205320ustar00rootroot00000000000000/*- * Copyright (c) 2008-2016 Varnish Software AS * All rights reserved. * * Author: Guillaume Quintard * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include #include #include #include #include "vdef.h" #include "vas.h" #include "vqueue.h" #include "hpack.h" #include "vtc_h2_priv.h" /* TODO: fix that crazy workaround */ #define STAT_HDRS(i, k, v) \ static char key_ ## i[] = k; \ static char value_ ## i[] = v; #include "vtc_h2_stattbl.h" #undef STAT_HDRS /*lint -save -e778 */ static const struct hpk_hdr sttbl[] = { {{NULL, 0, 0}, {NULL, 0, 0}, hpk_idx, 0}, #define STAT_HDRS(j, k, v) \ { \ .key = { \ .ptr = key_ ## j, \ .len = sizeof(k) - 1, \ .huff = 0 \ }, \ .value = { \ .ptr = value_ ## j, \ .len = sizeof(v) - 1, \ .huff = 0 \ }, \ .t = hpk_idx, \ .i = j, \ }, #include "vtc_h2_stattbl.h" #undef STAT_HDRS }; /*lint -restore */ struct hpk_ctx { const struct hpk_hdr *sttbl; struct dynamic_table dyntbl; uint32_t maxsize; uint32_t size; }; struct hpk_iter * HPK_NewIter(struct hpk_ctx *ctx, void *buf, int size) { struct hpk_iter *iter = malloc(sizeof(*iter)); assert(iter); assert(ctx); assert(buf); assert(size); iter->ctx = ctx; iter->orig = buf; iter->buf = buf; iter->end = iter->buf + size; return (iter); } void HPK_FreeIter(struct hpk_iter *iter) { free(iter); } static void pop_header(struct hpk_ctx *ctx) { assert(!VTAILQ_EMPTY(&ctx->dyntbl)); struct dynhdr *h = VTAILQ_LAST(&ctx->dyntbl, dynamic_table); VTAILQ_REMOVE(&ctx->dyntbl, h, list); ctx->size -= h->header.key.len + h->header.value.len + 32; free(h->header.key.ptr); free(h->header.value.ptr); free(h); } void push_header (struct hpk_ctx *ctx, const struct hpk_hdr *oh) { const struct hpk_hdr *ih; struct dynhdr *h; uint32_t len; assert(ctx->size <= ctx->maxsize); AN(oh); if (!ctx->maxsize) return; len = oh->value.len + 32; if (oh->key.ptr) len += oh->key.len; else { AN(oh->i); ih = HPK_GetHdr(ctx, oh->i); AN(ih); len += ih->key.len; } while (!VTAILQ_EMPTY(&ctx->dyntbl) && ctx->maxsize - ctx->size < len) pop_header(ctx); if (ctx->maxsize - ctx->size >= len) { h = malloc(sizeof(*h)); AN(h); h->header.t = hpk_idx; if (oh->key.ptr) { h->header.key.len = oh->key.len; h->header.key.ptr = malloc(oh->key.len + 1L); AN(h->header.key.ptr); memcpy(h->header.key.ptr, oh->key.ptr, oh->key.len + 1L); } else { AN(oh->i); ih = HPK_GetHdr(ctx, oh->i); AN(ih); h->header.key.len = ih->key.len; h->header.key.ptr = malloc(ih->key.len + 1L); AN(h->header.key.ptr); memcpy(h->header.key.ptr, ih->key.ptr, ih->key.len + 1L); } h->header.value.len = oh->value.len; h->header.value.ptr = malloc(oh->value.len + 1L); AN(h->header.value.ptr); memcpy(h->header.value.ptr, oh->value.ptr, oh->value.len + 1L); VTAILQ_INSERT_HEAD(&ctx->dyntbl, h, list); ctx->size += len; } } enum hpk_result HPK_ResizeTbl(struct hpk_ctx *ctx, uint32_t num) { ctx->maxsize = num; while (!VTAILQ_EMPTY(&ctx->dyntbl) && ctx->maxsize < ctx->size) pop_header(ctx); return (hpk_done); } static const 
struct hpk_txt * tbl_get_field(const struct hpk_ctx *ctx, uint32_t idx, int key) { struct dynhdr *dh; assert(ctx); if (idx > 61 + ctx->size) return (NULL); else if (idx <= 61) { if (key) return (&ctx->sttbl[idx].key); else return (&ctx->sttbl[idx].value); } idx -= 62; VTAILQ_FOREACH(dh, &ctx->dyntbl, list) if (!idx--) break; if (idx && dh) { if (key) return (&dh->header.key); else return (&dh->header.value); } else return (NULL); } const struct hpk_txt * tbl_get_key(const struct hpk_ctx *ctx, uint32_t idx) { return (tbl_get_field(ctx, idx, 1)); } const struct hpk_txt * tbl_get_value(const struct hpk_ctx *ctx, uint32_t idx) { return (tbl_get_field(ctx, idx, 0)); } const struct hpk_hdr * HPK_GetHdr(const struct hpk_ctx *ctx, uint32_t idx) { uint32_t oi = idx; struct dynhdr *dh; assert(ctx); if (idx > 61 + ctx->size) return (NULL); else if (idx <= 61) return (&ctx->sttbl[idx]); idx -= 62; VTAILQ_FOREACH(dh, &ctx->dyntbl, list) if (!idx--) break; if (idx && dh) { dh->header.i = oi; return (&dh->header); } else return (NULL); } uint32_t HPK_GetTblSize(const struct hpk_ctx *ctx) { return (ctx->size); } uint32_t HPK_GetTblMaxSize(const struct hpk_ctx *ctx) { return (ctx->maxsize); } uint32_t HPK_GetTblLength(const struct hpk_ctx *ctx) { struct dynhdr *dh; uint32_t l = 0; VTAILQ_FOREACH(dh, &ctx->dyntbl, list) l++; return (l); } #if 0 void dump_dyn_tbl(const struct hpk_ctx *ctx) { int i = 0; struct dynhdr *dh; printf("DUMPING %u/%u\n", ctx->size, ctx->maxsize); VTAILQ_FOREACH(dh, &ctx->dyntbl, list) { printf(" (%d) %s: %s\n", i++, dh->header.key.ptr, dh->header.value.ptr); } printf("DONE\n"); } #endif struct hpk_ctx * HPK_NewCtx(uint32_t maxsize) { struct hpk_ctx *ctx = calloc(1, sizeof(*ctx)); assert(ctx); ctx->sttbl = sttbl; ctx->maxsize = maxsize; ctx->size = 0; return (ctx); } void HPK_FreeCtx(struct hpk_ctx *ctx) { while (!VTAILQ_EMPTY(&ctx->dyntbl)) pop_header(ctx); free(ctx); } varnish-7.5.0/bin/varnishtest/vtc_haproxy.c000066400000000000000000000630761457605730600210620ustar00rootroot00000000000000/*- * Copyright (c) 2008-2018 Varnish Software AS * All rights reserved. * * Author: Frédéric Lécaille * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include "config.h" #include #include #include #include #include /* for MUSL (mode_t) */ #include #include #include #include "vtc.h" #include "vfil.h" #include "vpf.h" #include "vre.h" #include "vtcp.h" #include "vsa.h" #include "vtim.h" #define HAPROXY_PROGRAM_ENV_VAR "HAPROXY_PROGRAM" #define HAPROXY_ARGS_ENV_VAR "HAPROXY_ARGS" #define HAPROXY_OPT_WORKER "-W" #define HAPROXY_OPT_MCLI "-S" #define HAPROXY_OPT_DAEMON "-D" #define HAPROXY_SIGNAL SIGINT #define HAPROXY_EXPECT_EXIT (128 + HAPROXY_SIGNAL) struct envar { VTAILQ_ENTRY(envar) list; char *name; char *value; }; struct haproxy { unsigned magic; #define HAPROXY_MAGIC 0x8a45cf75 char *name; struct vtclog *vl; VTAILQ_ENTRY(haproxy) list; const char *filename; struct vsb *args; int opt_worker; int opt_mcli; int opt_daemon; int opt_check_mode; char *pid_fn; pid_t pid; pid_t ppid; int fds[4]; char *cfg_fn; struct vsb *cfg_vsb; pthread_t tp; int expect_exit; int expect_signal; int its_dead_jim; /* UNIX socket CLI. */ char *cli_fn; /* TCP socket CLI. */ struct haproxy_cli *cli; /* master CLI */ struct haproxy_cli *mcli; char *workdir; struct vsb *msgs; char closed_sock[256]; /* Closed TCP socket */ VTAILQ_HEAD(,envar) envars; }; static VTAILQ_HEAD(, haproxy) haproxies = VTAILQ_HEAD_INITIALIZER(haproxies); struct haproxy_cli { unsigned magic; #define HAPROXY_CLI_MAGIC 0xb09a4ed8 struct vtclog *vl; char running; char *spec; int sock; char connect[256]; pthread_t tp; size_t txbuf_sz; char *txbuf; size_t rxbuf_sz; char *rxbuf; double timeout; }; static void haproxy_write_conf(struct haproxy *h); static void haproxy_add_envar(struct haproxy *h, const char *name, const char *value) { struct envar *e; e = malloc(sizeof *e); AN(e); e->name = strdup(name); e->value = strdup(value); AN(e->name); AN(e->value); VTAILQ_INSERT_TAIL(&h->envars, e, list); } static void haproxy_delete_envars(struct haproxy *h) { struct envar *e, *e2; VTAILQ_FOREACH_SAFE(e, &h->envars, list, e2) { VTAILQ_REMOVE(&h->envars, e, list); free(e->name); free(e->value); free(e); } } static void haproxy_build_env(const struct haproxy *h) { struct envar *e; VTAILQ_FOREACH(e, &h->envars, list) { if (setenv(e->name, e->value, 0) == -1) vtc_fatal(h->vl, "setenv() failed: %s (%d)", strerror(errno), errno); } } /********************************************************************** * Socket connect (same as client_tcp_connect()). */ static int haproxy_cli_tcp_connect(struct vtclog *vl, const char *addr, double tmo, const char **errp) { int fd; char mabuf[VTCP_ADDRBUFSIZE], mpbuf[VTCP_PORTBUFSIZE]; AN(addr); AN(errp); fd = VTCP_open(addr, NULL, tmo, errp); if (fd < 0) return (fd); VTCP_myname(fd, mabuf, sizeof mabuf, mpbuf, sizeof mpbuf); vtc_log(vl, 3, "CLI connected fd %d from %s %s to %s", fd, mabuf, mpbuf, addr); return (fd); } /* * SECTION: haproxy.cli haproxy CLI Specification * SECTION: haproxy.cli.send * send STRING * Push STRING on the CLI connection. STRING will be terminated by an * end of line character (\n). 
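 *
 * A minimal illustration (a sketch only, assuming a haproxy instance h1
 * whose stats socket is set up by the test)::
 *
 *     haproxy h1 -cli {
 *         send "show info"
 *     }
 *
 * A send is normally followed by an expect on the response (see below).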
*/ static void v_matchproto_(cmd_f) cmd_haproxy_cli_send(CMD_ARGS) { struct vsb *vsb; struct haproxy_cli *hc; int j; (void)vl; CAST_OBJ_NOTNULL(hc, priv, HAPROXY_CLI_MAGIC); AZ(strcmp(av[0], "send")); AN(av[1]); AZ(av[2]); vsb = VSB_new_auto(); AN(vsb); AZ(VSB_cat(vsb, av[1])); AZ(VSB_cat(vsb, "\n")); AZ(VSB_finish(vsb)); if (hc->sock == -1) { int fd; const char *err; struct vsb *vsb_connect; vsb_connect = macro_expand(hc->vl, hc->connect); AN(vsb_connect); fd = haproxy_cli_tcp_connect(hc->vl, VSB_data(vsb_connect), 10., &err); if (fd < 0) vtc_fatal(hc->vl, "CLI failed to open %s: %s", VSB_data(vsb), err); VSB_destroy(&vsb_connect); hc->sock = fd; } vtc_dump(hc->vl, 4, "CLI send", VSB_data(vsb), -1); if (VSB_tofile(vsb, hc->sock)) vtc_fatal(hc->vl, "CLI fd %d send error %s", hc->sock, strerror(errno)); /* a CLI command must be followed by a SHUT_WR if we want HAProxy to * close after the response */ j = shutdown(hc->sock, SHUT_WR); vtc_log(hc->vl, 3, "CLI shutting fd %d", hc->sock); if (!VTCP_Check(j)) vtc_fatal(hc->vl, "Shutdown failed: %s", strerror(errno)); VSB_destroy(&vsb); } #define HAPROXY_CLI_RECV_LEN (1 << 14) static void haproxy_cli_recv(struct haproxy_cli *hc) { ssize_t ret; size_t rdz, left, off; rdz = ret = off = 0; /* We want to null terminate this buffer. */ left = hc->rxbuf_sz - 1; while (!vtc_error && left > 0) { VTCP_set_read_timeout(hc->sock, hc->timeout); ret = recv(hc->sock, hc->rxbuf + off, HAPROXY_CLI_RECV_LEN, 0); if (ret < 0) { if (errno == EINTR || errno == EAGAIN) continue; vtc_fatal(hc->vl, "CLI fd %d recv() failed (%s)", hc->sock, strerror(errno)); } /* Connection closed. */ if (ret == 0) { if (hc->rxbuf[rdz - 1] != '\n') vtc_fatal(hc->vl, "CLI rx timeout (fd: %d %.3fs ret: %zd)", hc->sock, hc->timeout, ret); vtc_log(hc->vl, 4, "CLI connection normally closed"); vtc_log(hc->vl, 3, "CLI closing fd %d", hc->sock); VTCP_close(&hc->sock); break; } rdz += ret; left -= ret; off += ret; } hc->rxbuf[rdz] = '\0'; vtc_dump(hc->vl, 4, "CLI recv", hc->rxbuf, rdz); } /* * SECTION: haproxy.cli.expect * expect OP STRING * Regex match the CLI reception buffer with STRING * if OP is ~ or, on the contraty, if OP is !~ check that there is * no regex match. 
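 *
 * The regex is applied to the whole reception buffer, which is read from
 * the CLI socket until HAProxy closes the connection. An illustrative
 * pairing with send (a sketch only, assuming a haproxy instance h1)::
 *
 *     haproxy h1 -cli {
 *         send "show info"
 *         expect ~ "Name"
 *     }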
*/ static void v_matchproto_(cmd_f) cmd_haproxy_cli_expect(CMD_ARGS) { struct haproxy_cli *hc; struct vsb vsb[1]; vre_t *vre; int error, erroroffset, i, ret; char *cmp, *spec, errbuf[VRE_ERROR_LEN]; (void)vl; CAST_OBJ_NOTNULL(hc, priv, HAPROXY_CLI_MAGIC); AZ(strcmp(av[0], "expect")); av++; cmp = av[0]; spec = av[1]; AN(cmp); AN(spec); AZ(av[2]); assert(!strcmp(cmp, "~") || !strcmp(cmp, "!~")); haproxy_cli_recv(hc); vre = VRE_compile(spec, 0, &error, &erroroffset, 1); if (vre == NULL) { AN(VSB_init(vsb, errbuf, sizeof errbuf)); AZ(VRE_error(vsb, error)); AZ(VSB_finish(vsb)); VSB_fini(vsb); vtc_fatal(hc->vl, "CLI regexp error: '%s' (@%d) (%s)", errbuf, erroroffset, spec); } i = VRE_match(vre, hc->rxbuf, 0, 0, NULL); VRE_free(&vre); ret = (i >= 0 && *cmp == '~') || (i < 0 && *cmp == '!'); if (!ret) vtc_fatal(hc->vl, "CLI expect failed %s \"%s\"", cmp, spec); else vtc_log(hc->vl, 4, "CLI expect match %s \"%s\"", cmp, spec); } static const struct cmds haproxy_cli_cmds[] = { #define CMD_HAPROXY_CLI(n) { #n, cmd_haproxy_cli_##n }, CMD_HAPROXY_CLI(send) CMD_HAPROXY_CLI(expect) #undef CMD_HAPROXY_CLI { NULL, NULL } }; /********************************************************************** * HAProxy CLI client thread */ static void * haproxy_cli_thread(void *priv) { struct haproxy_cli *hc; struct vsb *vsb; int fd; const char *err; CAST_OBJ_NOTNULL(hc, priv, HAPROXY_CLI_MAGIC); AN(*hc->connect); vsb = macro_expand(hc->vl, hc->connect); AN(vsb); fd = haproxy_cli_tcp_connect(hc->vl, VSB_data(vsb), 10., &err); if (fd < 0) vtc_fatal(hc->vl, "CLI failed to open %s: %s", VSB_data(vsb), err); VTCP_blocking(fd); hc->sock = fd; parse_string(hc->vl, hc, hc->spec); vtc_log(hc->vl, 2, "CLI ending"); VSB_destroy(&vsb); return (NULL); } /********************************************************************** * Wait for the CLI client thread to stop */ static void haproxy_cli_wait(struct haproxy_cli *hc) { void *res; CHECK_OBJ_NOTNULL(hc, HAPROXY_CLI_MAGIC); vtc_log(hc->vl, 2, "CLI waiting"); PTOK(pthread_join(hc->tp, &res)); if (res != NULL) vtc_fatal(hc->vl, "CLI returned \"%s\"", (char *)res); REPLACE(hc->spec, NULL); hc->tp = 0; hc->running = 0; } /********************************************************************** * Start the CLI client thread */ static void haproxy_cli_start(struct haproxy_cli *hc) { CHECK_OBJ_NOTNULL(hc, HAPROXY_CLI_MAGIC); vtc_log(hc->vl, 2, "CLI starting"); PTOK(pthread_create(&hc->tp, NULL, haproxy_cli_thread, hc)); hc->running = 1; } /********************************************************************** * Run the CLI client thread */ static void haproxy_cli_run(struct haproxy_cli *hc) { haproxy_cli_start(hc); haproxy_cli_wait(hc); } /********************************************************************** * */ static void haproxy_wait_pidfile(struct haproxy *h) { char buf_err[1024] = {0}; int usleep_time = 1000; double t0; pid_t pid; vtc_log(h->vl, 3, "wait-pid-file"); for (t0 = VTIM_mono(); VTIM_mono() - t0 < 3;) { if (vtc_error) return; if (VPF_Read(h->pid_fn, &pid) != 0) { bprintf(buf_err, "Could not read PID file '%s'", h->pid_fn); usleep(usleep_time); continue; } if (!h->opt_daemon && pid != h->pid) { bprintf(buf_err, "PID file has different PID (%ld != %lld)", (long)pid, (long long)h->pid); usleep(usleep_time); continue; } if (kill(pid, 0) < 0) { bprintf(buf_err, "Could not find PID %ld process", (long)pid); usleep(usleep_time); continue; } h->pid = pid; vtc_log(h->vl, 2, "haproxy PID %ld successfully started", (long)pid); return; } vtc_fatal(h->vl, "haproxy %s PID file check 
failed:\n\t%s\n", h->name, buf_err); } /********************************************************************** * Allocate and initialize a CLI client */ static struct haproxy_cli * haproxy_cli_new(struct haproxy *h) { struct haproxy_cli *hc; ALLOC_OBJ(hc, HAPROXY_CLI_MAGIC); AN(hc); hc->vl = h->vl; vtc_log_set_cmd(hc->vl, haproxy_cli_cmds); hc->sock = -1; bprintf(hc->connect, "${%s_cli_sock}", h->name); hc->txbuf_sz = hc->rxbuf_sz = 2048 * 1024; hc->txbuf = malloc(hc->txbuf_sz); AN(hc->txbuf); hc->rxbuf = malloc(hc->rxbuf_sz); AN(hc->rxbuf); return (hc); } /* creates a master CLI client (-mcli) */ static struct haproxy_cli * haproxy_mcli_new(struct haproxy *h) { struct haproxy_cli *hc; ALLOC_OBJ(hc, HAPROXY_CLI_MAGIC); AN(hc); hc->vl = h->vl; vtc_log_set_cmd(hc->vl, haproxy_cli_cmds); hc->sock = -1; bprintf(hc->connect, "${%s_mcli_sock}", h->name); hc->txbuf_sz = hc->rxbuf_sz = 2048 * 1024; hc->txbuf = malloc(hc->txbuf_sz); AN(hc->txbuf); hc->rxbuf = malloc(hc->rxbuf_sz); AN(hc->rxbuf); return (hc); } /* Bind an address/port for the master CLI (-mcli) */ static int haproxy_create_mcli(struct haproxy *h) { int sock; const char *err; char buf[128], addr[128], port[128]; char vsabuf[vsa_suckaddr_len]; const struct suckaddr *sua; sock = VTCP_listen_on(default_listen_addr, NULL, 100, &err); if (err != NULL) vtc_fatal(h->vl, "Create listen socket failed: %s", err); assert(sock > 0); sua = VSA_getsockname(sock, vsabuf, sizeof vsabuf); AN(sua); VTCP_name(sua, addr, sizeof addr, port, sizeof port); bprintf(buf, "%s_mcli", h->name); if (VSA_Get_Proto(sua) == AF_INET) macro_def(h->vl, buf, "sock", "%s:%s", addr, port); else macro_def(h->vl, buf, "sock", "[%s]:%s", addr, port); macro_def(h->vl, buf, "addr", "%s", addr); macro_def(h->vl, buf, "port", "%s", port); return (sock); } static void haproxy_cli_delete(struct haproxy_cli *hc) { CHECK_OBJ_NOTNULL(hc, HAPROXY_CLI_MAGIC); REPLACE(hc->spec, NULL); REPLACE(hc->txbuf, NULL); REPLACE(hc->rxbuf, NULL); FREE_OBJ(hc); } /********************************************************************** * Allocate and initialize a haproxy */ static struct haproxy * haproxy_new(const char *name) { struct haproxy *h; struct vsb *vsb; char buf[PATH_MAX]; int closed_sock; char addr[128], port[128]; const char *err; const char *env_args; char vsabuf[vsa_suckaddr_len]; const struct suckaddr *sua; ALLOC_OBJ(h, HAPROXY_MAGIC); AN(h); REPLACE(h->name, name); h->args = VSB_new_auto(); env_args = getenv(HAPROXY_ARGS_ENV_VAR); if (env_args) { VSB_cat(h->args, env_args); VSB_cat(h->args, " "); } h->vl = vtc_logopen("%s", name); vtc_log_set_cmd(h->vl, haproxy_cli_cmds); AN(h->vl); h->filename = getenv(HAPROXY_PROGRAM_ENV_VAR); if (h->filename == NULL) h->filename = "haproxy"; bprintf(buf, "${tmpdir}/%s", name); vsb = macro_expand(h->vl, buf); AN(vsb); h->workdir = strdup(VSB_data(vsb)); AN(h->workdir); VSB_destroy(&vsb); bprintf(buf, "%s/stats.sock", h->workdir); h->cli_fn = strdup(buf); AN(h->cli_fn); bprintf(buf, "%s/cfg", h->workdir); h->cfg_fn = strdup(buf); AN(h->cfg_fn); /* Create a new TCP socket to reserve an IP:port and close it asap. * May be useful to simulate an unreachable server. 
*/ bprintf(h->closed_sock, "%s_closed", h->name); closed_sock = VTCP_listen_on("127.0.0.1:0", NULL, 100, &err); if (err != NULL) vtc_fatal(h->vl, "Create listen socket failed: %s", err); assert(closed_sock > 0); sua = VSA_getsockname(closed_sock, vsabuf, sizeof vsabuf); AN(sua); VTCP_name(sua, addr, sizeof addr, port, sizeof port); if (VSA_Get_Proto(sua) == AF_INET) macro_def(h->vl, h->closed_sock, "sock", "%s:%s", addr, port); else macro_def(h->vl, h->closed_sock, "sock", "[%s]:%s", addr, port); macro_def(h->vl, h->closed_sock, "addr", "%s", addr); macro_def(h->vl, h->closed_sock, "port", "%s", port); VTCP_close(&closed_sock); h->cli = haproxy_cli_new(h); AN(h->cli); h->mcli = haproxy_mcli_new(h); AN(h->mcli); bprintf(buf, "rm -rf \"%s\" ; mkdir -p \"%s\"", h->workdir, h->workdir); AZ(system(buf)); VTAILQ_INIT(&h->envars); VTAILQ_INSERT_TAIL(&haproxies, h, list); return (h); } /********************************************************************** * Delete a haproxy instance */ static void haproxy_delete(struct haproxy *h) { char buf[PATH_MAX]; CHECK_OBJ_NOTNULL(h, HAPROXY_MAGIC); vtc_logclose(h->vl); if (!leave_temp) { bprintf(buf, "rm -rf \"%s\"", h->workdir); AZ(system(buf)); } free(h->name); free(h->workdir); free(h->cli_fn); free(h->cfg_fn); free(h->pid_fn); VSB_destroy(&h->args); haproxy_cli_delete(h->cli); haproxy_cli_delete(h->mcli); /* XXX: MEMLEAK (?) */ FREE_OBJ(h); } /********************************************************************** * HAProxy listener */ static void * haproxy_thread(void *priv) { struct haproxy *h; CAST_OBJ_NOTNULL(h, priv, HAPROXY_MAGIC); (void)vtc_record(h->vl, h->fds[0], h->msgs); h->its_dead_jim = 1; return (NULL); } /********************************************************************** * Start a HAProxy instance. */ static void haproxy_start(struct haproxy *h) { char buf[PATH_MAX]; struct vsb *vsb; vtc_log(h->vl, 2, "%s", __func__); AZ(VSB_finish(h->args)); vtc_log(h->vl, 4, "opt_worker %d opt_daemon %d opt_check_mode %d opt_mcli %d", h->opt_worker, h->opt_daemon, h->opt_check_mode, h->opt_mcli); vsb = VSB_new_auto(); AN(vsb); VSB_printf(vsb, "exec \"%s\"", h->filename); if (h->opt_check_mode) VSB_cat(vsb, " -c"); else if (h->opt_daemon) VSB_cat(vsb, " -D"); else VSB_cat(vsb, " -d"); if (h->opt_worker) { VSB_cat(vsb, " -W"); if (h->opt_mcli) { int sock; sock = haproxy_create_mcli(h); VSB_printf(vsb, " -S \"fd@%d\"", sock); } } VSB_printf(vsb, " %s", VSB_data(h->args)); VSB_printf(vsb, " -f \"%s\" ", h->cfg_fn); if (h->opt_worker || h->opt_daemon) { bprintf(buf, "%s/pid", h->workdir); h->pid_fn = strdup(buf); AN(h->pid_fn); VSB_printf(vsb, " -p \"%s\"", h->pid_fn); } AZ(VSB_finish(vsb)); vtc_dump(h->vl, 4, "argv", VSB_data(vsb), -1); if (h->opt_worker && !h->opt_daemon) { /* * HAProxy master process must exit with status 128 + * if signaled by signal. 
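		 *
		 * With HAPROXY_SIGNAL defined as SIGINT above, and SIGINT
		 * being 2 on typical systems, HAPROXY_EXPECT_EXIT works out
		 * to 128 + 2 = 130.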
*/ h->expect_exit = HAPROXY_EXPECT_EXIT; } haproxy_write_conf(h); AZ(pipe(&h->fds[0])); vtc_log(h->vl, 4, "XXX %d @%d", h->fds[1], __LINE__); AZ(pipe(&h->fds[2])); h->pid = h->ppid = fork(); assert(h->pid >= 0); if (h->pid == 0) { haproxy_build_env(h); haproxy_delete_envars(h); AZ(chdir(h->name)); AZ(dup2(h->fds[0], 0)); assert(dup2(h->fds[3], 1) == 1); assert(dup2(1, 2) == 2); closefd(&h->fds[0]); closefd(&h->fds[1]); closefd(&h->fds[2]); closefd(&h->fds[3]); AZ(execl("/bin/sh", "/bin/sh", "-c", VSB_data(vsb), (char*)0)); exit(1); } VSB_destroy(&vsb); vtc_log(h->vl, 3, "PID: %ld", (long)h->pid); macro_def(h->vl, h->name, "pid", "%ld", (long)h->pid); macro_def(h->vl, h->name, "name", "%s", h->workdir); closefd(&h->fds[0]); closefd(&h->fds[3]); h->fds[0] = h->fds[2]; h->fds[2] = h->fds[3] = -1; PTOK(pthread_create(&h->tp, NULL, haproxy_thread, h)); if (h->pid_fn != NULL) haproxy_wait_pidfile(h); } /********************************************************************** * Wait for a HAProxy instance. */ static void haproxy_wait(struct haproxy *h) { void *p; int i, n, sig; vtc_log(h->vl, 2, "Wait"); if (h->pid < 0) haproxy_start(h); if (h->cli->spec) haproxy_cli_run(h->cli); if (h->mcli->spec) haproxy_cli_run(h->mcli); closefd(&h->fds[1]); sig = SIGINT; n = 0; vtc_log(h->vl, 2, "Stop HAproxy pid=%ld", (long)h->pid); while (h->opt_daemon || (!h->opt_check_mode && !h->its_dead_jim)) { assert(h->pid > 0); if (n == 0) { i = kill(h->pid, sig); if (i == 0) h->expect_signal = -sig; if (i && errno == ESRCH) break; vtc_log(h->vl, 4, "Kill(%d)=%d: %s", sig, i, strerror(errno)); } usleep(100000); if (++n == 20) { switch (sig) { case SIGINT: sig = SIGTERM ; break; case SIGTERM: sig = SIGKILL ; break; default: break; } n = 0; } } PTOK(pthread_join(h->tp, &p)); AZ(p); closefd(&h->fds[0]); if (!h->opt_daemon) { vtc_wait4(h->vl, h->ppid, h->expect_exit, h->expect_signal, 0); h->ppid = -1; } h->pid = -1; } #define HAPROXY_BE_FD_STR "fd@${" #define HAPROXY_BE_FD_STRLEN strlen(HAPROXY_BE_FD_STR) static int haproxy_build_backends(struct haproxy *h, const char *vsb_data) { char *s, *p, *q; s = strdup(vsb_data); if (!s) return (-1); p = s; while (1) { int sock; char buf[128], addr[128], port[128]; const char *err; char vsabuf[vsa_suckaddr_len]; const struct suckaddr *sua; p = strstr(p, HAPROXY_BE_FD_STR); if (!p) break; q = p += HAPROXY_BE_FD_STRLEN; while (*q && *q != '}') q++; if (*q != '}') break; *q++ = '\0'; sock = VTCP_listen_on("127.0.0.1:0", NULL, 100, &err); if (err != NULL) vtc_fatal(h->vl, "Create listen socket failed: %s", err); assert(sock > 0); sua = VSA_getsockname(sock, vsabuf, sizeof vsabuf); AN(sua); VTCP_name(sua, addr, sizeof addr, port, sizeof port); bprintf(buf, "%s_%s", h->name, p); if (VSA_Get_Proto(sua) == AF_INET) macro_def(h->vl, buf, "sock", "%s:%s", addr, port); else macro_def(h->vl, buf, "sock", "[%s]:%s", addr, port); macro_def(h->vl, buf, "addr", "%s", addr); macro_def(h->vl, buf, "port", "%s", port); bprintf(buf, "%d", sock); vtc_log(h->vl, 4, "setenv(%s, %s)", p, buf); haproxy_add_envar(h, p, buf); p = q; } free(s); return (0); } static void haproxy_check_conf(struct haproxy *h, const char *expect) { h->msgs = VSB_new_auto(); AN(h->msgs); h->opt_check_mode = 1; haproxy_start(h); haproxy_wait(h); AZ(VSB_finish(h->msgs)); if (strstr(VSB_data(h->msgs), expect) == NULL) vtc_fatal(h->vl, "Did not find expected string '%s'", expect); vtc_log(h->vl, 2, "Found expected '%s'", expect); VSB_destroy(&h->msgs); } /********************************************************************** * Write a 
configuration for HAProxy instance. */ static void haproxy_store_conf(struct haproxy *h, const char *cfg, int auto_be) { struct vsb *vsb, *vsb2; vsb = VSB_new_auto(); AN(vsb); vsb2 = VSB_new_auto(); AN(vsb2); VSB_printf(vsb, " global\n\tstats socket \"%s\" " "level admin mode 600\n", h->cli_fn); VSB_cat(vsb, " stats socket \"fd@${cli}\" level admin\n"); AZ(VSB_cat(vsb, cfg)); if (auto_be) cmd_server_gen_haproxy_conf(vsb); AZ(VSB_finish(vsb)); AZ(haproxy_build_backends(h, VSB_data(vsb))); h->cfg_vsb = macro_expand(h->vl, VSB_data(vsb)); AN(h->cfg_vsb); VSB_destroy(&vsb2); VSB_destroy(&vsb); } static void haproxy_write_conf(struct haproxy *h) { struct vsb *vsb; vsb = macro_expand(h->vl, VSB_data(h->cfg_vsb)); AN(vsb); assert(VSB_len(vsb) >= 0); vtc_dump(h->vl, 4, "conf", VSB_data(vsb), VSB_len(vsb)); if (VFIL_writefile(h->workdir, h->cfg_fn, VSB_data(vsb), VSB_len(vsb)) != 0) vtc_fatal(h->vl, "failed to write haproxy configuration file: %s (%d)", strerror(errno), errno); VSB_destroy(&vsb); } /* SECTION: haproxy haproxy * * Define and interact with haproxy instances. * * To define a haproxy server, you'll use this syntax:: * * haproxy hNAME -conf-OK CONFIG * haproxy hNAME -conf-BAD ERROR CONFIG * haproxy hNAME [-D] [-W] [-arg STRING] [-conf[+vcl] STRING] * * The first ``haproxy hNAME`` invocation will start the haproxy master * process in the background, waiting for the ``-start`` switch to actually * start the child. * * Arguments: * * hNAME * Identify the HAProxy server with a string, it must starts with 'h'. * * \-conf-OK CONFIG * Run haproxy in '-c' mode to check config is OK * stdout/stderr should contain 'Configuration file is valid' * The exit code should be 0. * * \-conf-BAD ERROR CONFIG * Run haproxy in '-c' mode to check config is BAD. * "ERROR" should be part of the diagnostics on stdout/stderr. * The exit code should be 1. * * \-D * Run HAproxy in daemon mode. If not given '-d' mode used. * * \-W * Enable HAproxy in Worker mode. * * \-S * Enable HAproxy Master CLI in Worker mode * * \-arg STRING * Pass an argument to haproxy, for example "-h simple_list". * * \-cli STRING * Specify the spec to be run by the command line interface (CLI). * * \-mcli STRING * Specify the spec to be run by the command line interface (CLI) * of the Master process. * * \-conf STRING * Specify the configuration to be loaded by this HAProxy instance. * * \-conf+backend STRING * Specify the configuration to be loaded by this HAProxy instance, * all server instances will be automatically appended * * \-start * Start this HAProxy instance. * * \-wait * Stop this HAProxy instance. 
* * \-expectexit NUMBER * Expect haproxy to exit(3) with this value * */ void cmd_haproxy(CMD_ARGS) { struct haproxy *h, *h2; (void)priv; if (av == NULL) { /* Reset and free */ VTAILQ_FOREACH_SAFE(h, &haproxies, list, h2) { vtc_log(h->vl, 2, "Reset and free %s haproxy %ld", h->name, (long)h->pid); if (h->pid >= 0) haproxy_wait(h); VTAILQ_REMOVE(&haproxies, h, list); haproxy_delete(h); } return; } AZ(strcmp(av[0], "haproxy")); av++; VTC_CHECK_NAME(vl, av[0], "haproxy", 'h'); VTAILQ_FOREACH(h, &haproxies, list) if (!strcmp(h->name, av[0])) break; if (h == NULL) h = haproxy_new(av[0]); av++; for (; *av != NULL; av++) { if (vtc_error) break; if (!strcmp(*av, "-conf-OK")) { AN(av[1]); haproxy_store_conf(h, av[1], 0); h->expect_exit = 0; haproxy_check_conf(h, ""); av++; continue; } if (!strcmp(*av, "-conf-BAD")) { AN(av[1]); AN(av[2]); haproxy_store_conf(h, av[2], 0); h->expect_exit = 1; haproxy_check_conf(h, av[1]); av += 2; continue; } if (!strcmp(*av, HAPROXY_OPT_DAEMON)) { h->opt_daemon = 1; continue; } if (!strcmp(*av, HAPROXY_OPT_WORKER)) { h->opt_worker = 1; continue; } if (!strcmp(*av, HAPROXY_OPT_MCLI)) { h->opt_mcli = 1; continue; } if (!strcmp(*av, "-arg")) { AN(av[1]); AZ(h->pid); VSB_cat(h->args, " "); VSB_cat(h->args, av[1]); av++; continue; } if (!strcmp(*av, "-cli")) { REPLACE(h->cli->spec, av[1]); if (h->tp) haproxy_cli_run(h->cli); av++; continue; } if (!strcmp(*av, "-mcli")) { REPLACE(h->mcli->spec, av[1]); if (h->tp) haproxy_cli_run(h->mcli); av++; continue; } if (!strcmp(*av, "-conf")) { AN(av[1]); haproxy_store_conf(h, av[1], 0); av++; continue; } if (!strcmp(*av, "-conf+backend")) { AN(av[1]); haproxy_store_conf(h, av[1], 1); av++; continue; } if (!strcmp(*av, "-expectexit")) { h->expect_exit = strtoul(av[1], NULL, 0); av++; continue; } if (!strcmp(*av, "-start")) { haproxy_start(h); continue; } if (!strcmp(*av, "-wait")) { haproxy_wait(h); continue; } vtc_fatal(h->vl, "Unknown haproxy argument: %s", *av); } } varnish-7.5.0/bin/varnishtest/vtc_http.c000066400000000000000000001313171457605730600203410ustar00rootroot00000000000000/*- * Copyright (c) 2008-2019 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include "config.h" #include #include #include #include #include #include #include #include "vtc.h" #include "vtc_http.h" #include "vct.h" #include "vfil.h" #include "vnum.h" #include "vrnd.h" #include "vtcp.h" #include "hpack.h" extern const struct cmds http_cmds[]; /* SECTION: client-server client/server * * Client and server threads are fake HTTP entities used to test your Varnish * and VCL. They take any number of arguments, and the one that are not * recognized, assuming they don't start with '-', are treated as * specifications, laying out the actions to undertake:: * * client cNAME [...] * server sNAME [...] * * Clients and server are identified by a string that's the first argument, * clients' names start with 'c' and servers' names start with 's'. * * As the client and server commands share a good deal of arguments and * specification actions, they are grouped in this single section, specific * items will be explicitly marked as such. * * SECTION: client-server.macros Macros and automatic behaviour * * To make things easier in the general case, clients will connect by default * to a Varnish server called v1. To connect to a different Varnish server, use * '-connect ${vNAME_sock}'. * * The -vcl+backend switch of the ``varnish`` command will add all the declared * servers as backends. Be careful though, servers will by default listen to * the 127.0.0.1 IP and will pick a random port, and publish 3 macros: * sNAME_addr, sNAME_port and sNAME_sock, but only once they are started. For * 'varnish -vcl+backend' to create the vcl with the correct values, the server * must be started first. * * SECTION: client-server.args Arguments * * \-start * Start the thread in background, processing the last given * specification. * * \-wait * Block until the thread finishes. * * \-run (client only) * Equivalent to "-start -wait". * * \-repeat NUMBER * Instead of processing the specification only once, do it NUMBER times. * * \-keepalive * For repeat, do not open new connections but rather run all * iterations in the same connection * * \-break (server only) * Stop the server. * * \-listen STRING (server only) * Dictate the listening socket for the server. STRING is of the form * "IP PORT", or "/PATH/TO/SOCKET" for a Unix domain socket. In the * latter case, the path must begin with '/', and the server must be * able to create it. * * \-connect STRING (client only) * Indicate the server to connect to. STRING is also of the form * "IP PORT", or "/PATH/TO/SOCKET". As with "server -listen", a * Unix domain socket is recognized when STRING begins with a '/'. * * \-dispatch (server only, s0 only) * Normally, to keep things simple, server threads only handle one * connection at a time, but the -dispatch switch allows to accept * any number of connection and handle them following the given spec. * * However, -dispatch is only allowed for the server name "s0". * * \-proxy1 STRING (client only) * Use the PROXY protocol version 1 for this connection. STRING * is of the form "CLIENTIP:PORT SERVERIP:PORT". * * \-proxy2 STRING (client only) * Use the PROXY protocol version 2 for this connection. STRING * is of the form "CLIENTIP:PORT SERVERIP:PORT". * * SECTION: client-server.spec Specification * * It's a string, either double-quoted "like this", but most of the time * enclosed in curly brackets, allowing multilining. Write a command per line in * it, empty line are ignored, and long line can be wrapped by using a * backslash. 
For example:: * * client c1 { * txreq -url /foo \ * -hdr "bar: baz" * * rxresp * } -run */ #define ONLY_CLIENT(hp, av) \ do { \ if (hp->h2) \ vtc_fatal(hp->vl, \ "\"%s\" only possible before H/2 upgrade", \ av[0]); \ if (hp->sfd != NULL) \ vtc_fatal(hp->vl, \ "\"%s\" only possible in client", av[0]); \ } while (0) #define ONLY_SERVER(hp, av) \ do { \ if (hp->h2) \ vtc_fatal(hp->vl, \ "\"%s\" only possible before H/2 upgrade", \ av[0]); \ if (hp->sfd == NULL) \ vtc_fatal(hp->vl, \ "\"%s\" only possible in server", av[0]); \ } while (0) /* XXX: we may want to vary this */ static const char * const nl = "\r\n"; /********************************************************************** * Generate a synthetic body */ char * synth_body(const char *len, int rnd) { int i, j, k, l; char *b; AN(len); i = strtoul(len, NULL, 0); assert(i > 0); b = malloc(i + 1L); AN(b); l = k = '!'; for (j = 0; j < i; j++) { if ((j % 64) == 63) { b[j] = '\n'; k++; if (k == '~') k = '!'; l = k; } else if (rnd) { b[j] = (VRND_RandomTestable() % 95) + ' '; } else { b[j] = (char)l; if (++l == '~') l = '!'; } } b[i - 1] = '\n'; b[i] = '\0'; return (b); } /********************************************************************** * Finish and write the vsb to the fd */ static void http_write(const struct http *hp, int lvl, const char *pfx) { AZ(VSB_finish(hp->vsb)); vtc_dump(hp->vl, lvl, pfx, VSB_data(hp->vsb), VSB_len(hp->vsb)); if (VSB_tofile(hp->vsb, hp->sess->fd)) vtc_log(hp->vl, hp->fatal, "Write failed: %s", strerror(errno)); } /********************************************************************** * find header */ static char * http_find_header(char * const *hh, const char *hdr) { int n, l; char *r; l = strlen(hdr); for (n = 3; hh[n] != NULL; n++) { if (strncasecmp(hdr, hh[n], l) || hh[n][l] != ':') continue; for (r = hh[n] + l + 1; vct_issp(*r); r++) continue; return (r); } return (NULL); } /********************************************************************** * count header */ static int http_count_header(char * const *hh, const char *hdr) { int n, l, r = 0; l = strlen(hdr); for (n = 3; hh[n] != NULL; n++) { if (strncasecmp(hdr, hh[n], l) || hh[n][l] != ':') continue; r++; } return (r); } /* SECTION: client-server.spec.expect * * expect STRING1 OP STRING2 * Test if "STRING1 OP STRING2" is true, and if not, fails the test. * OP can be ==, <, <=, >, >= when STRING1 and STRING2 represent numbers * in which case it's an order operator. If STRING1 and STRING2 are * meant as strings OP is a matching operator, either == (exact match) * or ~ (regex match). * * varnishtest will first try to resolve STRING1 and STRING2 by looking * if they have special meanings, in which case, the resolved value is * use for the test. 
Note that this value can be a string representing a * number, allowing for tests such as:: * * expect req.http.x-num > 2 * * Here's the list of recognized strings, most should be obvious as they * either match VCL logic, or the txreq/txresp options: * * - remote.ip * - remote.port * - remote.path * - req.method * - req.url * - req.proto * - resp.proto * - resp.status * - resp.reason * - resp.chunklen * - req.bodylen * - req.body * - resp.bodylen * - resp.body * - req.http.NAME * - resp.http.NAME */ static const char * cmd_var_resolve(struct http *hp, char *spec) { char **hh, *hdr; if (!strcmp(spec, "remote.ip")) return (hp->rem_ip); if (!strcmp(spec, "remote.port")) return (hp->rem_port); if (!strcmp(spec, "remote.path")) return (hp->rem_path); if (!strcmp(spec, "req.method")) return (hp->req[0]); if (!strcmp(spec, "req.url")) return (hp->req[1]); if (!strcmp(spec, "req.proto")) return (hp->req[2]); if (!strcmp(spec, "resp.proto")) return (hp->resp[0]); if (!strcmp(spec, "resp.status")) return (hp->resp[1]); if (!strcmp(spec, "resp.reason")) return (hp->resp[2]); if (!strcmp(spec, "resp.chunklen")) return (hp->chunklen); if (!strcmp(spec, "req.bodylen")) return (hp->bodylen); if (!strcmp(spec, "req.body")) return (hp->body != NULL ? hp->body : spec); if (!strcmp(spec, "resp.bodylen")) return (hp->bodylen); if (!strcmp(spec, "resp.body")) return (hp->body != NULL ? hp->body : spec); if (!strncmp(spec, "req.http.", 9)) { hh = hp->req; hdr = spec + 9; } else if (!strncmp(spec, "resp.http.", 10)) { hh = hp->resp; hdr = spec + 10; } else if (!strcmp(spec, "h2.state")) { if (hp->h2) return ("true"); else return ("false"); } else return (spec); hdr = http_find_header(hh, hdr); return (hdr); } static void cmd_http_expect(CMD_ARGS) { struct http *hp; const char *lhs; char *cmp; const char *rhs; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); AZ(strcmp(av[0], "expect")); av++; AN(av[0]); AN(av[1]); AN(av[2]); AZ(av[3]); lhs = cmd_var_resolve(hp, av[0]); cmp = av[1]; rhs = cmd_var_resolve(hp, av[2]); vtc_expect(vl, av[0], lhs, cmp, av[2], rhs); } static void cmd_http_expect_pattern(CMD_ARGS) { char *p; struct http *hp; char t = '0'; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); AZ(strcmp(av[0], "expect_pattern")); av++; AZ(av[0]); for (p = hp->body; *p != '\0'; p++) { if (*p != t) vtc_fatal(hp->vl, "EXPECT PATTERN FAIL @%zd should 0x%02x is 0x%02x", (ssize_t) (p - hp->body), t, *p); t += 1; t &= ~0x08; } vtc_log(hp->vl, 4, "EXPECT PATTERN SUCCESS"); } /********************************************************************** * Split a HTTP protocol header */ static void http_splitheader(struct http *hp, int req) { char *p, *q, **hh; int n; char buf[20]; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); if (req) { memset(hp->req, 0, sizeof hp->req); hh = hp->req; } else { memset(hp->resp, 0, sizeof hp->resp); hh = hp->resp; } n = 0; p = hp->rx_b; if (*p == '\0') { vtc_log(hp->vl, 4, "No headers"); return; } /* REQ/PROTO */ while (vct_islws(*p)) p++; hh[n++] = p; while (!vct_islws(*p)) p++; AZ(vct_iscrlf(p, hp->rx_e)); *p++ = '\0'; /* URL/STATUS */ while (vct_issp(*p)) /* XXX: H space only */ p++; AZ(vct_iscrlf(p, hp->rx_e)); hh[n++] = p; while (!vct_islws(*p)) p++; if (vct_iscrlf(p, hp->rx_e)) { hh[n++] = NULL; q = p; p = vct_skipcrlf(p, hp->rx_e); *q = '\0'; } else { *p++ = '\0'; /* PROTO/MSG */ while (vct_issp(*p)) /* XXX: H space only */ p++; hh[n++] = p; while (!vct_iscrlf(p, hp->rx_e)) p++; q = p; p = vct_skipcrlf(p, hp->rx_e); *q = '\0'; } assert(n == 3); while (*p != '\0') { assert(n < MAX_HDR); if 
(vct_iscrlf(p, hp->rx_e)) break; hh[n++] = p++; while (*p != '\0' && !vct_iscrlf(p, hp->rx_e)) p++; if (*p == '\0') { break; } q = p; p = vct_skipcrlf(p, hp->rx_e); *q = '\0'; } p = vct_skipcrlf(p, hp->rx_e); assert(*p == '\0'); for (n = 0; n < 3 || hh[n] != NULL; n++) { bprintf(buf, "http[%2d] ", n); vtc_dump(hp->vl, 4, buf, hh[n], -1); } } /********************************************************************** * Receive another character */ static int http_rxchar(struct http *hp, int n, int eof) { int i; struct pollfd pfd[1]; while (n > 0) { pfd[0].fd = hp->sess->fd; pfd[0].events = POLLIN; pfd[0].revents = 0; i = poll(pfd, 1, (int)(hp->timeout * 1000)); if (i < 0 && errno == EINTR) continue; if (i == 0) { vtc_log(hp->vl, hp->fatal, "HTTP rx timeout (fd:%d %.3fs)", hp->sess->fd, hp->timeout); continue; } if (i < 0) { vtc_log(hp->vl, hp->fatal, "HTTP rx failed (fd:%d poll: %s)", hp->sess->fd, strerror(errno)); continue; } assert(i > 0); assert(hp->rx_p + n < hp->rx_e); i = read(hp->sess->fd, hp->rx_p, n); if (!(pfd[0].revents & POLLIN)) vtc_log(hp->vl, 4, "HTTP rx poll (fd:%d revents: %x n=%d, i=%d)", hp->sess->fd, pfd[0].revents, n, i); if (i == 0 && eof) return (i); if (i == 0) { vtc_log(hp->vl, hp->fatal, "HTTP rx EOF (fd:%d read: %s) %d", hp->sess->fd, strerror(errno), n); return (-1); } if (i < 0) { vtc_log(hp->vl, hp->fatal, "HTTP rx failed (fd:%d read: %s)", hp->sess->fd, strerror(errno)); return (-1); } hp->rx_p += i; *hp->rx_p = '\0'; n -= i; } return (1); } static int http_rxchunk(struct http *hp) { char *q, *old; int i; old = hp->rx_p; do { if (http_rxchar(hp, 1, 0) < 0) return (-1); } while (hp->rx_p[-1] != '\n'); vtc_dump(hp->vl, 4, "len", old, -1); i = strtoul(old, &q, 16); bprintf(hp->chunklen, "%d", i); if ((q == old) || (q == hp->rx_p) || (*q != '\0' && !vct_islws(*q))) { vtc_log(hp->vl, hp->fatal, "Chunklen fail (%02x @ %td)", (*q & 0xff), q - old); return (-1); } assert(*q == '\0' || vct_islws(*q)); hp->rx_p = old; if (i > 0) { if (http_rxchar(hp, i, 0) < 0) return (-1); vtc_dump(hp->vl, 4, "chunk", old, i); } old = hp->rx_p; if (http_rxchar(hp, 2, 0) < 0) return (-1); if (!vct_iscrlf(old, hp->rx_e)) { vtc_log(hp->vl, hp->fatal, "Chunklen without CRLF"); return (-1); } hp->rx_p = old; *hp->rx_p = '\0'; return (i); } /********************************************************************** * Swallow a HTTP message body * * max: 0 is all */ static void http_swallow_body(struct http *hp, char * const *hh, int body, int max) { const char *p, *q; int i, l, ll; l = hp->rx_p - hp->body; p = http_find_header(hh, "transfer-encoding"); q = http_find_header(hh, "content-length"); if (p != NULL && !strcasecmp(p, "chunked")) { if (q != NULL) { vtc_log(hp->vl, hp->fatal, "Both C-E: Chunked and C-L"); return; } ll = 0; while (http_rxchunk(hp) > 0) { ll = (hp->rx_p - hp->body) - l; if (max && ll >= max) break; } p = "chunked"; } else if (q != NULL) { ll = strtoul(q, NULL, 10); if (max && ll > l + max) ll = max; else ll -= l; i = http_rxchar(hp, ll, 0); if (i < 0) return; p = "c-l"; } else if (body) { ll = 0; do { i = http_rxchar(hp, 1, 1); if (i < 0) return; ll += i; if (max && ll >= max) break; } while (i > 0); p = "eof"; } else { p = "none"; ll = l = 0; } vtc_dump(hp->vl, 4, p, hp->body + l, ll); l += ll; hp->bodyl = l; bprintf(hp->bodylen, "%d", l); } /********************************************************************** * Receive a HTTP protocol header */ static void http_rxhdr(struct http *hp) { int i, s = 0; char *p; ssize_t l; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); hp->rx_p = 
hp->rx_b; *hp->rx_p = '\0'; hp->body = NULL; bprintf(hp->bodylen, "%s", ""); while (1) { p = hp->rx_p; i = http_rxchar(hp, 1, 1); if (i < 1) break; if (s == 0 && *p == '\r') s = 1; else if ((s == 0 || s == 1) && *p == '\n') s = 2; else if (s == 2 && *p == '\r') s = 3; else if ((s == 2 || s == 3) && *p == '\n') break; else s = 0; } l = hp->rx_p - hp->rx_b; vtc_dump(hp->vl, 4, "rxhdr", hp->rx_b, l); vtc_log(hp->vl, 4, "rxhdrlen = %zd", l); if (i < 1) vtc_log(hp->vl, hp->fatal, "HTTP header is incomplete"); *hp->rx_p = '\0'; hp->body = hp->rx_p; } /* SECTION: client-server.spec.rxresp * * rxresp [-no_obj] (client only) * Receive and parse a response's headers and body. If -no_obj is * present, only get the headers. */ static void cmd_http_rxresp(CMD_ARGS) { struct http *hp; int has_obj = 1; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); ONLY_CLIENT(hp, av); AZ(strcmp(av[0], "rxresp")); av++; for (; *av != NULL; av++) if (!strcmp(*av, "-no_obj")) has_obj = 0; else vtc_fatal(hp->vl, "Unknown http rxresp spec: %s\n", *av); http_rxhdr(hp); http_splitheader(hp, 0); if (http_count_header(hp->resp, "Content-Length") > 1) vtc_fatal(hp->vl, "Multiple Content-Length headers.\n"); if (!has_obj) return; if (!hp->resp[0] || !hp->resp[1]) return; if (hp->head_method) return; if (!strcmp(hp->resp[1], "200")) http_swallow_body(hp, hp->resp, 1, 0); else http_swallow_body(hp, hp->resp, 0, 0); vtc_log(hp->vl, 4, "bodylen = %s", hp->bodylen); } /* SECTION: client-server.spec.rxresphdrs * * rxresphdrs (client only) * Receive and parse a response's headers. */ static void cmd_http_rxresphdrs(CMD_ARGS) { struct http *hp; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); ONLY_CLIENT(hp, av); AZ(strcmp(av[0], "rxresphdrs")); av++; for (; *av != NULL; av++) vtc_fatal(hp->vl, "Unknown http rxresp spec: %s\n", *av); http_rxhdr(hp); http_splitheader(hp, 0); if (http_count_header(hp->resp, "Content-Length") > 1) vtc_fatal(hp->vl, "Multiple Content-Length headers.\n"); } /* SECTION: client-server.spec.gunzip * * gunzip * Gunzip the body in place. 
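 *
 *	A minimal, hypothetical client sketch (the URL, header and expected
 *	length are made up for illustration)::
 *
 *		client c1 {
 *			txreq -url /gzipped -hdr "Accept-Encoding: gzip"
 *			rxresp
 *			gunzip
 *			expect resp.bodylen == 100
 *		} -run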
*/ static void cmd_http_gunzip(CMD_ARGS) { struct http *hp; (void)av; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); vtc_gunzip(hp, hp->body, &hp->bodyl); } /********************************************************************** * Handle common arguments of a transmited request or response */ static char* const * http_tx_parse_args(char * const *av, struct vtclog *vl, struct http *hp, char *body, unsigned nohost, unsigned nodate, unsigned noserver, unsigned nouseragent) { long bodylen = 0; char *b, *c; char *nullbody; ssize_t len; int nolen = 0; int l; nullbody = body; for (; *av != NULL; av++) { if (!strcmp(*av, "-nolen")) { nolen = 1; } else if (!strcmp(*av, "-nohost")) { nohost = 1; } else if (!strcmp(*av, "-nodate")) { nodate = 1; } else if (!strcmp(*av, "-hdr")) { if (!strncasecmp(av[1], "Content-Length:", 15) || !strncasecmp(av[1], "Transfer-Encoding:", 18)) nolen = 1; if (!strncasecmp(av[1], "Host:", 5)) nohost = 1; if (!strncasecmp(av[1], "Date:", 5)) nodate = 1; if (!strncasecmp(av[1], "Server:", 7)) noserver = 1; if (!strncasecmp(av[1], "User-Agent:", 11)) nouseragent = 1; VSB_printf(hp->vsb, "%s%s", av[1], nl); av++; } else if (!strcmp(*av, "-hdrlen")) { VSB_printf(hp->vsb, "%s: ", av[1]); l = atoi(av[2]); while (l-- > 0) VSB_putc(hp->vsb, '0' + (l % 10)); VSB_printf(hp->vsb, "%s", nl); av+=2; } else break; } for (; *av != NULL; av++) { if (!strcmp(*av, "-body")) { assert(body == nullbody); REPLACE(body, av[1]); AN(body); av++; bodylen = strlen(body); for (b = body; *b != '\0'; b++) { if (*b == '\\' && b[1] == '0') { *b = '\0'; for (c = b+1; *c != '\0'; c++) { *c = c[1]; } b++; bodylen--; } } } else if (!strcmp(*av, "-bodyfrom")) { assert(body == nullbody); free(body); body = VFIL_readfile(NULL, av[1], &len); AN(body); assert(len < INT_MAX); bodylen = len; av++; } else if (!strcmp(*av, "-bodylen")) { assert(body == nullbody); free(body); body = synth_body(av[1], 0); bodylen = strlen(body); av++; } else if (!strncmp(*av, "-gzip", 5)) { l = vtc_gzip_cmd(hp, av, &body, &bodylen); if (l == 0) break; av += l; if (l > 1) VSB_printf(hp->vsb, "Content-Encoding: gzip%s", nl); } else break; } if (!nohost) { VSB_cat(hp->vsb, "Host: "); macro_cat(vl, hp->vsb, "localhost", NULL); VSB_cat(hp->vsb, nl); } if (!nodate) { VSB_cat(hp->vsb, "Date: "); macro_cat(vl, hp->vsb, "date", NULL); VSB_cat(hp->vsb, nl); } if (!noserver) VSB_printf(hp->vsb, "Server: %s%s", hp->sess->name, nl); if (!nouseragent) VSB_printf(hp->vsb, "User-Agent: %s%s", hp->sess->name, nl); if (body != NULL && !nolen) VSB_printf(hp->vsb, "Content-Length: %ld%s", bodylen, nl); VSB_cat(hp->vsb, nl); if (body != NULL) { VSB_bcat(hp->vsb, body, bodylen); free(body); } return (av); } /* SECTION: client-server.spec.txreq * * txreq|txresp [...] * Send a minimal request or response, but overload it if necessary. * * txreq is client-specific and txresp is server-specific. * * The only thing different between a request and a response, apart * from who can send them is that the first line (request line vs * status line), so all the options are prety much the same. * * \-method STRING (txreq only) * What method to use (default: "GET"). * * \-req STRING (txreq only) * Alias for -method. * * \-url STRING (txreq only) * What location to use (default "/"). * * \-proto STRING * What protocol use in the status line. * (default: "HTTP/1.1"). * * \-status NUMBER (txresp only) * What status code to return (default 200). * * \-reason STRING (txresp only) * What message to put in the status line (default: "OK"). 
* * \-noserver (txresp only) * Don't include a Server header with the id of the server. * * \-nouseragent (txreq only) * Don't include a User-Agent header with the id of the client. * * These three switches can appear in any order but must come before the * following ones. * * \-nohost * Don't include a Host header in the request. Also Implied * by the addition of a Host header with ``-hdr``. * * \-nolen * Don't include a Content-Length header. Also implied by the * addition of a Content-Length or Transfer-Encoding header * with ``-hdr``. * * \-nodate * Don't include a Date header in the response. Also implied * by the addition of a Date header with ``-hdr``. * * \-hdr STRING * Add STRING as a header, it must follow this format: * "name: value". It can be called multiple times. * * \-hdrlen STRING NUMBER * Add STRING as a header with NUMBER bytes of content. * * You can then use the arguments related to the body: * * \-body STRING * Input STRING as body. * * \-bodyfrom FILE * Same as -body but content is read from FILE. * * \-bodylen NUMBER * Generate and input a body that is NUMBER bytes-long. * * \-gziplevel NUMBER * Set the gzip level (call it before any of the other gzip * switches). * * \-gzipresidual NUMBER * Add extra gzip bits. You should never need it. * * \-gzipbody STRING * Gzip STRING and send it as body. * * \-gziplen NUMBER * Combine -bodylen and -gzipbody: generate a string of length * NUMBER, gzip it and send as body. */ /********************************************************************** * Transmit a response */ static void cmd_http_txresp(CMD_ARGS) { struct http *hp; const char *proto = "HTTP/1.1"; const char *status = "200"; const char *reason = "OK"; char* body = NULL; unsigned noserver = 0; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); ONLY_SERVER(hp, av); AZ(strcmp(av[0], "txresp")); av++; VSB_clear(hp->vsb); for (; *av != NULL; av++) { if (!strcmp(*av, "-proto")) { proto = av[1]; av++; } else if (!strcmp(*av, "-status")) { status = av[1]; av++; } else if (!strcmp(*av, "-reason")) { reason = av[1]; av++; continue; } else if (!strcmp(*av, "-noserver")) { noserver = 1; continue; } else break; } VSB_printf(hp->vsb, "%s %s %s%s", proto, status, reason, nl); /* send a "Content-Length: 0" header unless something else happens */ REPLACE(body, ""); av = http_tx_parse_args(av, vl, hp, body, 1, 0, noserver, 1); if (*av != NULL) vtc_fatal(hp->vl, "Unknown http txresp spec: %s\n", *av); http_write(hp, 4, "txresp"); } static void cmd_http_upgrade(CMD_ARGS) { char *h; struct http *hp; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); ONLY_SERVER(hp, av); AN(hp->sfd); h = http_find_header(hp->req, "Upgrade"); if (!h || strcmp(h, "h2c")) vtc_fatal(vl, "Req misses \"Upgrade: h2c\" header"); h = http_find_header(hp->req, "Connection"); if (!h || strcmp(h, "Upgrade, HTTP2-Settings")) vtc_fatal(vl, "Req misses \"Connection: " "Upgrade, HTTP2-Settings\" header"); h = http_find_header(hp->req, "HTTP2-Settings"); if (!h) vtc_fatal(vl, "Req misses \"HTTP2-Settings\" header"); parse_string(vl, hp, "txresp -status 101" " -hdr \"Connection: Upgrade\"" " -hdr \"Upgrade: h2c\"\n" ); b64_settings(hp, h); parse_string(vl, hp, "rxpri\n" "stream 0 {\n" " txsettings\n" " rxsettings\n" " txsettings -ack\n" " rxsettings\n" " expect settings.ack == true\n" "} -start\n" ); } /********************************************************************** * Receive a request */ /* SECTION: client-server.spec.rxreq * * rxreq (server only) * Receive and parse a request's headers and body. 
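 *
 *	An illustrative server spec built around rxreq (the URL and body
 *	are arbitrary)::
 *
 *		server s1 {
 *			rxreq
 *			expect req.url == "/foo"
 *			txresp -body "bar"
 *		} -start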
*/ static void cmd_http_rxreq(CMD_ARGS) { struct http *hp; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); ONLY_SERVER(hp, av); AZ(strcmp(av[0], "rxreq")); av++; for (; *av != NULL; av++) vtc_fatal(vl, "Unknown http rxreq spec: %s\n", *av); http_rxhdr(hp); http_splitheader(hp, 1); if (http_count_header(hp->req, "Content-Length") > 1) vtc_fatal(vl, "Multiple Content-Length headers.\n"); http_swallow_body(hp, hp->req, 0, 0); vtc_log(vl, 4, "bodylen = %s", hp->bodylen); } /* SECTION: client-server.spec.rxreqhdrs * * rxreqhdrs (server only) * Receive and parse a request's headers (but not the body). */ static void cmd_http_rxreqhdrs(CMD_ARGS) { struct http *hp; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); AZ(strcmp(av[0], "rxreqhdrs")); av++; for (; *av != NULL; av++) vtc_fatal(hp->vl, "Unknown http rxreq spec: %s\n", *av); http_rxhdr(hp); http_splitheader(hp, 1); if (http_count_header(hp->req, "Content-Length") > 1) vtc_fatal(hp->vl, "Multiple Content-Length headers.\n"); } /* SECTION: client-server.spec.rxreqbody * * rxreqbody (server only) * Receive a request's body. */ static void cmd_http_rxreqbody(CMD_ARGS) { struct http *hp; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); ONLY_SERVER(hp, av); AZ(strcmp(av[0], "rxreqbody")); av++; for (; *av != NULL; av++) vtc_fatal(hp->vl, "Unknown http rxreq spec: %s\n", *av); http_swallow_body(hp, hp->req, 0, 0); vtc_log(hp->vl, 4, "bodylen = %s", hp->bodylen); } /* SECTION: client-server.spec.rxrespbody * * rxrespbody (client only) * Receive (part of) a response's body. * * -max : max length of this receive, 0 for all */ static void cmd_http_rxrespbody(CMD_ARGS) { struct http *hp; int max = 0; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); ONLY_CLIENT(hp, av); AZ(strcmp(av[0], "rxrespbody")); av++; for (; *av != NULL; av++) if (!strcmp(*av, "-max")) { max = atoi(av[1]); av++; } else vtc_fatal(hp->vl, "Unknown http rxrespbody spec: %s\n", *av); http_swallow_body(hp, hp->resp, 1, max); vtc_log(hp->vl, 4, "bodylen = %s", hp->bodylen); } /* SECTION: client-server.spec.rxchunk * * rxchunk * Receive an HTTP chunk. 
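 *
 *	Sketch of a client reading one chunk explicitly (this assumes the
 *	server replies with chunked encoding and a 10 byte first chunk)::
 *
 *		client c1 {
 *			txreq
 *			rxresp -no_obj
 *			rxchunk
 *			expect resp.chunklen == 10
 *		} -run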
*/ static void cmd_http_rxchunk(CMD_ARGS) { struct http *hp; int ll, i; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); ONLY_CLIENT(hp, av); i = http_rxchunk(hp); if (i == 0) { ll = hp->rx_p - hp->body; hp->bodyl = ll; bprintf(hp->bodylen, "%d", ll); vtc_log(hp->vl, 4, "bodylen = %s", hp->bodylen); } } /********************************************************************** * Transmit a request */ static void cmd_http_txreq(CMD_ARGS) { struct http *hp; const char *req = "GET"; const char *url = "/"; const char *proto = "HTTP/1.1"; const char *up = NULL; unsigned nohost; unsigned nouseragent = 0; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); ONLY_CLIENT(hp, av); AZ(strcmp(av[0], "txreq")); av++; VSB_clear(hp->vsb); hp->head_method = 0; for (; *av != NULL; av++) { if (!strcmp(*av, "-url")) { url = av[1]; av++; } else if (!strcmp(*av, "-proto")) { proto = av[1]; av++; } else if (!strcmp(*av, "-method") || !strcmp(*av, "-req")) { req = av[1]; hp->head_method = !strcmp(av[1], "HEAD") ; av++; } else if (!hp->sfd && !strcmp(*av, "-up")) { up = av[1]; av++; } else if (!strcmp(*av, "-nouseragent")) { nouseragent = 1; } else break; } VSB_printf(hp->vsb, "%s %s %s%s", req, url, proto, nl); if (up) VSB_printf(hp->vsb, "Connection: Upgrade, HTTP2-Settings%s" "Upgrade: h2c%s" "HTTP2-Settings: %s%s", nl, nl, up, nl); nohost = strcmp(proto, "HTTP/1.1") != 0; av = http_tx_parse_args(av, vl, hp, NULL, nohost, 1, 1, nouseragent); if (*av != NULL) vtc_fatal(hp->vl, "Unknown http txreq spec: %s\n", *av); http_write(hp, 4, "txreq"); if (up) { parse_string(vl, hp, "rxresp\n" "expect resp.status == 101\n" "expect resp.http.connection == Upgrade\n" "expect resp.http.upgrade == h2c\n" "txpri\n" ); b64_settings(hp, up); parse_string(vl, hp, "stream 0 {\n" " txsettings\n" " rxsettings\n" " txsettings -ack\n" " rxsettings\n" " expect settings.ack == true" "} -start\n" ); } } /* SECTION: client-server.spec.recv * * recv NUMBER * Read NUMBER bytes from the connection. */ static void cmd_http_recv(CMD_ARGS) { struct http *hp; int i, n; char u[32]; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); AN(av[1]); AZ(av[2]); n = strtoul(av[1], NULL, 0); while (n > 0) { i = read(hp->sess->fd, u, n > 32 ? 32 : n); if (i > 0) vtc_dump(hp->vl, 4, "recv", u, i); else vtc_log(hp->vl, hp->fatal, "recv() got %d (%s)", i, strerror(errno)); n -= i; } } /* SECTION: client-server.spec.send * * send STRING * Push STRING on the connection. */ static void cmd_http_send(CMD_ARGS) { struct http *hp; int i; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); AN(av[1]); AZ(av[2]); vtc_dump(hp->vl, 4, "send", av[1], -1); i = write(hp->sess->fd, av[1], strlen(av[1])); if (i != strlen(av[1])) vtc_log(hp->vl, hp->fatal, "Write error in http_send(): %s", strerror(errno)); } /* SECTION: client-server.spec.send_n * * send_n NUMBER STRING * Write STRING on the socket NUMBER times. */ static void cmd_http_send_n(CMD_ARGS) { struct http *hp; int i, n, l; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); AN(av[1]); AN(av[2]); AZ(av[3]); n = strtoul(av[1], NULL, 0); vtc_dump(hp->vl, 4, "send_n", av[2], -1); l = strlen(av[2]); while (n--) { i = write(hp->sess->fd, av[2], l); if (i != l) vtc_log(hp->vl, hp->fatal, "Write error in http_send(): %s", strerror(errno)); } } /* SECTION: client-server.spec.send_urgent * * send_urgent STRING * Send string as TCP OOB urgent data. You will never need this. 
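 *
 *	Purely as a hypothetical illustration (real tests are unlikely to
 *	ever need OOB data)::
 *
 *		client c1 {
 *			txreq
 *			send_urgent "!"
 *			rxresp
 *		} -run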
*/ static void cmd_http_send_urgent(CMD_ARGS) { struct http *hp; int i; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); AN(av[1]); AZ(av[2]); vtc_dump(hp->vl, 4, "send_urgent", av[1], -1); i = send(hp->sess->fd, av[1], strlen(av[1]), MSG_OOB); if (i != strlen(av[1])) vtc_log(hp->vl, hp->fatal, "Write error in http_send_urgent(): %s", strerror(errno)); } /* SECTION: client-server.spec.sendhex * * sendhex STRING * Send bytes as described by STRING. STRING should consist of hex pairs * possibly separated by whitespace or newlines. For example: * "0F EE a5 3df2". */ static void cmd_http_sendhex(CMD_ARGS) { struct vsb *vsb; struct http *hp; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); AN(av[1]); AZ(av[2]); vsb = vtc_hex_to_bin(hp->vl, av[1]); assert(VSB_len(vsb) >= 0); vtc_hexdump(hp->vl, 4, "sendhex", VSB_data(vsb), VSB_len(vsb)); if (VSB_tofile(vsb, hp->sess->fd)) vtc_log(hp->vl, hp->fatal, "Write failed: %s", strerror(errno)); VSB_destroy(&vsb); } /* SECTION: client-server.spec.chunked * * chunked STRING * Send STRING as chunked encoding. */ static void cmd_http_chunked(CMD_ARGS) { struct http *hp; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); AN(av[1]); AZ(av[2]); VSB_clear(hp->vsb); VSB_printf(hp->vsb, "%jx%s%s%s", (uintmax_t)strlen(av[1]), nl, av[1], nl); http_write(hp, 4, "chunked"); } /* SECTION: client-server.spec.chunkedlen * * chunkedlen NUMBER * Do as ``chunked`` except that the string will be generated * for you, with a length of NUMBER characters. */ static void cmd_http_chunkedlen(CMD_ARGS) { unsigned len; unsigned u, v; char buf[16384]; struct http *hp; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); AN(av[1]); AZ(av[2]); VSB_clear(hp->vsb); len = atoi(av[1]); if (len == 0) { VSB_printf(hp->vsb, "0%s%s", nl, nl); } else { for (u = 0; u < sizeof buf; u++) buf[u] = (u & 7) + '0'; VSB_printf(hp->vsb, "%x%s", len, nl); for (u = 0; u < len; u += v) { v = vmin_t(unsigned, len - u, sizeof buf); VSB_bcat(hp->vsb, buf, v); } VSB_printf(hp->vsb, "%s", nl); } http_write(hp, 4, "chunked"); } /* SECTION: client-server.spec.timeout * * timeout NUMBER * Set the TCP timeout for this entity. */ static void cmd_http_timeout(CMD_ARGS) { struct http *hp; double d; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); AN(av[1]); AZ(av[2]); d = VNUM(av[1]); if (isnan(d)) vtc_fatal(vl, "timeout is not a number (%s)", av[1]); hp->timeout = d; } /* SECTION: client-server.spec.expect_close * * expect_close * Reads from the connection, expecting nothing to read but an EOF. */ static void cmd_http_expect_close(CMD_ARGS) { struct http *hp; struct pollfd fds[1]; char c; int i; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); AZ(av[1]); vtc_log(vl, 4, "Expecting close (fd = %d)", hp->sess->fd); if (hp->h2) stop_h2(hp); while (1) { fds[0].fd = hp->sess->fd; fds[0].events = POLLIN; fds[0].revents = 0; i = poll(fds, 1, (int)(hp->timeout * 1000)); if (i < 0 && errno == EINTR) continue; if (i == 0) vtc_log(vl, hp->fatal, "Expected close: timeout"); if (i != 1 || !(fds[0].revents & (POLLIN|POLLERR|POLLHUP))) vtc_log(vl, hp->fatal, "Expected close: poll = %d, revents = 0x%x", i, fds[0].revents); i = read(hp->sess->fd, &c, 1); if (i <= 0 && VTCP_Check(i)) break; if (i == 1 && vct_islws(c)) continue; vtc_log(vl, hp->fatal, "Expecting close: read = %d, c = 0x%02x", i, c); } vtc_log(vl, 4, "fd=%d EOF, as expected", hp->sess->fd); } /* SECTION: client-server.spec.close * * close (server only) * Close the connection. 
Note that if operating in HTTP/2 mode no * extra (GOAWAY) frame is sent, it's simply a TCP close. */ static void cmd_http_close(CMD_ARGS) { struct http *hp; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); ONLY_SERVER(hp, av); AZ(av[1]); assert(hp->sfd != NULL); assert(*hp->sfd >= 0); if (hp->h2) stop_h2(hp); VTCP_close(&hp->sess->fd); vtc_log(vl, 4, "Closed"); } /* SECTION: client-server.spec.accept * * accept (server only) * Close the current connection, if any, and accept a new one. Note * that this new connection is HTTP/1.x. */ static void cmd_http_accept(CMD_ARGS) { struct http *hp; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); ONLY_SERVER(hp, av); AZ(av[1]); assert(hp->sfd != NULL); assert(*hp->sfd >= 0); if (hp->h2) stop_h2(hp); if (hp->sess->fd >= 0) VTCP_close(&hp->sess->fd); vtc_log(vl, 4, "Accepting"); hp->sess->fd = accept(*hp->sfd, NULL, NULL); if (hp->sess->fd < 0) vtc_log(vl, hp->fatal, "Accepted failed: %s", strerror(errno)); vtc_log(vl, 3, "Accepted socket fd is %d", hp->sess->fd); } /* SECTION: client-server.spec.fatal * * fatal|non_fatal * Control whether a failure of this entity should stop the test. */ static void cmd_http_fatal(CMD_ARGS) { struct http *hp; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); (void)vl; AZ(av[1]); if (!strcmp(av[0], "fatal")) { hp->fatal = 0; } else { assert(!strcmp(av[0], "non_fatal")); hp->fatal = -1; } } #define cmd_http_non_fatal cmd_http_fatal static const char PREFACE[24] = { 0x50, 0x52, 0x49, 0x20, 0x2a, 0x20, 0x48, 0x54, 0x54, 0x50, 0x2f, 0x32, 0x2e, 0x30, 0x0d, 0x0a, 0x0d, 0x0a, 0x53, 0x4d, 0x0d, 0x0a, 0x0d, 0x0a }; /* SECTION: client-server.spec.txpri * * txpri (client only) * Send an HTTP/2 preface ("PRI * HTTP/2.0\\r\\n\\r\\nSM\\r\\n\\r\\n") * and set client to HTTP/2. */ static void cmd_http_txpri(CMD_ARGS) { size_t l; struct http *hp; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); ONLY_CLIENT(hp, av); vtc_dump(hp->vl, 4, "txpri", PREFACE, sizeof(PREFACE)); /* Dribble out the preface */ l = write(hp->sess->fd, PREFACE, 18); if (l != 18) vtc_log(vl, hp->fatal, "Write failed: (%zd vs %zd) %s", l, sizeof(PREFACE), strerror(errno)); usleep(10000); l = write(hp->sess->fd, PREFACE + 18, sizeof(PREFACE) - 18); if (l != sizeof(PREFACE) - 18) vtc_log(vl, hp->fatal, "Write failed: (%zd vs %zd) %s", l, sizeof(PREFACE), strerror(errno)); start_h2(hp); AN(hp->h2); } /* SECTION: client-server.spec.rxpri * * rxpri (server only) * Receive a preface. If valid set the server to HTTP/2, abort * otherwise. */ static void cmd_http_rxpri(CMD_ARGS) { struct http *hp; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); ONLY_SERVER(hp, av); hp->rx_p = hp->rx_b; if (!http_rxchar(hp, sizeof(PREFACE), 0)) vtc_fatal(vl, "Couldn't retrieve connection preface"); if (memcmp(hp->rx_b, PREFACE, sizeof(PREFACE))) vtc_fatal(vl, "Received invalid preface\n"); start_h2(hp); AN(hp->h2); } /* SECTION: client-server.spec.settings * * settings -dectbl INT * Force internal HTTP/2 settings to certain values. Currently only * support setting the decoding table size. 
*/ static void cmd_http_settings(CMD_ARGS) { uint32_t n; char *p; struct http *hp; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); if (!hp->h2) vtc_fatal(hp->vl, "Only possible in H/2 mode"); CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); for (; *av != NULL; av++) { if (!strcmp(*av, "-dectbl")) { n = strtoul(av[1], &p, 0); if (*p != '\0') vtc_fatal(hp->vl, "-dectbl takes an integer as " "argument (found %s)", av[1]); assert(HPK_ResizeTbl(hp->decctx, n) != hpk_err); av++; } else vtc_fatal(vl, "Unknown settings spec: %s\n", *av); } } static void cmd_http_stream(CMD_ARGS) { struct http *hp; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); if (!hp->h2) { vtc_log(hp->vl, 4, "Not in H/2 mode, do what's needed"); if (hp->sfd) parse_string(vl, hp, "rxpri"); else parse_string(vl, hp, "txpri"); parse_string(vl, hp, "stream 0 {\n" " txsettings\n" " rxsettings\n" " txsettings -ack\n" " rxsettings\n" " expect settings.ack == true" "} -run\n" ); } cmd_stream(av, hp, vl); } /* SECTION: client-server.spec.write_body * * write_body STRING * Write the body of a request or a response to a file. By using the * shell command, higher-level checks on the body can be performed * (eg. XML, JSON, ...) provided that such checks can be delegated * to an external program. */ static void cmd_http_write_body(CMD_ARGS) { struct http *hp; (void)vl; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); AN(av[0]); AN(av[1]); AZ(av[2]); AZ(strcmp(av[0], "write_body")); if (VFIL_writefile(NULL, av[1], hp->body, hp->bodyl) != 0) vtc_fatal(hp->vl, "failed to write body: %s (%d)", strerror(errno), errno); } /********************************************************************** * Execute HTTP specifications */ const struct cmds http_cmds[] = { #define CMD_HTTP(n) { #n, cmd_http_##n }, /* session */ CMD_HTTP(accept) CMD_HTTP(close) CMD_HTTP(recv) CMD_HTTP(send) CMD_HTTP(send_n) CMD_HTTP(send_urgent) CMD_HTTP(sendhex) CMD_HTTP(timeout) /* spec */ CMD_HTTP(fatal) CMD_HTTP(non_fatal) /* body */ CMD_HTTP(gunzip) CMD_HTTP(write_body) /* HTTP/1.x */ CMD_HTTP(chunked) CMD_HTTP(chunkedlen) CMD_HTTP(rxchunk) /* HTTP/2 */ CMD_HTTP(stream) CMD_HTTP(settings) /* client */ CMD_HTTP(rxresp) CMD_HTTP(rxrespbody) CMD_HTTP(rxresphdrs) CMD_HTTP(txpri) CMD_HTTP(txreq) /* server */ CMD_HTTP(rxpri) CMD_HTTP(rxreq) CMD_HTTP(rxreqbody) CMD_HTTP(rxreqhdrs) CMD_HTTP(txresp) CMD_HTTP(upgrade) /* expect */ CMD_HTTP(expect) CMD_HTTP(expect_close) CMD_HTTP(expect_pattern) #undef CMD_HTTP { NULL, NULL } }; static void http_process_cleanup(void *arg) { struct http *hp; CAST_OBJ_NOTNULL(hp, arg, HTTP_MAGIC); if (hp->h2) stop_h2(hp); VSB_destroy(&hp->vsb); free(hp->rx_b); free(hp->rem_ip); free(hp->rem_port); free(hp->rem_path); FREE_OBJ(hp); } int http_process(struct vtclog *vl, struct vtc_sess *vsp, const char *spec, int sock, int *sfd, const char *addr, int rcvbuf) { struct http *hp; int retval, oldbuf; socklen_t intlen = sizeof(int); (void)sfd; ALLOC_OBJ(hp, HTTP_MAGIC); AN(hp); hp->sess = vsp; hp->sess->fd = sock; hp->timeout = vtc_maxdur * .5; if (rcvbuf) { // XXX setsockopt() too late on SunOS // https://github.com/varnishcache/varnish-cache/pull/2980#issuecomment-486214661 hp->rcvbuf = rcvbuf; oldbuf = 0; AZ(getsockopt(hp->sess->fd, SOL_SOCKET, SO_RCVBUF, &oldbuf, &intlen)); AZ(setsockopt(hp->sess->fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, intlen)); AZ(getsockopt(hp->sess->fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &intlen)); vtc_log(vl, 3, "-rcvbuf fd=%d old=%d new=%d actual=%d", hp->sess->fd, oldbuf, hp->rcvbuf, rcvbuf); } hp->nrxbuf = 2048*1024; hp->rx_b = malloc(hp->nrxbuf); AN(hp->rx_b); hp->rx_e = 
hp->rx_b + hp->nrxbuf; hp->rx_p = hp->rx_b; *hp->rx_p = '\0'; hp->vsb = VSB_new_auto(); AN(hp->vsb); hp->sfd = sfd; hp->rem_ip = malloc(VTCP_ADDRBUFSIZE); AN(hp->rem_ip); hp->rem_port = malloc(VTCP_PORTBUFSIZE); AN(hp->rem_port); hp->vl = vl; vtc_log_set_cmd(hp->vl, http_cmds); hp->gziplevel = 0; hp->gzipresidual = -1; if (*addr != '/') { VTCP_hisname(sock, hp->rem_ip, VTCP_ADDRBUFSIZE, hp->rem_port, VTCP_PORTBUFSIZE); hp->rem_path = NULL; } else { strcpy(hp->rem_ip, "0.0.0.0"); strcpy(hp->rem_port, "0"); hp->rem_path = strdup(addr); } /* XXX: After an upgrade to HTTP/2 the cleanup of a server that is * not -wait'ed before the test resets is subject to a race where the * cleanup does not happen, so ASAN reports leaks despite the push * of a cleanup handler. To easily reproduce, remove the server wait * from a02022.vtc and run with ASAN enabled. */ pthread_cleanup_push(http_process_cleanup, hp); parse_string(vl, hp, spec); retval = hp->sess->fd; pthread_cleanup_pop(0); http_process_cleanup(hp); return (retval); } /********************************************************************** * Magic test routine * * This function brute-forces some short strings through gzip(9) to * find candidates for all possible 8 bit positions of the stopbit. * * Here is some good short output strings: * * 0 184 * 1 257 <1ea86e6cf31bf4ec3d7a86> * 2 106 <10> * 3 163 * 4 180 <71c5d18ec5d5d1> * 5 189 <39886d28a6d2988> * 6 118 <80000> * 7 151 <386811868> * */ #if 0 void xxx(void); void xxx(void) { z_stream vz; int n; char ibuf[200]; char obuf[200]; int fl[8]; int i, j; for (n = 0; n < 8; n++) fl[n] = 9999; memset(&vz, 0, sizeof vz); for (n = 0; n < 999999999; n++) { *ibuf = 0; for (j = 0; j < 7; j++) { snprintf(strchr(ibuf, 0), 5, "%x", (unsigned)VRND_RandomTestable() & 0xffff); vz.next_in = TRUST_ME(ibuf); vz.avail_in = strlen(ibuf); vz.next_out = TRUST_ME(obuf); vz.avail_out = sizeof obuf; assert(Z_OK == deflateInit2(&vz, 9, Z_DEFLATED, 31, 9, Z_DEFAULT_STRATEGY)); assert(Z_STREAM_END == deflate(&vz, Z_FINISH)); i = vz.stop_bit & 7; if (fl[i] > strlen(ibuf)) { printf("%d %jd <%s>\n", i, vz.stop_bit, ibuf); fl[i] = strlen(ibuf); } assert(Z_OK == deflateEnd(&vz)); } } printf("FOO\n"); } #endif varnish-7.5.0/bin/varnishtest/vtc_http.h000066400000000000000000000051021457605730600203360ustar00rootroot00000000000000/*- * Copyright (c) 2008-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #define MAX_HDR 64 struct vtc_sess { unsigned magic; #define VTC_SESS_MAGIC 0x932bd565 struct vtclog *vl; char *name; int repeat; int keepalive; int fd; ssize_t rcvbuf; }; struct h2_window { uint64_t init; int64_t size; }; struct http { unsigned magic; #define HTTP_MAGIC 0x2f02169c int *sfd; struct vtc_sess *sess; vtim_dur timeout; struct vtclog *vl; struct vsb *vsb; int rcvbuf; int nrxbuf; char *rx_b; char *rx_p; char *rx_e; char *rem_ip; char *rem_port; char *rem_path; char *body; long bodyl; char bodylen[20]; char chunklen[20]; char *req[MAX_HDR]; char *resp[MAX_HDR]; int gziplevel; int gzipresidual; int head_method; int fatal; /* H/2 */ unsigned h2; int wf; pthread_t tp; VTAILQ_HEAD(, stream) streams; unsigned last_stream; pthread_mutex_t mtx; pthread_cond_t cond; struct hpk_ctx *encctx; struct hpk_ctx *decctx; struct h2_window h2_win_self[1]; struct h2_window h2_win_peer[1]; }; int http_process(struct vtclog *vl, struct vtc_sess *vsp, const char *spec, int sock, int *sfd, const char *addr, int rcvbuf); varnish-7.5.0/bin/varnishtest/vtc_http2.c000066400000000000000000002050351457605730600204220ustar00rootroot00000000000000/*- * Copyright (c) 2008-2016 Varnish Software AS * All rights reserved. * * Author: Guillaume Quintard * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include #include #include #include #include #include #include #include #include #include "vtc.h" #include "vtc_http.h" #include "vfil.h" #include "hpack.h" #include "vend.h" #define ERR_MAX 13 #define BUF_SIZE (1024*2048) static const char *const h2_errs[] = { #define H2_ERROR(n,v,sc,g,r,t) [v] = #n, #include NULL }; static const char *const h2_types[] = { #define H2_FRAME(l,u,t,f,...) 
[t] = #u, #include NULL }; static const char * const h2_settings[] = { [0] = "unknown", #define H2_SETTING(U,l,v,...) [v] = #U, #include NULL }; enum h2_settings_e { #define H2_SETTING(U,l,v,...) SETTINGS_##U = v, #include SETTINGS_MAX }; enum h2_type_e { #define H2_FRAME(l,u,t,f,...) TYPE_##u = t, #include TYPE_MAX }; //lint -save -e849 Same enum value enum { ACK = 0x1, END_STREAM = 0x1, PADDED = 0x8, END_HEADERS = 0x4, PRIORITY = 0x20, }; //lint -restore struct stream { unsigned magic; #define STREAM_MAGIC 0x63f1fac2 uint32_t id; struct vtclog *vl; char *spec; char *name; VTAILQ_ENTRY(stream) list; unsigned running; pthread_cond_t cond; struct frame *frame; pthread_t tp; struct http *hp; int64_t win_self; int64_t win_peer; int wf; VTAILQ_HEAD(, frame) fq; char *body; long bodylen; struct hpk_hdr req[MAX_HDR]; struct hpk_hdr resp[MAX_HDR]; int dependency; int weight; }; static void clean_headers(struct hpk_hdr *h) { unsigned n = MAX_HDR; while (h->t && n > 0) { if (h->key.len) free(h->key.ptr); if (h->value.len) free(h->value.ptr); memset(h, 0, sizeof(*h)); h++; n--; } } #define ONLY_H2_CLIENT(hp, av) \ do { \ if (hp->sfd != NULL) \ vtc_fatal(s->vl, \ "\"%s\" only possible in client", av[0]); \ } while (0) #define ONLY_H2_SERVER(hp, av) \ do { \ if (hp->sfd == NULL) \ vtc_fatal(s->vl, \ "\"%s\" only possible in server", av[0]); \ } while (0) static void http_write(const struct http *hp, int lvl, const char *buf, int s, const char *pfx) { ssize_t l; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); AN(buf); AN(pfx); vtc_dump(hp->vl, lvl, pfx, buf, s); l = write(hp->sess->fd, buf, s); if (l != s) vtc_log(hp->vl, hp->fatal, "Write failed: (%zd vs %d) %s", l, s, strerror(errno)); } static int get_bytes(const struct http *hp, char *buf, int n) { int i; struct pollfd pfd[1]; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); AN(buf); while (n > 0) { pfd[0].fd = hp->sess->fd; pfd[0].events = POLLIN; pfd[0].revents = 0; i = poll(pfd, 1, (int)(hp->timeout * 1000)); if (i < 0 && errno == EINTR) continue; if (i == 0) vtc_log(hp->vl, 3, "HTTP2 rx timeout (fd:%d %.3fs)", hp->sess->fd, hp->timeout); if (i < 0) vtc_log(hp->vl, 3, "HTTP2 rx failed (fd:%d poll: %s)", hp->sess->fd, strerror(errno)); if (i <= 0) return (i); i = read(hp->sess->fd, buf, n); if (!(pfd[0].revents & POLLIN)) vtc_log(hp->vl, 4, "HTTP2 rx poll (fd:%d revents: %x n=%d, i=%d)", hp->sess->fd, pfd[0].revents, n, i); if (i == 0) vtc_log(hp->vl, 3, "HTTP2 rx EOF (fd:%d read: %s)", hp->sess->fd, strerror(errno)); if (i < 0) vtc_log(hp->vl, 3, "HTTP2 rx failed (fd:%d read: %s)", hp->sess->fd, strerror(errno)); if (i <= 0) return (i); n -= i; } return (1); } VTAILQ_HEAD(fq_head, frame); struct frame { unsigned magic; #define FRAME_MAGIC 0x5dd3ec4 uint32_t size; uint32_t stid; uint8_t type; uint8_t flags; char *data; VTAILQ_ENTRY(frame) list; union { struct { uint32_t stream; uint8_t exclusive; uint8_t weight; } prio; uint32_t rst_err; double settings[SETTINGS_MAX+1]; struct { char data[9]; int ack; } ping; struct { uint32_t err; uint32_t stream; char *debug; } goaway; uint32_t winup_size; uint32_t promised; uint8_t padded; } md; }; static void readFrameHeader(struct frame *f, const char *buf) { CHECK_OBJ_NOTNULL(f, FRAME_MAGIC); AN(buf); f->size = (unsigned char)buf[0] << 16; f->size += (unsigned char)buf[1] << 8; f->size += (unsigned char)buf[2]; f->type = (unsigned char)buf[3]; f->flags = (unsigned char)buf[4]; f->stid = vbe32dec(buf+5); } static void writeFrameHeader(char *buf, const struct frame *f) { CHECK_OBJ_NOTNULL(f, FRAME_MAGIC); AN(buf); buf[0] = (f->size >> 
16) & 0xff; buf[1] = (f->size >> 8) & 0xff; buf[2] = (f->size ) & 0xff; buf[3] = f->type; buf[4] = f->flags; vbe32enc(buf + 5, f->stid); } #define INIT_FRAME(f, ty, sz, id, fl) \ do { \ f.magic = FRAME_MAGIC; \ f.type = TYPE_ ## ty; \ f.size = sz; \ f.stid = id; \ f.flags = fl; \ f.data = NULL; \ } while(0) static void replace_frame(struct frame **fp, struct frame *new) { struct frame *old; AN(fp); CHECK_OBJ_ORNULL(new, FRAME_MAGIC); old = *fp; *fp = new; if (old == NULL) return; CHECK_OBJ(old, FRAME_MAGIC); if (old->type == TYPE_GOAWAY) free(old->md.goaway.debug); free(old->data); FREE_OBJ(old); } static void clean_frame(struct frame **fp) { replace_frame(fp, NULL); } static void write_frame(struct stream *sp, const struct frame *f, const unsigned lock) { struct http *hp; ssize_t l; char hdr[9]; CHECK_OBJ_NOTNULL(sp, STREAM_MAGIC); hp = sp->hp; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); CHECK_OBJ_NOTNULL(f, FRAME_MAGIC); writeFrameHeader(hdr, f); vtc_log(sp->vl, 3, "tx: stream: %d, type: %s (%d), flags: 0x%02x, size: %d", f->stid, f->type < TYPE_MAX ? h2_types[f->type] : "?", f->type, f->flags, f->size); if (f->type == TYPE_DATA) { sp->win_peer -= f->size; hp->h2_win_peer->size -= f->size; } if (lock) PTOK(pthread_mutex_lock(&hp->mtx)); l = write(hp->sess->fd, hdr, sizeof(hdr)); if (l != sizeof(hdr)) vtc_log(sp->vl, hp->fatal, "Write failed: (%zd vs %zd) %s", l, sizeof(hdr), strerror(errno)); if (f->size) { AN(f->data); l = write(hp->sess->fd, f->data, f->size); if (l != f->size) vtc_log(sp->vl, hp->fatal, "Write failed: (%zd vs %d) %s", l, f->size, strerror(errno)); } if (lock) PTOK(pthread_mutex_unlock(&hp->mtx)); } static void exclusive_stream_dependency(const struct stream *s) { struct stream *target; struct http *hp = s->hp; if (s->id == 0) return; VTAILQ_FOREACH(target, &hp->streams, list) { if (target->id != s->id && target->dependency == s->dependency) target->dependency = s->id; } } static void explain_flags(uint8_t flags, uint8_t type, struct vtclog *vl) { if (flags & ACK && (type == TYPE_PING || type == TYPE_SETTINGS)) { vtc_log(vl, 3, "flag: ACK"); } else if (flags & END_STREAM && (type == TYPE_HEADERS || type == TYPE_PUSH_PROMISE || type == TYPE_DATA)) { vtc_log(vl, 3, "flag: END_STREAM"); } else if (flags & END_HEADERS && (type == TYPE_HEADERS || type == TYPE_PUSH_PROMISE || type == TYPE_CONTINUATION)) { vtc_log(vl, 3, "flag: END_TYPE_HEADERS"); } else if (flags & PRIORITY && (type == TYPE_HEADERS || type == TYPE_PUSH_PROMISE)) { vtc_log(vl, 3, "flag: END_PRIORITY"); } else if (flags & PADDED && (type == TYPE_DATA || type == TYPE_HEADERS || type == TYPE_PUSH_PROMISE)) { vtc_log(vl, 3, "flag: PADDED"); } else if (flags) vtc_log(vl, 3, "UNKNOWN FLAG(S): 0x%02x", flags); } static void parse_data(struct stream *s, struct frame *f) { struct http *hp; uint32_t size = f->size; char *data = f->data; CHECK_OBJ_NOTNULL(f, FRAME_MAGIC); CHECK_OBJ_NOTNULL(s, STREAM_MAGIC); CAST_OBJ_NOTNULL(hp, s->hp, HTTP_MAGIC);; if (f->flags & PADDED) { f->md.padded = *((uint8_t *)data); if (f->md.padded >= size) { vtc_log(s->vl, hp->fatal, "invalid padding: %d reported," "but size is only %d", f->md.padded, size); size = 0; f->md.padded = 0; } data++; size -= f->md.padded + 1; vtc_log(s->vl, 4, "padding: %3d", f->md.padded); } if (s->id) s->win_self -= size; s->hp->h2_win_self->size -= size; if (!size) { AZ(data); vtc_log(s->vl, 4, "s%u - no data", s->id); return; } s->body = realloc(s->body, s->bodylen + size + 1L); AN(s->body); memcpy(s->body + s->bodylen, data, size); s->bodylen += size; 
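	/* DATA payloads are accumulated across frames: the body buffer is
	 * grown with realloc(), the new chunk is appended and the buffer is
	 * kept NUL-terminated just below, so later req.body/resp.body lookups
	 * in expect can treat it as a plain C string. */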
s->body[s->bodylen] = '\0'; } static void decode_hdr(struct http *hp, struct hpk_hdr *h, const struct vsb *vsb) { struct hpk_iter *iter; enum hpk_result r = hpk_err; int n; CHECK_OBJ_NOTNULL(vsb, VSB_MAGIC); CAST_OBJ_NOTNULL(hp, hp, HTTP_MAGIC);; if (VSB_len(vsb) == 0) return; iter = HPK_NewIter(hp->decctx, VSB_data(vsb), VSB_len(vsb)); n = 0; while (n < MAX_HDR && h[n].t) n++; while (n < MAX_HDR) { r = HPK_DecHdr(iter, h + n); if (r == hpk_err ) break; vtc_log(hp->vl, 4, "header[%2d]: %s : %s", n, h[n].key.ptr, h[n].value.ptr); n++; if (r == hpk_done) break; } if (r != hpk_done) vtc_log(hp->vl, hp->fatal ? 4 : 0, "Header decoding failed (%d) %d", r, hp->fatal); else if (n == MAX_HDR) vtc_log(hp->vl, hp->fatal, "Max number of headers reached (%d)", MAX_HDR); HPK_FreeIter(iter); } static void parse_hdr(struct stream *s, struct frame *f, struct vsb *vsb) { int shift = 0; int exclusive = 0; uint32_t size = f->size; char *data = f->data; struct http *hp; uint32_t n; CHECK_OBJ_NOTNULL(f, FRAME_MAGIC); CHECK_OBJ_NOTNULL(s, STREAM_MAGIC); CHECK_OBJ_NOTNULL(vsb, VSB_MAGIC); CAST_OBJ_NOTNULL(hp, s->hp, HTTP_MAGIC);; if (f->flags & PADDED && f->type != TYPE_CONTINUATION) { f->md.padded = *((uint8_t *)data); if (f->md.padded >= size) { vtc_log(s->vl, hp->fatal, "invalid padding: %d reported," "but size is only %d", f->md.padded, size); size = 0; f->md.padded = 0; } shift += 1; size -= f->md.padded; vtc_log(s->vl, 4, "padding: %3d", f->md.padded); } if (f->type == TYPE_HEADERS && f->flags & PRIORITY){ shift += 5; n = vbe32dec(f->data); s->dependency = n & ~(1U << 31); exclusive = n >> 31; s->weight = f->data[4]; if (exclusive) exclusive_stream_dependency(s); vtc_log(s->vl, 4, "stream->dependency: %u", s->dependency); vtc_log(s->vl, 4, "stream->weight: %u", s->weight); } else if (f->type == TYPE_PUSH_PROMISE){ shift += 4; n = vbe32dec(f->data); f->md.promised = n & ~(1U << 31); } AZ(VSB_bcat(vsb, data + shift, size - shift)); } static void parse_prio(struct stream *s, struct frame *f) { struct http *hp; char *buf; uint32_t n; CHECK_OBJ_NOTNULL(f, FRAME_MAGIC); CHECK_OBJ_NOTNULL(s, STREAM_MAGIC); CAST_OBJ_NOTNULL(hp, s->hp, HTTP_MAGIC);; if (f->size != 5) vtc_fatal(s->vl, "Size should be 5, but isn't (%d)", f->size); buf = f->data; AN(buf); n = vbe32dec(f->data); f->md.prio.stream = n & ~(1U << 31); s->dependency = f->md.prio.stream; if (n >> 31){ f->md.prio.exclusive = 1; exclusive_stream_dependency(s); } buf += 4; f->md.prio.weight = *buf; s->weight = f->md.prio.weight; vtc_log(s->vl, 3, "prio->stream: %u", f->md.prio.stream); vtc_log(s->vl, 3, "prio->weight: %u", f->md.prio.weight); } static void parse_rst(const struct stream *s, struct frame *f) { struct http *hp; uint32_t err; const char *buf; CHECK_OBJ_NOTNULL(f, FRAME_MAGIC); CHECK_OBJ_NOTNULL(s, STREAM_MAGIC); CAST_OBJ_NOTNULL(hp, s->hp, HTTP_MAGIC);; if (f->size != 4) vtc_fatal(s->vl, "Size should be 4, but isn't (%d)", f->size); err = vbe32dec(f->data); f->md.rst_err = err; vtc_log(s->vl, 2, "ouch"); if (err <= ERR_MAX) buf = h2_errs[err]; else buf = "unknown"; vtc_log(s->vl, 4, "rst->err: %s (%d)", buf, err); } static void parse_settings(const struct stream *s, struct frame *f) { struct http *hp; int v; unsigned u, t; const char *buf; enum hpk_result r; CHECK_OBJ_NOTNULL(f, FRAME_MAGIC); CHECK_OBJ_NOTNULL(s, STREAM_MAGIC); CAST_OBJ_NOTNULL(hp, s->hp, HTTP_MAGIC);; if (f->size % 6) vtc_fatal(s->vl, "Size should be a multiple of 6, but isn't (%d)", f->size); if (s->id != 0) vtc_fatal(s->vl, "Setting frames should only be on stream 0, but 
received on stream: %d", s->id); for (u = 0; u <= SETTINGS_MAX; u++) f->md.settings[u] = NAN; for (u = 0; u < f->size;) { t = vbe16dec(f->data + u); u += 2; v = vbe32dec(f->data + u); if (t <= SETTINGS_MAX) { buf = h2_settings[t]; f->md.settings[t] = v; } else buf = "unknown"; u += 4; if (t == 1) { r = HPK_ResizeTbl(s->hp->encctx, v); assert(r == hpk_done); } vtc_log(s->vl, 4, "settings->%s (%u): %d", buf, t, v); } } static void parse_ping(const struct stream *s, struct frame *f) { struct http *hp; CHECK_OBJ_NOTNULL(f, FRAME_MAGIC); CHECK_OBJ_NOTNULL(s, STREAM_MAGIC); CAST_OBJ_NOTNULL(hp, s->hp, HTTP_MAGIC);; if (f->size != 8) vtc_fatal(s->vl, "Size should be 8, but isn't (%d)", f->size); f->md.ping.ack = f->flags & ACK; memcpy(f->md.ping.data, f->data, 8); f->md.ping.data[8] = '\0'; vtc_log(s->vl, 4, "ping->data: %s", f->md.ping.data); } static void parse_goaway(const struct stream *s, struct frame *f) { struct http *hp; const char *err_buf; uint32_t err, stid; CHECK_OBJ_NOTNULL(f, FRAME_MAGIC); CHECK_OBJ_NOTNULL(s, STREAM_MAGIC); CAST_OBJ_NOTNULL(hp, s->hp, HTTP_MAGIC);; if (f->size < 8) vtc_fatal(s->vl, "Size should be at least 8, but isn't (%d)", f->size); if (f->data[0] & (1<<7)) vtc_fatal(s->vl, "First bit of data is reserved and should be 0"); stid = vbe32dec(f->data); err = vbe32dec(f->data + 4); f->md.goaway.err = err; f->md.goaway.stream = stid; if (err <= ERR_MAX) err_buf = h2_errs[err]; else err_buf = "unknown"; if (f->size > 8) { f->md.goaway.debug = malloc((f->size - 8) + 1L); AN(f->md.goaway.debug); f->md.goaway.debug[f->size - 8] = '\0'; memcpy(f->md.goaway.debug, f->data + 8, f->size - 8); } vtc_log(s->vl, 3, "goaway->laststream: %d", stid); vtc_log(s->vl, 3, "goaway->err: %s (%d)", err_buf, err); if (f->md.goaway.debug) vtc_log(s->vl, 3, "goaway->debug: %s", f->md.goaway.debug); } static void parse_winup(const struct stream *s, struct frame *f) { struct http *hp; uint32_t size; CHECK_OBJ_NOTNULL(f, FRAME_MAGIC); CHECK_OBJ_NOTNULL(s, STREAM_MAGIC); CAST_OBJ_NOTNULL(hp, s->hp, HTTP_MAGIC);; if (f->size != 4) vtc_fatal(s->vl, "Size should be 4, but isn't (%d)", f->size); if (f->data[0] & (1<<7)) vtc_log(s->vl, s->hp->fatal, "First bit of data is reserved and should be 0"); size = vbe32dec(f->data); f->md.winup_size = size; vtc_log(s->vl, 3, "winup->size: %d", size); } /* read a frame and queue it in the relevant stream, wait if not present yet. */ static void * receive_frame(void *priv) { struct http *hp; char hdr[9]; struct frame *f; struct stream *s; int expect_cont = 0; struct vsb *vsb = NULL; struct hpk_hdr *hdrs = NULL; CAST_OBJ_NOTNULL(hp, priv, HTTP_MAGIC); PTOK(pthread_mutex_lock(&hp->mtx)); while (hp->h2) { /*no wanted frames? */ assert(hp->wf >= 0); if (hp->wf == 0) { PTOK(pthread_cond_wait(&hp->cond, &hp->mtx)); continue; } PTOK(pthread_mutex_unlock(&hp->mtx)); if (get_bytes(hp, hdr, sizeof hdr) <= 0) { PTOK(pthread_mutex_lock(&hp->mtx)); VTAILQ_FOREACH(s, &hp->streams, list) PTOK(pthread_cond_signal(&s->cond)); PTOK(pthread_mutex_unlock(&hp->mtx)); vtc_log(hp->vl, hp->fatal, "could not get frame header"); return (NULL); } ALLOC_OBJ(f, FRAME_MAGIC); AN(f); readFrameHeader(f, hdr); vtc_log(hp->vl, 3, "rx: stream: %d, type: %s (%d), " "flags: 0x%02x, size: %d", f->stid, f->type < TYPE_MAX ? 
h2_types[f->type] : "?", f->type, f->flags, f->size); explain_flags(f->flags, f->type, hp->vl); if (f->size) { f->data = malloc(f->size + 1L); AN(f->data); f->data[f->size] = '\0'; if (get_bytes(hp, f->data, f->size) <= 0) { PTOK(pthread_mutex_lock(&hp->mtx)); VTAILQ_FOREACH(s, &hp->streams, list) PTOK(pthread_cond_signal(&s->cond)); clean_frame(&f); PTOK(pthread_mutex_unlock(&hp->mtx)); vtc_log(hp->vl, hp->fatal, "could not get frame body"); return (NULL); } } /* is the corresponding stream waiting? */ PTOK(pthread_mutex_lock(&hp->mtx)); s = NULL; while (!s) { VTAILQ_FOREACH(s, &hp->streams, list) if (s->id == f->stid) break; if (!s) PTOK(pthread_cond_wait(&hp->cond, &hp->mtx)); if (!hp->h2) { clean_frame(&f); PTOK(pthread_mutex_unlock(&hp->mtx)); return (NULL); } } PTOK(pthread_mutex_unlock(&hp->mtx)); AN(s); if (expect_cont && (f->type != TYPE_CONTINUATION || expect_cont != s->id)) vtc_fatal(s->vl, "Expected CONTINUATION frame for " "stream %u", expect_cont); /* parse the frame according to it type, and fill the metada */ switch (f->type) { case TYPE_DATA: parse_data(s, f); break; case TYPE_PUSH_PROMISE: hdrs = s->req; /*FALLTHROUGH*/ case TYPE_HEADERS: if (!hdrs) { if (hp->sfd) hdrs = s->req; else hdrs = s->resp; } clean_headers(hdrs); hdrs[0].t = hpk_unset; AZ(vsb); vsb = VSB_new_auto(); /*FALLTHROUGH*/ case TYPE_CONTINUATION: AN(hdrs); expect_cont = s->id; parse_hdr(s, f, vsb); if (f->flags & END_HEADERS) { expect_cont = 0; AZ(VSB_finish(vsb)); decode_hdr(hp, hdrs, vsb); VSB_destroy(&vsb); hdrs = NULL; } break; case TYPE_PRIORITY: parse_prio(s, f); break; case TYPE_RST_STREAM: parse_rst(s, f); break; case TYPE_SETTINGS: parse_settings(s, f); break; case TYPE_PING: parse_ping(s, f); break; case TYPE_GOAWAY: parse_goaway(s, f); break; case TYPE_WINDOW_UPDATE: parse_winup(s, f); break; default: WRONG("wrong frame type"); } PTOK(pthread_mutex_lock(&hp->mtx)); VTAILQ_INSERT_HEAD(&s->fq, f, list); if (s->wf) { assert(hp->wf > 0); hp->wf--; s->wf = 0; PTOK(pthread_cond_signal(&s->cond)); } continue; } PTOK(pthread_mutex_unlock(&hp->mtx)); if (vsb != NULL) VSB_destroy(&vsb); return (NULL); } #define STRTOU32(n, ss, p, v, c) \ do { \ n = strtoul(ss, &p, 0); \ if (*p != '\0') \ vtc_fatal(v, "%s takes an integer as argument " \ "(found %s)", c, ss); \ } while (0) #define STRTOU32_CHECK(n, sp, p, v, c, l) \ do { \ sp++; \ AN(*sp); \ STRTOU32(n, *sp, p, v, c); \ if (l && n >= (1U << l)) \ vtc_fatal(v, \ c " must be a %d-bits integer (found %s)", l, *sp); \ } while (0) #define CHECK_LAST_FRAME(TYPE) \ if (!f || f->type != TYPE_ ## TYPE) { \ vtc_fatal(s->vl, "Last frame was not of type " #TYPE); \ } #define RETURN_SETTINGS(idx) \ do { \ if (isnan(f->md.settings[idx])) { \ return (NULL); \ } \ snprintf(buf, 20, "%.0f", f->md.settings[idx]); \ return (buf); \ } while (0) #define RETURN_BUFFED(val) \ do { \ snprintf(buf, 20, "%ld", (long)val); \ return (buf); \ } while (0) static char * find_header(const struct hpk_hdr *h, const char *k) { AN(k); int kl = strlen(k); while (h->t) { if (kl == h->key.len && !strncasecmp(h->key.ptr, k, kl)) return (h->value.ptr); h++; } return (NULL); } /* SECTION: stream.spec.zexpect expect * * expect in stream works as it does in client or server, except that the * elements compared will be different. * * Most of these elements will be frame specific, meaning that the last frame * received on that stream must of the correct type. * * Here the list of keywords you can look at. 
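 *
 * An illustrative sketch (not taken from an actual test case), checking a
 * received SETTINGS frame::
 *
 *	rxsettings
 *	expect settings.ack == false
 *	expect frame.type == 4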
*/ static const char * cmd_var_resolve(const struct stream *s, const char *spec, char *buf) { uint32_t idx; int n; const struct hpk_hdr *h; struct hpk_ctx *ctx; struct frame *f = s->frame; CHECK_OBJ_NOTNULL(s, STREAM_MAGIC); CHECK_OBJ_NOTNULL(s->hp, HTTP_MAGIC); AN(spec); AN(buf); n = 0; /* SECTION: stream.spec.zexpect.ping PING specific * * ping.data * The 8-bytes string of the PING frame payload. * ping.ack (PING) * "true" if the ACK flag was set, "false" otherwise. */ if (!strcmp(spec, "ping.data")) { CHECK_LAST_FRAME(PING); return (f->md.ping.data); } if (!strcmp(spec, "ping.ack")) { CHECK_LAST_FRAME(PING); snprintf(buf, 20, (f->flags & ACK) ? "true" : "false"); return (buf); } /* SECTION: stream.spec.zexpect.winup WINDOW_UPDATE specific * * winup.size * The size of the upgrade given by the WINDOW_UPDATE frame. */ if (!strcmp(spec, "winup.size")) { CHECK_LAST_FRAME(WINDOW_UPDATE); RETURN_BUFFED(f->md.winup_size); } /* SECTION: stream.spec.zexpect.prio PRIORITY specific * * prio.stream * The stream ID announced. * * prio.exclusive * "true" if the priority is exclusive, else "false". * * prio.weight * The dependency weight. */ if (!strcmp(spec, "prio.stream")) { CHECK_LAST_FRAME(PRIORITY); RETURN_BUFFED(f->md.prio.stream); } if (!strcmp(spec, "prio.exclusive")) { CHECK_LAST_FRAME(PRIORITY); snprintf(buf, 20, f->md.prio.exclusive ? "true" : "false"); return (buf); } if (!strcmp(spec, "prio.weight")) { CHECK_LAST_FRAME(PRIORITY); RETURN_BUFFED(f->md.prio.weight); } /* SECTION: stream.spec.zexpect.rst RESET_STREAM specific * * rst.err * The error code (as integer) of the RESET_STREAM frame. */ if (!strcmp(spec, "rst.err")) { CHECK_LAST_FRAME(RST_STREAM); RETURN_BUFFED(f->md.rst_err); } /* SECTION: stream.spec.zexpect.settings SETTINGS specific * * settings.ack * "true" if the ACK flag was set, else ""false. * * settings.push * "true" if the push settings was set to yes, "false" if set to * no, and if not present. * * settings.hdrtbl * Value of HEADER_TABLE_SIZE if set, otherwise. * * settings.maxstreams * Value of MAX_CONCURRENT_STREAMS if set, otherwise. * * settings.winsize * Value of INITIAL_WINDOW_SIZE if set, otherwise. * * setting.framesize * Value of MAX_FRAME_SIZE if set, otherwise. * * settings.hdrsize * Value of MAX_HEADER_LIST_SIZE if set, otherwise. */ if (!strncmp(spec, "settings.", 9)) { CHECK_LAST_FRAME(SETTINGS); spec += 9; if (!strcmp(spec, "ack")) { snprintf(buf, 20, (f->flags & ACK) ? "true" : "false"); return (buf); } if (!strcmp(spec, "push")) { if (isnan(f->md.settings[SETTINGS_ENABLE_PUSH])) return (NULL); else if (f->md.settings[SETTINGS_ENABLE_PUSH] == 1) snprintf(buf, 20, "true"); else snprintf(buf, 20, "false"); return (buf); } if (!strcmp(spec, "hdrtbl")) { RETURN_SETTINGS(1); } if (!strcmp(spec, "maxstreams")) { RETURN_SETTINGS(3); } if (!strcmp(spec, "winsize")) { RETURN_SETTINGS(4); } if (!strcmp(spec, "framesize")) { RETURN_SETTINGS(5); } if (!strcmp(spec, "hdrsize")) { RETURN_SETTINGS(6); } } /* SECTION: stream.spec.zexpect.push PUSH_PROMISE specific * * push.id * The id of the promised stream. */ if (!strcmp(spec, "push.id")) { CHECK_LAST_FRAME(PUSH_PROMISE); RETURN_BUFFED(f->md.promised); } /* SECTION: stream.spec.zexpect.goaway GOAWAY specific * * goaway.err * The error code (as integer) of the GOAWAY frame. * * goaway.laststream * Last-Stream-ID * * goaway.debug * Debug data, if any. 
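 *
 * For example, an illustrative check of a received GOAWAY frame could be::
 *
 *	rxgoaway
 *	expect goaway.err == NO_ERROR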
*/ if (!strncmp(spec, "goaway.", 7)) { spec += 7; CHECK_LAST_FRAME(GOAWAY); if (!strcmp(spec, "err")) RETURN_BUFFED(f->md.goaway.err); else if (!strcmp(spec, "laststream")) RETURN_BUFFED(f->md.goaway.stream); else if (!strcmp(spec, "debug")) return (f->md.goaway.debug); } /* SECTION: stream.spec.zexpect.zframe Generic frame * * frame.data * Payload of the last frame * * frame.type * Type of the frame, as integer. * * frame.size * Size of the frame. * * frame.stream * Stream of the frame (correspond to the one you are executing * this from, obviously). * * frame.padding (for DATA, HEADERS, PUSH_PROMISE frames) * Number of padded bytes. */ if (!strncmp(spec, "frame.", 6)) { spec += 6; if (!f) vtc_fatal(s->vl, "No frame received yet."); if (!strcmp(spec, "data")) { return (f->data); } else if (!strcmp(spec, "type")) { RETURN_BUFFED(f->type); } else if (!strcmp(spec, "size")) { RETURN_BUFFED(f->size); } else if (!strcmp(spec, "stream")) { RETURN_BUFFED(f->stid); } else if (!strcmp(spec, "padding")) { if (f->type != TYPE_DATA && f->type != TYPE_HEADERS && f->type != TYPE_PUSH_PROMISE) vtc_fatal(s->vl, "Last frame was not of type " "DATA, HEADERS or PUSH"); RETURN_BUFFED(f->md.padded); } } /* SECTION: stream.spec.zexpect.zstream Stream * * stream.window * The current local window size of the stream, or, if on stream 0, * of the connection. * * stream.peer_window * The current peer window size of the stream, or, if on stream 0, * of the connection. * * stream.weight * Weight of the stream * * stream.dependency * Id of the stream this one depends on. */ if (!strcmp(spec, "stream.window")) { snprintf(buf, 20, "%jd", (intmax_t)(s->id ? s->win_self : s->hp->h2_win_self->size)); return (buf); } if (!strcmp(spec, "stream.peer_window")) { snprintf(buf, 20, "%jd", (intmax_t)(s->id ? s->win_peer : s->hp->h2_win_peer->size)); return (buf); } if (!strcmp(spec, "stream.weight")) { if (s->id) { snprintf(buf, 20, "%d", s->weight); return (buf); } else return (NULL); } if (!strcmp(spec, "stream.dependency")) { if (s->id) { snprintf(buf, 20, "%d", s->dependency); return (buf); } else return (NULL); } /* SECTION: stream.spec.zexpect.ztable Index tables * * tbl.dec.size / tbl.enc.size * Size (bytes) of the decoding/encoding table. * * tbl.dec.size / tbl.enc.maxsize * Maximum size (bytes) of the decoding/encoding table. * * tbl.dec.length / tbl.enc.length * Number of headers in decoding/encoding table. * * tbl.dec[INT].key / tbl.enc[INT].key * Name of the header at index INT of the decoding/encoding * table. * * tbl.dec[INT].value / tbl.enc[INT].value * Value of the header at index INT of the decoding/encoding * table. */ if (!strncmp(spec, "tbl.dec", 7) || !strncmp(spec, "tbl.enc", 7)) { if (spec[4] == 'd') ctx = s->hp->decctx; else ctx = s->hp->encctx; spec += 7; if (1 == sscanf(spec, "[%u].key%n", &idx, &n) && spec[n] == '\0') { h = HPK_GetHdr(ctx, idx + 61); return (h ? h->key.ptr : NULL); } else if (1 == sscanf(spec, "[%u].value%n", &idx, &n) && spec[n] == '\0') { h = HPK_GetHdr(ctx, idx + 61); return (h ? h->value.ptr : NULL); } else if (!strcmp(spec, ".size")) RETURN_BUFFED(HPK_GetTblSize(ctx)); else if (!strcmp(spec, ".maxsize")) RETURN_BUFFED(HPK_GetTblMaxSize(ctx)); else if (!strcmp(spec, ".length")) RETURN_BUFFED(HPK_GetTblLength(ctx)); } /* SECTION: stream.spec.zexpect.zre Request and response * * Note: it's possible to inspect a request or response while it is * still being construct (in-between two frames for example). * * req.bodylen / resp.bodylen * Length in bytes of the request/response so far. 
* * req.body / resp.body * Body of the request/response so far. * * req.http.STRING / resp.http.STRING * Value of the header STRING in the request/response. * * req.status / resp.status * :status pseudo-header's value. * * req.url / resp.url * :path pseudo-header's value. * * req.method / resp.method * :method pseudo-header's value. * * req.authority / resp.authority * :method pseudo-header's value. * * req.scheme / resp.scheme * :method pseudo-header's value. */ if (!strncmp(spec, "req.", 4) || !strncmp(spec, "resp.", 5)) { if (spec[2] == 'q') { h = s->req; spec += 4; } else { h = s->resp; spec += 5; } if (!strcmp(spec, "body")) return (s->body); else if (!strcmp(spec, "bodylen")) RETURN_BUFFED(s->bodylen); else if (!strcmp(spec, "status")) return (find_header(h, ":status")); else if (!strcmp(spec, "url")) return (find_header(h, ":path")); else if (!strcmp(spec, "method")) return (find_header(h, ":method")); else if (!strcmp(spec, "authority")) return (find_header(h, ":authority")); else if (!strcmp(spec, "scheme")) return (find_header(h, ":scheme")); else if (!strncmp(spec, "http.", 5)) return (find_header(h, spec + 5)); else return (NULL); } #define H2_ERROR(U,v,sc,g,r,t) \ if (!strcmp(spec, #U)) { return (#v); } #include "tbl/h2_error.h" return (spec); } /* SECTION: stream.spec.frame_sendhex sendhex * * Push bytes directly on the wire. sendhex takes exactly one argument: a string * describing the bytes, in hex notation, with possible whitespaces between * them. Here's an example:: * * sendhex "00 00 08 00 0900 8d" */ static void cmd_sendhex(CMD_ARGS) { struct http *hp; struct stream *s; struct vsb *vsb; (void)vl; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); CAST_OBJ_NOTNULL(hp, s->hp, HTTP_MAGIC); AN(av[1]); AZ(av[2]); vsb = vtc_hex_to_bin(hp->vl, av[1]); assert(VSB_len(vsb) >= 0); vtc_hexdump(hp->vl, 4, "sendhex", VSB_data(vsb), VSB_len(vsb)); PTOK(pthread_mutex_lock(&hp->mtx)); http_write(hp, 4, VSB_data(vsb), VSB_len(vsb), "sendhex"); PTOK(pthread_mutex_unlock(&hp->mtx)); VSB_destroy(&vsb); } #define ENC(hdr, k, v) \ { \ AN(k); \ hdr.key.ptr = TRUST_ME(k); \ hdr.key.len = strlen(k); \ AN(v); \ hdr.value.ptr = TRUST_ME(v); \ hdr.value.len = strlen(v); \ assert(HPK_EncHdr(iter, &hdr) != hpk_err); \ } #define STR_ENC(av, field, str) \ { \ av++; \ if (AV_IS("plain")) { hdr.field.huff = 0; } \ else if (AV_IS("huf")) { hdr.field.huff = 1; } \ else \ vtc_fatal(vl, str " arg can be huf or plain (got: %s)", *av); \ av++; \ AN(*av); \ hdr.field.ptr = *av; \ hdr.field.len = strlen(*av); \ } /* SECTION: stream.spec.data_0 txreq, txresp, txcont, txpush * * These four commands are about sending headers. txreq and txresp * will send HEADER frames; txcont will send CONTINUATION frames; txpush * PUSH frames. * The only difference between txreq and txresp are the default headers * set by each of them. * * \-noadd * Do not add default headers. Useful to avoid duplicates when sending * default headers using ``-hdr``, ``-idxHdr`` and ``-litIdxHdr``. * * \-status INT (txresp) * Set the :status pseudo-header. * * \-url STRING (txreq, txpush) * Set the :path pseudo-header. * * \-method STRING (txreq, txpush) * Set the :method pseudo-header. * * \-req STRING (txreq, txpush) * Alias for -method. * * \-scheme STRING (txreq, txpush) * Set the :scheme pseudo-header. * * \-hdr STRING1 STRING2 * Insert a header, STRING1 being the name, and STRING2 the value. * * \-idxHdr INT * Insert an indexed header, using INT as index. * * \-litIdxHdr inc|not|never INT huf|plain STRING * Insert an literal, indexed header. 
The first argument specify if the * header should be added to the table, shouldn't, or mustn't be * compressed if/when retransmitted. * * INT is the index of the header name to use. * * The third argument informs about the Huffman encoding: yes (huf) or * no (plain). * * The last term is the literal value of the header. * * \-litHdr inc|not|never huf|plain STRING1 huf|plain STRING2 * Insert a literal header, with the same first argument as * ``-litIdxHdr``. * * The second and third terms tell what the name of the header is and if * it should be Huffman-encoded, while the last two do the same * regarding the value. * * \-body STRING (txreq, txresp) * Specify a body, effectively putting STRING into a DATA frame after * the HEADER frame is sent. * * \-bodyfrom FILE (txreq, txresp) * Same as ``-body`` but content is read from FILE. * * \-bodylen INT (txreq, txresp) * Do the same thing as ``-body`` but generate a string of INT length * for you. * * \-gzipbody STRING (txreq, txresp) * Gzip STRING and send it as body. * * \-gziplen NUMBER (txreq, txresp) * Combine -bodylen and -gzipbody: generate a string of length NUMBER, * gzip it and send as body. * * \-nostrend (txreq, txresp) * Don't set the END_STREAM flag automatically, making the peer expect * a body after the headers. * * \-nohdrend * Don't set the END_HEADERS flag automatically, making the peer expect * more HEADER frames. * * \-dep INT (txreq, txresp) * Tell the peer that this content depends on the stream with the INT * id. * * \-ex (txreq, txresp) * Make the dependency exclusive (``-dep`` is still needed). * * \-weight (txreq, txresp) * Set the weight for the dependency. * * \-promised INT (txpush) * The id of the promised stream. * * \-pad STRING / -padlen INT (txreq, txresp, txpush) * Add string as padding to the frame, either the one you provided with * \-pad, or one that is generated for you, of length INT is -padlen * case. */ #define cmd_txreq cmd_tx11obj #define cmd_txresp cmd_tx11obj #define cmd_txpush cmd_tx11obj #define cmd_txcont cmd_tx11obj static void cmd_tx11obj(CMD_ARGS) { struct stream *s; int i; int status_done = 1; int method_done = 1; int path_done = 1; int scheme_done = 1; long bodylen = 0; ssize_t len; uint32_t stid = 0, pstid; uint32_t weight = 16; uint32_t exclusive = 0; char *buf; struct hpk_iter *iter; struct frame f; char *body = NULL, *pad = NULL; /*XXX: do we need a better api? 
yes we do */ struct hpk_hdr hdr; char *cmd_str = *av; char *p; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); INIT_FRAME(f, CONTINUATION, 0, s->id, END_HEADERS); buf = malloc(BUF_SIZE); AN(buf); if (!strcmp(cmd_str, "txreq")) { ONLY_H2_CLIENT(s->hp, av); f.type = TYPE_HEADERS; f.flags |= END_STREAM; method_done = 0; path_done = 0; scheme_done = 0; } else if (!strcmp(cmd_str, "txresp")) { ONLY_H2_SERVER(s->hp, av); f.type = TYPE_HEADERS; f.flags |= END_STREAM; status_done = 0; } else if (!strcmp(cmd_str, "txpush")) { ONLY_H2_SERVER(s->hp, av); f.type = TYPE_PUSH_PROMISE; method_done = 0; path_done = 0; scheme_done = 0; } if (f.type == TYPE_PUSH_PROMISE) { *buf = 0; iter = HPK_NewIter(s->hp->encctx, buf + 4, BUF_SIZE - 4); } else iter = HPK_NewIter(s->hp->encctx, buf, BUF_SIZE); #define AV_IS(str) !strcmp(*av, str) #define CMD_IS(str) !strcmp(cmd_str, str) while (*++av) { memset(&hdr, 0, sizeof(hdr)); hdr.t = hpk_not; if (AV_IS("-noadd")) { path_done = 1; status_done = 1; method_done = 1; scheme_done = 1; } else if (AV_IS("-status") && CMD_IS("txresp")) { ENC(hdr, ":status", av[1]); av++; status_done = 1; } else if (AV_IS("-url") && (CMD_IS("txreq") || CMD_IS("txpush"))) { ENC(hdr, ":path", av[1]); av++; path_done = 1; } else if ((AV_IS("-method") || AV_IS("-req")) && (CMD_IS("txreq") || CMD_IS("txpush"))) { ENC(hdr, ":method", av[1]); av++; method_done = 1; } else if (AV_IS("-scheme") && (CMD_IS("txreq") || CMD_IS("txpush"))) { ENC(hdr, ":scheme", av[1]); av++; scheme_done = 1; } else if (AV_IS("-hdr")) { if (av[2] == NULL) vtc_fatal(vl, "-hdr takes two arguments in http2"); ENC(hdr, av[1], av[2]); av += 2; } else if (AV_IS("-idxHdr")) { hdr.t = hpk_idx; STRTOU32_CHECK(hdr.i, av, p, vl, "-idxHdr", 0); assert(HPK_EncHdr(iter, &hdr) != hpk_err); } else if (AV_IS("-litIdxHdr")) { av++; if (AV_IS("inc")) { hdr.t = hpk_inc; } else if (AV_IS("not")) { hdr.t = hpk_not; } else if (AV_IS("never")) { hdr.t = hpk_never; } else vtc_fatal(vl, "first -litidxHdr arg can be " "inc, not, never (got: %s)", *av); STRTOU32_CHECK(hdr.i, av, p, vl, "second -litidxHdr arg", 0); hdr.key.ptr = NULL; hdr.key.len = 0; STR_ENC(av, value, "third -litHdr"); assert(HPK_EncHdr(iter, &hdr) != hpk_err); } else if (AV_IS("-litHdr")) { av++; if (AV_IS("inc")) { hdr.t = hpk_inc; } else if (AV_IS("not")) { hdr.t = hpk_not; } else if (AV_IS("never")) { hdr.t = hpk_never; } else vtc_fatal(vl, "first -litHdr arg can be inc, " "not, never (got: %s)", *av); STR_ENC(av, key, "second -litHdr"); STR_ENC(av, value, "fourth -litHdr"); assert(HPK_EncHdr(iter, &hdr) != hpk_err); } else if (AV_IS("-nostrend")) { f.flags &= ~END_STREAM; } else if (AV_IS("-nohdrend")) { f.flags &= ~END_HEADERS; } else if (AV_IS("-promised") && CMD_IS("txpush")) { STRTOU32_CHECK(pstid, av, p, vl, "-promised", 31); vbe32enc(buf, pstid); } else if (AV_IS("-pad") && !CMD_IS("txcont")) { AZ(pad); av++; AN(*av); pad = strdup(*av); } else if (AV_IS("-padlen") && !CMD_IS("txcont")) { AZ(pad); av++; pad = synth_body(*av, 0); } else if (CMD_IS("txreq") || CMD_IS("txresp")) { if (AV_IS("-body")) { AZ(body); REPLACE(body, av[1]); AN(body); bodylen = strlen(body); f.flags &= ~END_STREAM; av++; } else if (AV_IS("-bodyfrom")) { AZ(body); body = VFIL_readfile(NULL, av[1], &len); AN(body); assert(len < INT_MAX); bodylen = len; f.flags &= ~END_STREAM; av++; } else if (AV_IS("-bodylen")) { AZ(body); body = synth_body(av[1], 0); bodylen = strlen(body); f.flags &= ~END_STREAM; av++; } else if (!strncmp(*av, "-gzip", 5)) { i = vtc_gzip_cmd(s->hp, av, &body, &bodylen); if (i == 0) break; 
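			/* vtc_gzip_cmd() handled a -gzip* argument; skip
			 * what it consumed. When it generated a gzip'ed
			 * body, the content-encoding header is added and
			 * END_STREAM cleared so the body can follow in a
			 * DATA frame. */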
av += i; if (i > 1) { ENC(hdr, ":content-encoding", "gzip"); f.flags &= ~END_STREAM; } } else if (AV_IS("-dep")) { STRTOU32_CHECK(stid, av, p, vl, "-dep", 0); f.flags |= PRIORITY; } else if (AV_IS("-ex")) { exclusive = 1U << 31; f.flags |= PRIORITY; } else if (AV_IS("-weight")) { STRTOU32_CHECK(weight, av, p, vl, "-weight", 8); f.flags |= PRIORITY; } else break; } else break; } #undef CMD_IS #undef AV_IS if (*av != NULL) vtc_fatal(vl, "Unknown %s spec: %s\n", cmd_str, *av); memset(&hdr, 0, sizeof(hdr)); hdr.t = hpk_not; if (!status_done) { ENC(hdr, ":status", "200"); } if (!path_done) { ENC(hdr, ":path", "/"); } if (!method_done) { ENC(hdr, ":method", "GET"); } if (!scheme_done) { ENC(hdr, ":scheme", "http"); } f.size = gethpk_iterLen(iter); if (f.flags & PRIORITY) { s->weight = weight & 0xff; s->dependency = stid; assert(f.size + 5 < BUF_SIZE); memmove(buf + 5, buf, f.size); vbe32enc(buf, (stid | exclusive)); buf[4] = s->weight; f.size += 5; vtc_log(vl, 4, "stream->dependency: %u", s->dependency); vtc_log(vl, 4, "stream->weight: %u", s->weight); if (exclusive) exclusive_stream_dependency(s); } if (pad) { if (strlen(pad) > 255) vtc_fatal(vl, "Padding is limited to 255 bytes"); f.flags |= PADDED; assert(f.size + strlen(pad) < BUF_SIZE); memmove(buf + 1, buf, f.size); buf[0] = strlen(pad); f.size += 1; memcpy(buf + f.size, pad, strlen(pad)); f.size += strlen(pad); free(pad); } if (f.type == TYPE_PUSH_PROMISE) f.size += 4; f.data = buf; HPK_FreeIter(iter); write_frame(s, &f, 1); free(buf); if (!body) return; INIT_FRAME(f, DATA, bodylen, s->id, END_STREAM); f.data = body; write_frame(s, &f, 1); free(body); } /* SECTION: stream.spec.data_1 txdata * * By default, data frames are empty. The receiving end will know the whole body * has been delivered thanks to the END_STREAM flag set in the last DATA frame, * and txdata automatically set it. * * \-data STRING * Data to be embedded into the frame. * * \-datalen INT * Generate and INT-bytes long string to be sent in the frame. * * \-pad STRING / -padlen INT * Add string as padding to the frame, either the one you provided with * \-pad, or one that is generated for you, of length INT is -padlen * case. * * \-nostrend * Don't set the END_STREAM flag, allowing to send more data on this * stream. */ static void cmd_txdata(CMD_ARGS) { struct stream *s; char *pad = NULL; struct frame f; char *body = NULL; char *data = NULL; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); INIT_FRAME(f, DATA, 0, s->id, END_STREAM); while (*++av) { if (!strcmp(*av, "-data")) { AZ(body); av++; body = strdup(*av); } else if (!strcmp(*av, "-datalen")) { AZ(body); av++; body = synth_body(*av, 0); } else if (!strcmp(*av, "-pad")) { AZ(pad); av++; AN(*av); pad = strdup(*av); } else if (!strcmp(*av, "-padlen")) { AZ(pad); av++; pad = synth_body(*av, 0); } else if (!strcmp(*av, "-nostrend")) f.flags &= ~END_STREAM; else break; } if (*av != NULL) vtc_fatal(vl, "Unknown txdata spec: %s\n", *av); if (!body) body = strdup(""); if (pad) { f.flags |= PADDED; if (strlen(pad) > 255) vtc_fatal(vl, "Padding is limited to 255 bytes"); data = malloc( 1 + strlen(body) + strlen(pad)); AN(data); *((uint8_t *)data) = strlen(pad); f.size = 1; memcpy(data + f.size, body, strlen(body)); f.size += strlen(body); memcpy(data + f.size, pad, strlen(pad)); f.size += strlen(pad); f.data = data; } else { f.size = strlen(body); f.data = body; } write_frame(s, &f, 1); free(body); free(pad); free(data); } /* SECTION: stream.spec.reset_txrst txrst * * Send a RST_STREAM frame. 
By default, txrst will send a 0 error code * (NO_ERROR). * * \-err STRING|INT * Sets the error code to be sent. The argument can be an integer or a * string describing the error, such as NO_ERROR, or CANCEL (see * rfc7540#11.4 for more strings). */ static void cmd_txrst(CMD_ARGS) { struct stream *s; char *p; uint32_t err = 0; struct frame f; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); INIT_FRAME(f, RST_STREAM, 4, s->id, 0); while (*++av) { if (!strcmp(*av, "-err")) { ++av; for (err = 0; h2_errs[err]; err++) { if (!strcmp(h2_errs[err], *av)) break; } if (h2_errs[err]) continue; STRTOU32(err, *av, p, vl, "-err"); } else break; } if (*av != NULL) vtc_fatal(vl, "Unknown txrst spec: %s\n", *av); err = htonl(err); f.data = (void *)&err; write_frame(s, &f, 1); } /* SECTION: stream.spec.prio_txprio txprio * * Send a PRIORITY frame * * \-stream INT * indicate the id of the stream the sender stream depends on. * * \-ex * the dependency should be made exclusive (only this streams depends on * the parent stream). * * \-weight INT * an 8-bits integer is used to balance priority between streams * depending on the same streams. */ static void cmd_txprio(CMD_ARGS) { struct stream *s; char *p; uint32_t stid = 0; struct frame f; uint32_t weight = 0; uint32_t exclusive = 0; uint8_t buf[5]; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); INIT_FRAME(f, PRIORITY, 5, s->id, 0); f.data = (void *)buf; while (*++av) { if (!strcmp(*av, "-stream")) { STRTOU32_CHECK(stid, av, p, vl, "-stream", 0); } else if (!strcmp(*av, "-ex")) { exclusive = 1U << 31; } else if (!strcmp(*av, "-weight")) { STRTOU32_CHECK(weight, av, p, vl, "-weight", 8); } else break; } if (*av != NULL) vtc_fatal(vl, "Unknown txprio spec: %s\n", *av); s->weight = weight & 0xff; s->dependency = stid; if (exclusive) exclusive_stream_dependency(s); vbe32enc(buf, (stid | exclusive)); buf[4] = s->weight; write_frame(s, &f, 1); } #define PUT_KV(av, vl, name, val, code) \ do {\ STRTOU32_CHECK(val, av, p, vl, #name, 0); \ vbe16enc(cursor, code); \ cursor += sizeof(uint16_t); \ vbe32enc(cursor, val); \ cursor += sizeof(uint32_t); \ f.size += 6; \ } while(0) /* SECTION: stream.spec.settings_txsettings txsettings * * SETTINGS frames must be acknowledge, arguments are as follow (most of them * are from rfc7540#6.5.2): * * \-hdrtbl INT * headers table size * * \-push BOOL * whether push frames are accepted or not * * \-maxstreams INT * maximum concurrent streams allowed * * \-winsize INT * sender's initial window size * * \-framesize INT * largest frame size authorized * * \-hdrsize INT * maximum size of the header list authorized * * \-ack * set the ack bit */ static void cmd_txsettings(CMD_ARGS) { struct stream *s, *s2; struct http *hp; char *p; uint32_t val = 0; struct frame f; //TODO dynamic alloc char buf[512]; char *cursor = buf; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); CAST_OBJ_NOTNULL(hp, s->hp, HTTP_MAGIC); memset(buf, 0, 512); INIT_FRAME(f, SETTINGS, 0, s->id, 0); f.data = buf; PTOK(pthread_mutex_lock(&hp->mtx)); while (*++av) { if (!strcmp(*av, "-push")) { ++av; vbe16enc(cursor, 0x2); cursor += sizeof(uint16_t); if (!strcmp(*av, "false")) vbe32enc(cursor, 0); else if (!strcmp(*av, "true")) vbe32enc(cursor, 1); else vtc_fatal(vl, "Push parameter is either " "\"true\" or \"false\", not %s", *av); cursor += sizeof(uint32_t); f.size += 6; } else if (!strcmp(*av, "-hdrtbl")) { PUT_KV(av, vl, hdrtbl, val, 0x1); assert(HPK_ResizeTbl(s->hp->decctx, val) != hpk_err); } else if (!strcmp(*av, "-maxstreams")) PUT_KV(av, vl, maxstreams, val, 0x3); else if (!strcmp(*av, 
"-winsize")) { PUT_KV(av, vl, winsize, val, 0x4); VTAILQ_FOREACH(s2, &hp->streams, list) s2->win_self += (val - hp->h2_win_self->init); hp->h2_win_self->init = val; } else if (!strcmp(*av, "-framesize")) PUT_KV(av, vl, framesize, val, 0x5); else if (!strcmp(*av, "-hdrsize")) PUT_KV(av, vl, hdrsize, val, 0x6); else if (!strcmp(*av, "-ack")) f.flags |= 1; else break; } if (*av != NULL) vtc_fatal(vl, "Unknown txsettings spec: %s\n", *av); AN(s->hp); write_frame(s, &f, 0); PTOK(pthread_mutex_unlock(&hp->mtx)); } /* SECTION: stream.spec.ping_txping txping * * Send PING frame. * * \-data STRING * specify the payload of the frame, with STRING being an 8-char string. * * \-ack * set the ACK flag. */ static void cmd_txping(CMD_ARGS) { struct stream *s; struct frame f; char buf[8]; memset(buf, 0, 8); CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); INIT_FRAME(f, PING, 8, s->id, 0); while (*++av) { if (!strcmp(*av, "-data")) { av++; if (f.data) vtc_fatal(vl, "this frame already has data"); if (strlen(*av) != 8) vtc_fatal(vl, "data must be a 8-char string, found (%s)", *av); f.data = *av; } else if (!strcmp(*av, "-ack")) f.flags |= 1; else break; } if (*av != NULL) vtc_fatal(vl, "Unknown txping spec: %s\n", *av); if (!f.data) f.data = buf; write_frame(s, &f, 1); } /* * SECTION: stream.spec.goaway_txgoaway txgoaway * * Possible options include: * * \-err STRING|INT * set the error code to explain the termination. The second argument * can be a integer or the string version of the error code as found * in rfc7540#7. * * \-laststream INT * the id of the "highest-numbered stream identifier for which the * sender of the GOAWAY frame might have taken some action on or might * yet take action on". * * \-debug * specify the debug data, if any to append to the frame. */ static void cmd_txgoaway(CMD_ARGS) { struct stream *s; char *p; uint32_t err = 0; uint32_t ls = 0; struct frame f; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); INIT_FRAME(f, GOAWAY, 8, s->id, 0); while (*++av) { if (!strcmp(*av, "-err")) { ++av; for (err = 0; h2_errs[err]; err++) if (!strcmp(h2_errs[err], *av)) break; if (h2_errs[err]) continue; STRTOU32(err, *av, p, vl, "-err"); } else if (!strcmp(*av, "-laststream")) { STRTOU32_CHECK(ls, av, p, vl, "-laststream", 31); } else if (!strcmp(*av, "-debug")) { ++av; if (f.data) vtc_fatal(vl, "this frame already has debug data"); f.size = 8 + strlen(*av); f.data = malloc(f.size); AN(f.data); memcpy(f.data + 8, *av, f.size - 8); } else break; } if (*av != NULL) vtc_fatal(vl, "Unknown txgoaway spec: %s\n", *av); if (!f.data) { f.data = malloc(8); AN(f.data); } vbe32enc(f.data, ls); vbe32enc(f.data + 4, err); write_frame(s, &f, 1); free(f.data); } /* SECTION: stream.spec.winup_txwinup txwinup * * Transmit a WINDOW_UPDATE frame, increasing the amount of credit of the * connection (from stream 0) or of the stream (any other stream). * * \-size INT * give INT credits to the peer. 
*/ static void cmd_txwinup(CMD_ARGS) { struct http *hp; struct stream *s; char *p; struct frame f; char buf[8]; uint32_t size = 0; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); CAST_OBJ_NOTNULL(hp, s->hp, HTTP_MAGIC); memset(buf, 0, 8); AN(av[1]); AN(av[2]); INIT_FRAME(f, WINDOW_UPDATE, 4, s->id, 0); f.data = buf; while (*++av) if (!strcmp(*av, "-size")) { STRTOU32_CHECK(size, av, p, vl, "-size", 0); } else break; if (*av != NULL) vtc_fatal(vl, "Unknown txwinup spec: %s\n", *av); PTOK(pthread_mutex_lock(&hp->mtx)); if (s->id == 0) hp->h2_win_self->size += size; s->win_self += size; PTOK(pthread_mutex_unlock(&hp->mtx)); size = htonl(size); f.data = (void *)&size; write_frame(s, &f, 1); } static struct frame * rxstuff(struct stream *s) { struct frame *f; CHECK_OBJ_NOTNULL(s, STREAM_MAGIC); PTOK(pthread_mutex_lock(&s->hp->mtx)); if (VTAILQ_EMPTY(&s->fq)) { assert(s->hp->wf >= 0); s->hp->wf++; s->wf = 1; PTOK(pthread_cond_signal(&s->hp->cond)); PTOK(pthread_cond_wait(&s->cond, &s->hp->mtx)); } if (VTAILQ_EMPTY(&s->fq)) { PTOK(pthread_mutex_unlock(&s->hp->mtx)); return (NULL); } clean_frame(&s->frame); f = VTAILQ_LAST(&s->fq, fq_head); CHECK_OBJ_NOTNULL(f, FRAME_MAGIC); VTAILQ_REMOVE(&s->fq, f, list); PTOK(pthread_mutex_unlock(&s->hp->mtx)); return (f); } #define CHKFRAME(rt, wt, rcv, func) \ do { \ if (rt != wt) \ vtc_fatal(vl, "Frame #%d for %s was of type %s (%d) " \ "instead of %s (%d)", \ rcv, func, \ rt < TYPE_MAX ? h2_types[rt] : "?", rt, \ wt < TYPE_MAX ? h2_types[wt] : "?", wt); \ } while (0); /* SECTION: stream.spec.data_11 rxhdrs * * ``rxhdrs`` will expect one HEADER frame, then, depending on the arguments, * zero or more CONTINUATION frame. * * \-all * Keep waiting for CONTINUATION frames until END_HEADERS flag is seen. * * \-some INT * Retrieve INT - 1 CONTINUATION frames after the HEADER frame. 
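 *
 * For example, to receive a HEADERS frame followed by exactly two
 * CONTINUATION frames::
 *
 *	rxhdrs -some 3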
* */ static void cmd_rxhdrs(CMD_ARGS) { struct stream *s; struct frame *f = NULL; char *p; int loop = 0; unsigned long int times = 1; unsigned rcv = 0; enum h2_type_e expect = TYPE_HEADERS; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); while (*++av) { if (!strcmp(*av, "-some")) { STRTOU32_CHECK(times, av, p, vl, "-some", 0); if (!times) vtc_fatal(vl, "-some argument must be more" "than 0 (found \"%s\")\n", *av); } else if (!strcmp(*av, "-all")) loop = 1; else break; } if (*av != NULL) vtc_fatal(vl, "Unknown rxhdrs spec: %s\n", *av); do { replace_frame(&f, rxstuff(s)); if (f == NULL) break; rcv++; CHKFRAME(f->type, expect, rcv, "rxhdrs"); expect = TYPE_CONTINUATION; } while (rcv < times || (loop && !(f->flags & END_HEADERS))); replace_frame(&s->frame, f); } static void cmd_rxcont(CMD_ARGS) { struct stream *s; struct frame *f = NULL; char *p; int loop = 0; unsigned long int times = 1; unsigned rcv = 0; (void)av; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); while (*++av) if (!strcmp(*av, "-some")) { STRTOU32(times, *av, p, vl, "-some"); if (!times) vtc_fatal(vl, "-some argument must be more" "than 0 (found \"%s\")\n", *av); } else if (!strcmp(*av, "-all")) loop = 1; else break; if (*av != NULL) vtc_fatal(vl, "Unknown rxcont spec: %s\n", *av); do { replace_frame(&f, rxstuff(s)); if (f == NULL) break; rcv++; CHKFRAME(f->type, TYPE_CONTINUATION, rcv, "rxcont"); } while (rcv < times || (loop && !(f->flags & END_HEADERS))); replace_frame(&s->frame, f); } /* SECTION: stream.spec.data_13 rxdata * * Receiving data is done using the ``rxdata`` keywords and will retrieve one * DATA frame, if you wish to receive more, you can use these two convenience * arguments: * * \-all * keep waiting for DATA frame until one sets the END_STREAM flag * * \-some INT * retrieve INT DATA frames. * */ static void cmd_rxdata(CMD_ARGS) { struct stream *s; struct frame *f = NULL; char *p; int loop = 0; unsigned long int times = 1; unsigned rcv = 0; (void)av; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); while (*++av) if (!strcmp(*av, "-some")) { av++; STRTOU32(times, *av, p, vl, "-some"); if (!times) vtc_fatal(vl, "-some argument must be more" "than 0 (found \"%s\")\n", *av); } else if (!strcmp(*av, "-all")) loop = 1; else break; if (*av != NULL) vtc_fatal(vl, "Unknown rxdata spec: %s\n", *av); do { replace_frame(&f, rxstuff(s)); if (f == NULL) break; rcv++; CHKFRAME(f->type, TYPE_DATA, rcv, "rxhdata"); } while (rcv < times || (loop && !(f->flags & END_STREAM))); replace_frame(&s->frame, f); } /* SECTION: stream.spec.data_10 rxreq, rxresp * * These are two convenience functions to receive headers and body of an * incoming request or response. The only difference is that rxreq can only be * by a server, and rxresp by a client. 
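 *
 * A minimal, illustrative server-side use::
 *
 *	rxreq
 *	txresp -body "ok"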
* */ #define cmd_rxreq cmd_rxmsg #define cmd_rxresp cmd_rxmsg static void cmd_rxmsg(CMD_ARGS) { struct stream *s; struct frame *f = NULL; int end_stream; int rcv = 0; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); if (!strcmp(av[0], "rxreq")) ONLY_H2_SERVER(s->hp, av); else ONLY_H2_CLIENT(s->hp, av); do { replace_frame(&f, rxstuff(s)); CHECK_OBJ_ORNULL(f, FRAME_MAGIC); if (f == NULL) return; } while (f->type == TYPE_WINDOW_UPDATE); rcv++; CHKFRAME(f->type, TYPE_HEADERS, rcv, *av); end_stream = f->flags & END_STREAM; while (!(f->flags & END_HEADERS)) { replace_frame(&f, rxstuff(s)); CHECK_OBJ_ORNULL(f, FRAME_MAGIC); if (f == NULL) return; rcv++; CHKFRAME(f->type, TYPE_CONTINUATION, rcv, *av); } while (!end_stream) { replace_frame(&f, rxstuff(s)); CHECK_OBJ_ORNULL(f, FRAME_MAGIC); if (f == NULL) break; rcv++; CHKFRAME(f->type, TYPE_DATA, rcv, *av); end_stream = f->flags & END_STREAM; } replace_frame(&s->frame, f); } /* SECTION: stream.spec.data_12 rxpush * * This works like ``rxhdrs``, expecting a PUSH frame and then zero or more * CONTINUATION frames. * * \-all * Keep waiting for CONTINUATION frames until END_HEADERS flag is seen. * * \-some INT * Retrieve INT - 1 CONTINUATION frames after the PUSH frame. * */ static void cmd_rxpush(CMD_ARGS) { struct stream *s; struct frame *f = NULL; char *p; int loop = 0; unsigned long int times = 1; unsigned rcv = 0; enum h2_type_e expect = TYPE_PUSH_PROMISE; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); while (*++av) { if (!strcmp(*av, "-some")) { STRTOU32_CHECK(times, av, p, vl, "-some", 0); if (!times) vtc_fatal(vl, "-some argument must be more" "than 0 (found \"%s\")\n", *av); } else if (!strcmp(*av, "-all")) { loop = 1; } else break; } if (*av != NULL) vtc_fatal(vl, "Unknown rxpush spec: %s\n", *av); do { f = rxstuff(s); if (!f) return; rcv++; CHKFRAME(f->type, expect, rcv, "rxpush"); expect = TYPE_CONTINUATION; } while (rcv < times || (loop && !(f->flags & END_HEADERS))); s->frame = f; } /* SECTION: stream.spec.winup_rxwinup rxwinup * * Receive a WINDOW_UPDATE frame. */ static void cmd_rxwinup(CMD_ARGS) { struct stream *s; struct frame *f; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); s->frame = rxstuff(s); CAST_OBJ_NOTNULL(f, s->frame, FRAME_MAGIC); CHKFRAME(f->type, TYPE_WINDOW_UPDATE, 0, *av); if (s->id == 0) s->hp->h2_win_peer->size += s->frame->md.winup_size; s->win_peer += s->frame->md.winup_size; } /* SECTION: stream.spec.settings_rxsettings rxsettings * * Receive a SETTINGS frame. */ static void cmd_rxsettings(CMD_ARGS) { struct stream *s, *s2; uint32_t val = 0; struct http *hp; struct frame *f; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); CAST_OBJ_NOTNULL(hp, s->hp, HTTP_MAGIC); s->frame = rxstuff(s); CAST_OBJ_NOTNULL(f, s->frame, FRAME_MAGIC); CHKFRAME(f->type, TYPE_SETTINGS, 0, *av); if (! isnan(f->md.settings[SETTINGS_INITIAL_WINDOW_SIZE])) { val = (uint32_t)f->md.settings[SETTINGS_INITIAL_WINDOW_SIZE]; VTAILQ_FOREACH(s2, &hp->streams, list) s2->win_peer += (val - hp->h2_win_peer->init); hp->h2_win_peer->init = val; } } #define RXFUNC(lctype, upctype) \ static void \ cmd_rx ## lctype(CMD_ARGS) { \ struct stream *s; \ (void)av; \ CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); \ s->frame = rxstuff(s); \ if (s->frame != NULL && s->frame->type != TYPE_ ## upctype) \ vtc_fatal(vl, \ "Wrong frame type %s (%d) wanted %s", \ s->frame->type < TYPE_MAX ? \ h2_types[s->frame->type] : "?", \ s->frame->type, #upctype); \ } /* SECTION: stream.spec.prio_rxprio rxprio * * Receive a PRIORITY frame. 
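 *
 * Example (illustrative)::
 *
 *	rxprio
 *	expect prio.weight == 10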
*/ RXFUNC(prio, PRIORITY) /* SECTION: stream.spec.reset_rxrst rxrst * * Receive a RST_STREAM frame. */ RXFUNC(rst, RST_STREAM) /* SECTION: stream.spec.ping_rxping rxping * * Receive a PING frame. */ RXFUNC(ping, PING) /* SECTION: stream.spec.goaway_rxgoaway rxgoaway * * Receive a GOAWAY frame. */ RXFUNC(goaway, GOAWAY) /* SECTION: stream.spec.frame_rxframe * * Receive a frame, any frame. */ static void cmd_rxframe(CMD_ARGS) { struct stream *s; (void)vl; (void)av; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); if (rxstuff(s) == NULL) vtc_fatal(s->vl, "No frame received"); } static void cmd_expect(CMD_ARGS) { struct http *hp; struct stream *s; const char *lhs; char *cmp; const char *rhs; char buf[20]; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); hp = s->hp; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); AZ(strcmp(av[0], "expect")); av++; AN(av[0]); AN(av[1]); AN(av[2]); AZ(av[3]); PTOK(pthread_mutex_lock(&s->hp->mtx)); lhs = cmd_var_resolve(s, av[0], buf); cmp = av[1]; rhs = cmd_var_resolve(s, av[2], buf); vtc_expect(vl, av[0], lhs, cmp, av[2], rhs); PTOK(pthread_mutex_unlock(&s->hp->mtx)); } /* SECTION: stream.spec.gunzip gunzip * * Same as the ``gunzip`` command for HTTP/1. */ static void cmd_gunzip(CMD_ARGS) { struct http *hp; struct stream *s; (void)av; (void)vl; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); hp = s->hp; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); vtc_gunzip(s->hp, s->body, &s->bodylen); } /* SECTION: stream.spec.write_body * * write_body STRING * Same as the ``write_body`` command for HTTP/1. */ static void cmd_write_body(CMD_ARGS) { struct stream *s; (void)vl; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); AN(av[0]); AN(av[1]); AZ(av[2]); AZ(strcmp(av[0], "write_body")); if (VFIL_writefile(NULL, av[1], s->body, s->bodylen) != 0) vtc_fatal(s->vl, "failed to write body: %s (%d)", strerror(errno), errno); } /* SECTION: stream.spec Specification * * The specification of a stream follows the exact same rules as one for a * client or a server. 
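 *
 * The stream_cmds table below is the complete list of commands accepted
 * inside a stream specification.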
*/ static const struct cmds stream_cmds[] = { #define CMD_STREAM(n) { #n, cmd_##n }, /* spec */ CMD_STREAM(expect) CMD_STREAM(gunzip) CMD_STREAM(rxcont) CMD_STREAM(rxdata) CMD_STREAM(rxframe) CMD_STREAM(rxgoaway) CMD_STREAM(rxhdrs) CMD_STREAM(rxping) CMD_STREAM(rxprio) CMD_STREAM(rxpush) CMD_STREAM(rxreq) CMD_STREAM(rxresp) CMD_STREAM(rxrst) CMD_STREAM(rxsettings) CMD_STREAM(rxwinup) CMD_STREAM(sendhex) CMD_STREAM(txcont) CMD_STREAM(txdata) CMD_STREAM(txgoaway) CMD_STREAM(txping) CMD_STREAM(txprio) CMD_STREAM(txpush) CMD_STREAM(txreq) CMD_STREAM(txresp) CMD_STREAM(txrst) CMD_STREAM(txsettings) CMD_STREAM(txwinup) CMD_STREAM(write_body) { NULL, NULL } #undef CMD_STREAM }; static void * stream_thread(void *priv) { struct stream *s; CAST_OBJ_NOTNULL(s, priv, STREAM_MAGIC); parse_string(s->vl, s, s->spec); vtc_log(s->vl, 2, "Ending stream %u", s->id); return (NULL); } /********************************************************************** * Allocate and initialize a stream */ static struct stream * stream_new(const char *name, struct http *h) { char *p, buf[20]; struct stream *s; if (!strcmp("next", name)) { if (h->last_stream > 0) bprintf(buf, "%d", h->last_stream + 2); else bprintf(buf, "%d", 1); name = buf; } ALLOC_OBJ(s, STREAM_MAGIC); AN(s); PTOK(pthread_cond_init(&s->cond, NULL)); REPLACE(s->name, name); AN(s->name); VTAILQ_INIT(&s->fq); s->win_self = h->h2_win_self->init; s->win_peer = h->h2_win_peer->init; s->vl = vtc_logopen("%s.%s", h->sess->name, name); vtc_log_set_cmd(s->vl, stream_cmds); s->weight = 16; s->dependency = 0; STRTOU32(s->id, name, p, s->vl, "stream"); if (s->id & (1U << 31)) vtc_fatal(s->vl, "Stream id must be a 31-bits integer " "(found %s)", name); CHECK_OBJ_NOTNULL(h, HTTP_MAGIC); s->hp = h; h->last_stream = s->id; //bprintf(s->connect, "%s", "${v1_sock}"); PTOK(pthread_mutex_lock(&h->mtx)); VTAILQ_INSERT_HEAD(&h->streams, s, list); PTOK(pthread_mutex_unlock(&h->mtx)); return (s); } /********************************************************************** * Clean up stream */ static void stream_delete(struct stream *s) { struct frame *f, *f2; CHECK_OBJ_NOTNULL(s, STREAM_MAGIC); VTAILQ_FOREACH_SAFE(f, &s->fq, list, f2) { VTAILQ_REMOVE(&s->fq, f, list); clean_frame(&f); } vtc_logclose(s->vl); clean_headers(s->req); clean_headers(s->resp); AZ(s->frame); free(s->body); free(s->spec); free(s->name); FREE_OBJ(s); } /********************************************************************** * Start the stream thread */ static void stream_start(struct stream *s) { CHECK_OBJ_NOTNULL(s, STREAM_MAGIC); vtc_log(s->hp->vl, 2, "Starting stream %s (%p)", s->name, s); PTOK(pthread_create(&s->tp, NULL, stream_thread, s)); s->running = 1; } /********************************************************************** * Wait for stream thread to stop */ static void stream_wait(struct stream *s) { void *res; struct frame *f, *f2; CHECK_OBJ_NOTNULL(s, STREAM_MAGIC); vtc_log(s->hp->vl, 2, "Waiting for stream %u", s->id); PTOK(pthread_join(s->tp, &res)); if (res != NULL) vtc_fatal(s->hp->vl, "Stream %u returned \"%s\"", s->id, (char *)res); VTAILQ_FOREACH_SAFE(f, &s->fq, list, f2) { VTAILQ_REMOVE(&s->fq, f, list); clean_frame(&f); } clean_frame(&s->frame); s->tp = 0; s->running = 0; } /********************************************************************** * Run the stream thread */ static void stream_run(struct stream *s) { stream_start(s); stream_wait(s); } /* SECTION: client-server.spec.stream * * stream * HTTP/2 introduces the concept of streams, and these come with * their own specification, and 
as it's quite big, have been moved * to their own chapter. * * SECTION: stream stream * * (note: this section is at the top-level for easier navigation, but * it's part of the client/server specification) * * Streams map roughly to a request in HTTP/2, a request is sent on * stream N, the response too, then the stream is discarded. The main * exception is the first stream, 0, that serves as coordinator. * * Stream syntax follow the client/server one:: * * stream ID [SPEC] [ACTION] * * ID is the HTTP/2 stream number, while SPEC describes what will be * done in that stream. If ID has the value ``next``, the actual stream * number is computed based on the last one. * * Note that, when parsing a stream action, if the entity isn't operating * in HTTP/2 mode, these spec is ran before:: * * txpri/rxpri # client/server * stream 0 { * txsettings * rxsettings * txsettings -ack * rxsettings * expect settings.ack == true * } -run * * And HTTP/2 mode is then activated before parsing the specification. * * SECTION: stream.actions Actions * * \-start * Run the specification in a thread, giving back control immediately. * * \-wait * Wait for the started thread to finish running the spec. * * \-run * equivalent to calling ``-start`` then ``-wait``. */ void cmd_stream(CMD_ARGS) { struct stream *s; struct http *h; (void)vl; CAST_OBJ_NOTNULL(h, priv, HTTP_MAGIC); AZ(strcmp(av[0], "stream")); av++; VTAILQ_FOREACH(s, &h->streams, list) if (!strcmp(s->name, av[0])) break; if (s == NULL) s = stream_new(av[0], h); av++; for (; *av != NULL; av++) { if (vtc_error) break; if (!strcmp(*av, "-wait")) { stream_wait(s); continue; } /* Don't muck about with a running client */ if (s->running) stream_wait(s); if (!strcmp(*av, "-start")) { stream_start(s); continue; } if (!strcmp(*av, "-run")) { stream_run(s); continue; } if (**av == '-') vtc_fatal(vl, "Unknown stream argument: %s", *av); REPLACE(s->spec, *av); } } void b64_settings(const struct http *hp, const char *s) { uint16_t i; uint64_t v, vv; const char *buf; int shift; while (*s) { v = 0; for (shift = 42; shift >= 0; shift -= 6) { if (*s >= 'A' && *s <= 'Z') vv = (*s - 'A'); else if (*s >= 'a' && *s <= 'z') vv = (*s - 'a') + 26; else if (*s >= '0' && *s <= '9') vv = (*s - '0') + 52; else if (*s == '-') vv = 62; else if (*s == '_') vv = 63; else vtc_fatal(hp->vl, "Bad \"HTTP2-Settings\" header"); v |= vv << shift; s++; } i = v >> 32; v &= 0xffff; if (i <= SETTINGS_MAX) buf = h2_settings[i]; else buf = "unknown"; if (v == 1) { if (hp->sfd) assert(HPK_ResizeTbl(hp->encctx, v) != hpk_err); else assert(HPK_ResizeTbl(hp->decctx, v) != hpk_err); } vtc_log(hp->vl, 4, "Upgrade: %s (%d): %ju", buf, i, (intmax_t)v); } } void start_h2(struct http *hp) { CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); PTOK(pthread_mutex_init(&hp->mtx, NULL)); PTOK(pthread_cond_init(&hp->cond, NULL)); VTAILQ_INIT(&hp->streams); hp->h2_win_self->init = 0xffff; hp->h2_win_self->size = 0xffff; hp->h2_win_peer->init = 0xffff; hp->h2_win_peer->size = 0xffff; hp->h2 = 1; hp->decctx = HPK_NewCtx(4096); hp->encctx = HPK_NewCtx(4096); PTOK(pthread_create(&hp->tp, NULL, receive_frame, hp)); } void stop_h2(struct http *hp) { struct stream *s, *s2; CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC); VTAILQ_FOREACH_SAFE(s, &hp->streams, list, s2) { if (s->running) stream_wait(s); PTOK(pthread_mutex_lock(&hp->mtx)); VTAILQ_REMOVE(&hp->streams, s, list); PTOK(pthread_mutex_unlock(&hp->mtx)); stream_delete(s); } PTOK(pthread_mutex_lock(&hp->mtx)); hp->h2 = 0; PTOK(pthread_cond_signal(&hp->cond)); PTOK(pthread_mutex_unlock(&hp->mtx)); 
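	/*
	 * hp->h2 was cleared and the condition signalled under the lock
	 * above; wait for the frame-receiver thread to exit before the
	 * HPACK contexts it may still be using are freed.
	 */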
PTOK(pthread_join(hp->tp, NULL)); HPK_FreeCtx(hp->decctx); HPK_FreeCtx(hp->encctx); PTOK(pthread_mutex_destroy(&hp->mtx)); PTOK(pthread_cond_destroy(&hp->cond)); } varnish-7.5.0/bin/varnishtest/vtc_log.c000066400000000000000000000161761457605730600201500ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include #include #include #include #include #include "vtc.h" #include "vtc_log.h" #include "vtim.h" static pthread_mutex_t vtclog_mtx; static char *vtclog_buf; static unsigned vtclog_left; static pthread_key_t log_key; static double t0; void vtc_log_set_cmd(struct vtclog *vl, const struct cmds *cmds) { AN(cmds); vl->cmds = cmds; } /**********************************************************************/ #define GET_VL(vl) \ do { \ CHECK_OBJ_NOTNULL(vl, VTCLOG_MAGIC); \ PTOK(pthread_mutex_lock(&vl->mtx)); \ vl->act = 1; \ VSB_clear(vl->vsb); \ } while(0) #define REL_VL(vl) \ do { \ AZ(VSB_finish(vl->vsb)); \ vtc_log_emit(vl); \ VSB_clear(vl->vsb); \ vl->act = 0; \ PTOK(pthread_mutex_unlock(&vl->mtx)); \ } while(0) struct vtclog * vtc_logopen(const char *fmt, ...) { struct vtclog *vl; va_list ap; char buf[BUFSIZ]; va_start(ap, fmt); vbprintf(buf, fmt, ap); va_end(ap); ALLOC_OBJ(vl, VTCLOG_MAGIC); AN(vl); REPLACE(vl->id, buf); vl->vsb = VSB_new_auto(); PTOK(pthread_mutex_init(&vl->mtx, NULL)); PTOK(pthread_setspecific(log_key, vl)); return (vl); } void vtc_logclose(void *arg) { struct vtclog *vl; CAST_OBJ_NOTNULL(vl, arg, VTCLOG_MAGIC); if (pthread_getspecific(log_key) == vl) PTOK(pthread_setspecific(log_key, NULL)); VSB_destroy(&vl->vsb); PTOK(pthread_mutex_destroy(&vl->mtx)); REPLACE(vl->id, NULL); FREE_OBJ(vl); } static void v_noreturn_ vtc_logfail(void) { vtc_error = 2; if (!pthread_equal(pthread_self(), vtc_thread)) pthread_exit(NULL); else exit(fail_out()); } static const char * const lead[] = { "----", "* ", "** ", "*** ", "****" }; #define NLEAD (sizeof(lead)/sizeof(lead[0])) static void vtc_leadinv(const struct vtclog *vl, int lvl, const char *fmt, va_list ap) { assert(lvl < (int)NLEAD); assert(lvl >= 0); VSB_printf(vl->vsb, "%s %-5s ", lead[lvl < 0 ? 
1: lvl], vl->id); if (fmt != NULL) (void)VSB_vprintf(vl->vsb, fmt, ap); } static void vtc_leadin(const struct vtclog *vl, int lvl, const char *fmt, ...) { va_list ap; va_start(ap, fmt); vtc_leadinv(vl, lvl, fmt, ap); va_end(ap); } static void vtc_log_emit(const struct vtclog *vl) { unsigned l; int i; int t_this; static int t_last = -1; l = VSB_len(vl->vsb); if (l == 0) return; t_this = (int)round((VTIM_mono() - t0) * 1000); PTOK(pthread_mutex_lock(&vtclog_mtx)); if (t_last != t_this) { assert(vtclog_left > 25); i = snprintf(vtclog_buf, vtclog_left, "**** dT %d.%03d\n", t_this / 1000, t_this % 1000); t_last = t_this; vtclog_buf += i; vtclog_left -= i; } assert(vtclog_left > l); memcpy(vtclog_buf, VSB_data(vl->vsb), l); vtclog_buf += l; *vtclog_buf = '\0'; vtclog_left -= l; PTOK(pthread_mutex_unlock(&vtclog_mtx)); } void vtc_fatal(struct vtclog *vl, const char *fmt, ...) { GET_VL(vl); va_list ap; va_start(ap, fmt); vtc_leadinv(vl, 0, fmt, ap); VSB_putc(vl->vsb, '\n'); va_end(ap); REL_VL(vl); vtc_logfail(); } void vtc_log(struct vtclog *vl, int lvl, const char *fmt, ...) { GET_VL(vl); va_list ap; va_start(ap, fmt); if (lvl >= 0) { vtc_leadinv(vl, lvl, fmt, ap); VSB_putc(vl->vsb, '\n'); } va_end(ap); REL_VL(vl); if (lvl == 0) vtc_logfail(); } /********************************************************************** * Dump a string */ #define MAX_DUMP 8192 void vtc_dump(struct vtclog *vl, int lvl, const char *pfx, const char *str, int len) { char buf[64]; int quote = VSB_QUOTE_UNSAFE | VSB_QUOTE_ESCHEX; AN(pfx); GET_VL(vl); if (str == NULL) vtc_leadin(vl, lvl, "%s(null)\n", pfx); else { bprintf(buf, "%s %-5s %s|", lead[lvl < 0 ? 1: lvl], vl->id, pfx); if (len < 0) len = strlen(str); else if (str[0] == 0x1f && (uint8_t)str[1] == 0x8b) quote = VSB_QUOTE_HEX; // Dump gzip data in HEX VSB_quote_pfx(vl->vsb, buf, str, len > MAX_DUMP ? MAX_DUMP : len, quote); if (quote == VSB_QUOTE_HEX) VSB_putc(vl->vsb, '\n'); if (len > MAX_DUMP) VSB_printf(vl->vsb, "%s [...] (%d)\n", buf, len - MAX_DUMP); } REL_VL(vl); if (lvl == 0) vtc_logfail(); } /********************************************************************** * Hexdump */ void vtc_hexdump(struct vtclog *vl, int lvl, const char *pfx, const void *ptr, unsigned len) { int nl = 1; unsigned l; const uint8_t *ss = ptr; AN(pfx); GET_VL(vl); if (ss == NULL) vtc_leadin(vl, lvl, "%s(null)\n", pfx); else { for (l = 0; l < len; l++, ss++) { if (l > 512) { VSB_cat(vl->vsb, "..."); break; } if (nl) { vtc_leadin(vl, lvl, "%s| ", pfx); nl = 0; } VSB_printf(vl->vsb, " %02x", *ss); if ((l & 0xf) == 0xf) { VSB_cat(vl->vsb, "\n"); nl = 1; } } } if (!nl) VSB_cat(vl->vsb, "\n"); REL_VL(vl); if (lvl == 0) vtc_logfail(); } /**********************************************************************/ static void v_noreturn_ vtc_log_VAS_Fail(const char *func, const char *file, int line, const char *cond, enum vas_e why) { struct vtclog *vl; int e = errno; (void)why; vl = pthread_getspecific(log_key); if (vl == NULL || vl->act) { fprintf(stderr, "Assert error in %s(), %s line %d:\n" " Condition(%s) not true. (errno=%d %s)\n", func, file, line, cond, e, strerror(e)); } else vtc_fatal(vl, "Assert error in %s(), %s line %d:" " Condition(%s) not true." 
" Errno=%d %s", func, file, line, cond, e, strerror(e)); abort(); } /**********************************************************************/ void vtc_loginit(char *buf, unsigned buflen) { VAS_Fail_Func = vtc_log_VAS_Fail; t0 = VTIM_mono(); vtclog_buf = buf; vtclog_left = buflen; PTOK(pthread_mutex_init(&vtclog_mtx, NULL)); PTOK(pthread_key_create(&log_key, NULL)); } varnish-7.5.0/bin/varnishtest/vtc_log.h000066400000000000000000000031301457605730600201370ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ struct vtclog { unsigned magic; #define VTCLOG_MAGIC 0x82731202 char *id; struct vsb *vsb; pthread_mutex_t mtx; int act; const struct cmds *cmds; }; varnish-7.5.0/bin/varnishtest/vtc_logexp.c000066400000000000000000000502431457605730600206560ustar00rootroot00000000000000/*- * Copyright (c) 2008-2015 Varnish Software AS * All rights reserved. * * Author: Martin Blix Grydeland * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #ifdef VTEST_WITH_VTC_LOGEXPECT /* SECTION: logexpect logexpect * * Reads the VSL and looks for records matching a given specification. It will * process records trying to match the first pattern, and when done, will * continue processing, trying to match the following pattern. If a pattern * isn't matched, the test will fail. * * logexpect threads are declared this way:: * * logexpect lNAME -v [-g ] [-d 0|1] [-q query] \ * [vsl arguments] { * expect * expect * fail add * fail clear * abort * ... * } [-start|-wait|-run] * * And once declared, you can start them, or wait on them:: * * logexpect lNAME <-start|-wait> * * With: * * lNAME * Name the logexpect thread, it must start with 'l'. * * \-v id * Specify the varnish instance to use (most of the time, id=v1). * * \-g * Start processing log records at the head of the log instead of the * tail. * * \-q query * Filter records using a query expression, see ``man vsl-query`` for * more information. Multiple -q options are not supported. * * \-m * Also emit log records for misses (only for debugging) * * \-err * Invert the meaning of success. Usually called once to expect the * logexpect to fail * * \-start * Start the logexpect thread in the background. * * \-wait * Wait for the logexpect thread to finish * * \-run * Equivalent to "-start -wait". * * VSL arguments (similar to the varnishlog options): * * \-C * Use caseless regex * * \-i * Include tags * * \-I <[taglist:]regex> * Include by regex * * \-T * Transaction end timeout * * expect specification: * * skip: [uint|*|?] * Max number of record to skip * * vxid: [uint|*|=] * vxid to match * * tag: [tagname|*|=] * Tag to match against * * regex: * regular expression to match against (optional) * * For skip, vxid and tag, '*' matches anything, '=' expects the value of the * previous matched record. The '?' marker is equivalent to zero, expecting a * match on the next record. The difference is that '?' can be used when the * order of individual consecutive logs is not deterministic. In other words, * lines from a block of alternatives marked by '?' can be matched in any order, * but all need to match eventually. * * fail specification: * * add: Add to the fail list * * Arguments are equivalent to expect, except for skip missing * * clear: Clear the fail list * * Any number of fail specifications can be active during execution of * a logexpect. All active fail specifications are matched against every * log line and, if any match, the logexpect fails immediately. * * For a logexpect to end successfully, there must be no specs on the fail list, * so logexpects should always end with * * expect * fail clear * * .. XXX can we come up with a better solution which is still safe? * * abort specification: * * abort(3) varnishtest, intended to help debugging of the VSL client library * itself. 
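 *
 * A short illustrative example (the instance name, grouping and tag
 * values are made up for this sketch)::
 *
 *	logexpect l1 -v v1 -g request {
 *		expect * *	ReqMethod	GET
 *		expect * =	ReqURL		"^/foo"
 *		expect * =	End
 *		fail clear
 *	} -start
 *
 *	# ... drive traffic through v1 here ...
 *
 *	logexpect l1 -wait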
*/ #include "config.h" #include #include #include #include #include "vapi/vsm.h" #include "vapi/vsl.h" #include "vtc.h" #include "vtim.h" #include "vre.h" #define LE_ANY (-1) #define LE_LAST (-2) #define LE_ALT (-3) #define LE_SEEN (-4) #define LE_FAIL (-5) #define LE_CLEAR (-6) // clear fail list #define LE_ABORT (-7) struct logexp_test { unsigned magic; #define LOGEXP_TEST_MAGIC 0x6F62B350 VTAILQ_ENTRY(logexp_test) list; VTAILQ_ENTRY(logexp_test) faillist; struct vsb *str; int64_t vxid; int tag; vre_t *vre; int skip_max; }; VTAILQ_HEAD(tests_head,logexp_test); struct logexp { unsigned magic; #define LOGEXP_MAGIC 0xE81D9F1B VTAILQ_ENTRY(logexp) list; char *name; char *vname; struct vtclog *vl; char run; struct tests_head tests; struct logexp_test *test; int skip_cnt; int64_t vxid_last; int tag_last; struct tests_head fail; int m_arg; int err_arg; int d_arg; enum VSL_grouping_e g_arg; char *query; struct vsm *vsm; struct VSL_data *vsl; struct VSLQ *vslq; pthread_t tp; }; static VTAILQ_HEAD(, logexp) logexps = VTAILQ_HEAD_INITIALIZER(logexps); static cmd_f cmd_logexp_expect; static cmd_f cmd_logexp_fail; static cmd_f cmd_logexp_abort; static const struct cmds logexp_cmds[] = { { "expect", cmd_logexp_expect }, { "fail", cmd_logexp_fail }, { "abort", cmd_logexp_abort }, { NULL, NULL }, }; static void logexp_delete_tests(struct logexp *le) { struct logexp_test *test; CHECK_OBJ_NOTNULL(le, LOGEXP_MAGIC); while (!VTAILQ_EMPTY(&le->tests)) { test = VTAILQ_FIRST(&le->tests); CHECK_OBJ_NOTNULL(test, LOGEXP_TEST_MAGIC); VTAILQ_REMOVE(&le->tests, test, list); VSB_destroy(&test->str); if (test->vre) VRE_free(&test->vre); FREE_OBJ(test); } } static void logexp_delete(struct logexp *le) { CHECK_OBJ_NOTNULL(le, LOGEXP_MAGIC); AZ(le->run); AN(le->vsl); VSL_Delete(le->vsl); AZ(le->vslq); logexp_delete_tests(le); free(le->name); free(le->vname); free(le->query); VSM_Destroy(&le->vsm); vtc_logclose(le->vl); FREE_OBJ(le); } static struct logexp * logexp_new(const char *name, const char *varg) { struct logexp *le; struct vsb *n_arg; ALLOC_OBJ(le, LOGEXP_MAGIC); AN(le); REPLACE(le->name, name); le->vl = vtc_logopen("%s", name); vtc_log_set_cmd(le->vl, logexp_cmds); VTAILQ_INIT(&le->tests); le->d_arg = 0; le->g_arg = VSL_g_vxid; le->vsm = VSM_New(); le->vsl = VSL_New(); AN(le->vsm); AN(le->vsl); VTAILQ_INSERT_TAIL(&logexps, le, list); REPLACE(le->vname, varg); n_arg = macro_expandf(le->vl, "${tmpdir}/%s", varg); if (n_arg == NULL) vtc_fatal(le->vl, "-v argument problems"); if (VSM_Arg(le->vsm, 'n', VSB_data(n_arg)) <= 0) vtc_fatal(le->vl, "-v argument error: %s", VSM_Error(le->vsm)); VSB_destroy(&n_arg); if (VSM_Attach(le->vsm, -1)) vtc_fatal(le->vl, "VSM_Attach: %s", VSM_Error(le->vsm)); return (le); } static void logexp_clean(const struct tests_head *head) { struct logexp_test *test; VTAILQ_FOREACH(test, head, list) if (test->skip_max == LE_SEEN) test->skip_max = LE_ALT; } static struct logexp_test * logexp_alt(struct logexp_test *test) { assert(test->skip_max == LE_ALT); do test = VTAILQ_NEXT(test, list); while (test != NULL && test->skip_max == LE_SEEN); if (test == NULL || test->skip_max != LE_ALT) return (NULL); return (test); } static void logexp_next(struct logexp *le) { CHECK_OBJ_NOTNULL(le, LOGEXP_MAGIC); if (le->test && le->test->skip_max == LE_ALT) { /* * if an alternative was not seen, continue at this expection * with the next vsl */ (void)0; } else if (le->test) { CHECK_OBJ_NOTNULL(le->test, LOGEXP_TEST_MAGIC); le->test = VTAILQ_NEXT(le->test, list); } else { logexp_clean(&le->tests); 
VTAILQ_INIT(&le->fail); le->test = VTAILQ_FIRST(&le->tests); } if (le->test == NULL) return; CHECK_OBJ(le->test, LOGEXP_TEST_MAGIC); switch (le->test->skip_max) { case LE_SEEN: logexp_next(le); return; case LE_CLEAR: vtc_log(le->vl, 3, "cond | fail clear"); VTAILQ_INIT(&le->fail); logexp_next(le); return; case LE_FAIL: vtc_log(le->vl, 3, "cond | %s", VSB_data(le->test->str)); VTAILQ_INSERT_TAIL(&le->fail, le->test, faillist); logexp_next(le); return; case LE_ABORT: abort(); NEEDLESS(return); default: vtc_log(le->vl, 3, "test | %s", VSB_data(le->test->str)); } } enum le_match_e { LEM_OK, LEM_SKIP, LEM_FAIL }; static enum le_match_e logexp_match(const struct logexp *le, struct logexp_test *test, const char *data, int64_t vxid, int tag, int type, int len) { const char *legend; int ok = 1, skip = 0, alt, fail, vxid_ok = 0; AN(le); AN(test); assert(test->skip_max != LE_SEEN); assert(test->skip_max != LE_CLEAR); if (test->vxid == LE_LAST) { if (le->vxid_last != vxid) ok = 0; vxid_ok = ok; } else if (test->vxid >= 0) { if (test->vxid != vxid) ok = 0; vxid_ok = ok; } if (test->tag == LE_LAST) { if (le->tag_last != tag) ok = 0; } else if (test->tag >= 0) { if (test->tag != tag) ok = 0; } if (test->vre && test->tag >= 0 && test->tag == tag && VRE_ERROR_NOMATCH == VRE_match(test->vre, data, len, 0, NULL)) ok = 0; alt = (test->skip_max == LE_ALT); fail = (test->skip_max == LE_FAIL); if (!ok && !alt && (test->skip_max == LE_ANY || test->skip_max > le->skip_cnt)) skip = 1; if (skip && vxid_ok && tag == SLT_End) fail = 1; if (fail) { if (ok) { legend = "fail"; } else if (skip) { legend = "end"; skip = 0; } else if (le->m_arg) { legend = "fmiss"; } else { legend = NULL; } } else if (ok) legend = "match"; else if (skip && le->m_arg) legend = "miss"; else if (skip || alt) legend = NULL; else legend = "err"; if (legend != NULL) vtc_log(le->vl, 4, "%-5s| %10ju %-15s %c %.*s", legend, (intmax_t)vxid, VSL_tags[tag], type, len, data); if (ok) { if (alt) test->skip_max = LE_SEEN; return (LEM_OK); } if (alt) { test = logexp_alt(test); if (test == NULL) return (LEM_FAIL); vtc_log(le->vl, 3, "alt | %s", VSB_data(test->str)); return (logexp_match(le, test, data, vxid, tag, type, len)); } if (skip) return (LEM_SKIP); return (LEM_FAIL); } static enum le_match_e logexp_failchk(const struct logexp *le, const char *data, int64_t vxid, int tag, int type, int len) { struct logexp_test *test; static enum le_match_e r; if (VTAILQ_FIRST(&le->fail) == NULL) return (LEM_SKIP); VTAILQ_FOREACH(test, &le->fail, faillist) { r = logexp_match(le, test, data, vxid, tag, type, len); if (r == LEM_OK) return (LEM_FAIL); assert (r == LEM_FAIL); } return (LEM_OK); } static int logexp_done(const struct logexp *le) { return ((VTAILQ_FIRST(&le->fail) == NULL) && le->test == NULL); } static int v_matchproto_(VSLQ_dispatch_f) logexp_dispatch(struct VSL_data *vsl, struct VSL_transaction * const pt[], void *priv) { struct logexp *le; struct VSL_transaction *t; int i; enum le_match_e r; int64_t vxid; int tag, type, len; const char *data; CAST_OBJ_NOTNULL(le, priv, LOGEXP_MAGIC); for (i = 0; (t = pt[i]) != NULL; i++) { while (1 == VSL_Next(t->c)) { if (!VSL_Match(vsl, t->c)) continue; AN(t->c->rec.ptr); tag = VSL_TAG(t->c->rec.ptr); if (tag == SLT__Batch || tag == SLT_Witness) continue; vxid = VSL_ID(t->c->rec.ptr); data = VSL_CDATA(t->c->rec.ptr); len = VSL_LEN(t->c->rec.ptr) - 1; type = VSL_CLIENT(t->c->rec.ptr) ? 'c' : VSL_BACKEND(t->c->rec.ptr) ? 
'b' : '-'; r = logexp_failchk(le, data, vxid, tag, type, len); if (r == LEM_FAIL) return (r); if (le->test == NULL) { assert (r == LEM_OK); continue; } CHECK_OBJ_NOTNULL(le->test, LOGEXP_TEST_MAGIC); r = logexp_match(le, le->test, data, vxid, tag, type, len); if (r == LEM_FAIL) return (r); if (r == LEM_SKIP) { le->skip_cnt++; continue; } assert(r == LEM_OK); le->vxid_last = vxid; le->tag_last = tag; le->skip_cnt = 0; logexp_next(le); if (logexp_done(le)) return (1); } } return (0); } static void * logexp_thread(void *priv) { struct logexp *le; int i; CAST_OBJ_NOTNULL(le, priv, LOGEXP_MAGIC); AN(le->run); AN(le->vsm); AN(le->vslq); AZ(le->test); vtc_log(le->vl, 4, "begin|"); if (le->query != NULL) vtc_log(le->vl, 4, "qry | %s", le->query); logexp_next(le); while (!logexp_done(le) && !vtc_stop && !vtc_error) { i = VSLQ_Dispatch(le->vslq, logexp_dispatch, le); if (i == 2 && le->err_arg) { vtc_log(le->vl, 4, "done | failed as expected"); return (NULL); } if (i == 2) vtc_fatal(le->vl, "bad | expectation failed"); else if (i < 0) vtc_fatal(le->vl, "bad | dispatch failed (%d)", i); else if (i == 0 && ! logexp_done(le)) VTIM_sleep(0.01); } if (!logexp_done(le)) vtc_fatal(le->vl, "bad | outstanding expectations"); vtc_log(le->vl, 4, "done |"); return (NULL); } static void logexp_close(struct logexp *le) { CHECK_OBJ_NOTNULL(le, LOGEXP_MAGIC); AN(le->vsm); if (le->vslq) VSLQ_Delete(&le->vslq); AZ(le->vslq); } static void logexp_start(struct logexp *le) { struct VSL_cursor *c; CHECK_OBJ_NOTNULL(le, LOGEXP_MAGIC); AN(le->vsl); AZ(le->vslq); AN(le->vsl); (void)VSM_Status(le->vsm); c = VSL_CursorVSM(le->vsl, le->vsm, (le->d_arg ? 0 : VSL_COPT_TAIL) | VSL_COPT_BATCH); if (c == NULL) vtc_fatal(le->vl, "VSL_CursorVSM: %s", VSL_Error(le->vsl)); le->vslq = VSLQ_New(le->vsl, &c, le->g_arg, le->query); if (le->vslq == NULL) { VSL_DeleteCursor(c); vtc_fatal(le->vl, "VSLQ_New: %s", VSL_Error(le->vsl)); } AZ(c); le->test = NULL; le->skip_cnt = 0; le->vxid_last = le->tag_last = -1; le->run = 1; PTOK(pthread_create(&le->tp, NULL, logexp_thread, le)); } static void logexp_wait(struct logexp *le) { void *res; CHECK_OBJ_NOTNULL(le, LOGEXP_MAGIC); vtc_log(le->vl, 2, "Waiting for logexp"); PTOK(pthread_join(le->tp, &res)); logexp_close(le); if (res != NULL && !vtc_stop) vtc_fatal(le->vl, "logexp returned \"%p\"", (char *)res); le->run = 0; } /* shared by expect and fail: parse from av[2] (vxid) onwards */ static void cmd_logexp_common(struct logexp *le, struct vtclog *vl, int skip_max, char * const *av) { vre_t *vre; struct vsb vsb[1]; int64_t vxid; int err, pos, tag; struct logexp_test *test; char *end, errbuf[VRE_ERROR_LEN]; if (!strcmp(av[2], "*")) vxid = LE_ANY; else if (!strcmp(av[2], "=")) vxid = LE_LAST; else { vxid = strtoll(av[2], &end, 10); if (*end != '\0' || vxid < 0) vtc_fatal(vl, "Not a positive integer: '%s'", av[2]); } if (!strcmp(av[3], "*")) tag = LE_ANY; else if (!strcmp(av[3], "=")) tag = LE_LAST; else { tag = VSL_Name2Tag(av[3], strlen(av[3])); if (tag < 0) vtc_fatal(vl, "Unknown tag name: '%s'", av[3]); } vre = NULL; if (av[4]) { vre = VRE_compile(av[4], 0, &err, &pos, 1); if (vre == NULL) { AN(VSB_init(vsb, errbuf, sizeof errbuf)); AZ(VRE_error(vsb, err)); AZ(VSB_finish(vsb)); VSB_fini(vsb); vtc_fatal(vl, "Regex error (%s): '%s' pos %d", errbuf, av[4], pos); } } ALLOC_OBJ(test, LOGEXP_TEST_MAGIC); AN(test); test->str = VSB_new_auto(); AN(test->str); AZ(VSB_printf(test->str, "%s %s %s %s ", av[0], av[1], av[2], av[3])); if (av[4]) VSB_quote(test->str, av[4], -1, 0); AZ(VSB_finish(test->str)); 
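	/*
	 * test->str above is only a printable copy of the spec kept for
	 * logging; the fields below are what the matcher actually uses.
	 */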
test->skip_max = skip_max; test->vxid = vxid; test->tag = tag; test->vre = vre; VTAILQ_INSERT_TAIL(&le->tests, test, list); } static void cmd_logexp_expect(CMD_ARGS) { struct logexp *le; int skip_max; char *end; CAST_OBJ_NOTNULL(le, priv, LOGEXP_MAGIC); if (av[1] == NULL || av[2] == NULL || av[3] == NULL) vtc_fatal(vl, "Syntax error"); if (av[4] != NULL && av[5] != NULL) vtc_fatal(vl, "Syntax error"); if (!strcmp(av[1], "*")) skip_max = LE_ANY; else if (!strcmp(av[1], "?")) skip_max = LE_ALT; else { skip_max = (int)strtol(av[1], &end, 10); if (*end != '\0' || skip_max < 0) vtc_fatal(vl, "Not a positive integer: '%s'", av[1]); } cmd_logexp_common(le, vl, skip_max, av); } static void cmd_logexp_fail(CMD_ARGS) { struct logexp *le; struct logexp_test *test; CAST_OBJ_NOTNULL(le, priv, LOGEXP_MAGIC); if (av[1] == NULL) vtc_fatal(vl, "Syntax error"); if (!strcmp(av[1], "clear")) { ALLOC_OBJ(test, LOGEXP_TEST_MAGIC); AN(test); test->skip_max = LE_CLEAR; test->str = VSB_new_auto(); AN(test->str); AZ(VSB_printf(test->str, "%s %s", av[0], av[1])); AZ(VSB_finish(test->str)); VTAILQ_INSERT_TAIL(&le->tests, test, list); return; } if (strcmp(av[1], "add")) vtc_fatal(vl, "Unknown fail argument '%s'", av[1]); if (av[2] == NULL || av[3] == NULL) vtc_fatal(vl, "Syntax error"); cmd_logexp_common(le, vl, LE_FAIL, av); } /* aid vsl debugging */ static void cmd_logexp_abort(CMD_ARGS) { struct logexp *le; CAST_OBJ_NOTNULL(le, priv, LOGEXP_MAGIC); cmd_logexp_common(le, vl, LE_ABORT, av); } static void logexp_spec(struct logexp *le, const char *spec) { CHECK_OBJ_NOTNULL(le, LOGEXP_MAGIC); logexp_delete_tests(le); parse_string(le->vl, le, spec); } void cmd_logexpect(CMD_ARGS) { struct logexp *le, *le2; int i; (void)priv; if (av == NULL) { /* Reset and free */ VTAILQ_FOREACH_SAFE(le, &logexps, list, le2) { CHECK_OBJ_NOTNULL(le, LOGEXP_MAGIC); VTAILQ_REMOVE(&logexps, le, list); if (le->run) { (void)pthread_cancel(le->tp); logexp_wait(le); } logexp_delete(le); } return; } AZ(strcmp(av[0], "logexpect")); av++; VTC_CHECK_NAME(vl, av[0], "Logexpect", 'l'); VTAILQ_FOREACH(le, &logexps, list) { if (!strcmp(le->name, av[0])) break; } if (le == NULL) { if (strcmp(av[1], "-v") || av[2] == NULL) vtc_fatal(vl, "new logexp lacks -v"); le = logexp_new(av[0], av[2]); av += 2; } av++; for (; *av != NULL; av++) { if (vtc_error) break; if (!strcmp(*av, "-wait")) { if (!le->run) vtc_fatal(le->vl, "logexp not -started '%s'", *av); logexp_wait(le); continue; } /* * We do an implict -wait if people muck about with a * running logexp. 
*/ if (le->run) logexp_wait(le); AZ(le->run); if (!strcmp(*av, "-v")) { if (av[1] == NULL || strcmp(av[1], le->vname)) vtc_fatal(le->vl, "-v argument cannot change"); av++; continue; } if (!strcmp(*av, "-d")) { if (av[1] == NULL) vtc_fatal(le->vl, "Missing -d argument"); le->d_arg = atoi(av[1]); av++; continue; } if (!strcmp(*av, "-g")) { if (av[1] == NULL) vtc_fatal(le->vl, "Missing -g argument"); i = VSLQ_Name2Grouping(av[1], strlen(av[1])); if (i < 0) vtc_fatal(le->vl, "Unknown grouping '%s'", av[1]); le->g_arg = (enum VSL_grouping_e)i; av++; continue; } if (!strcmp(*av, "-q")) { if (av[1] == NULL) vtc_fatal(le->vl, "Missing -q argument"); REPLACE(le->query, av[1]); av++; continue; } if (!strcmp(*av, "-m")) { le->m_arg = !le->m_arg; continue; } if (!strcmp(*av, "-err")) { le->err_arg = !le->err_arg; continue; } if (!strcmp(*av, "-start")) { logexp_start(le); continue; } if (!strcmp(*av, "-run")) { logexp_start(le); logexp_wait(le); continue; } if (**av == '-') { if (av[1] != NULL) { if (VSL_Arg(le->vsl, av[0][1], av[1])) { av++; continue; } vtc_fatal(le->vl, "%s", VSL_Error(le->vsl)); } vtc_fatal(le->vl, "Unknown logexp argument: %s", *av); } logexp_spec(le, *av); } } #endif /* VTEST_WITH_VTC_LOGEXPECT */ varnish-7.5.0/bin/varnishtest/vtc_main.c000066400000000000000000000515021457605730600203030ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include "config.h" #include #include #include #include #include #include #include #include #include #include #include #include "vtc.h" #include "vev.h" #include "vfil.h" #include "vnum.h" #include "vrnd.h" #include "vsa.h" #include "vss.h" #include "vsub.h" #include "vtcp.h" #include "vtim.h" #include "vct.h" static const char *argv0; struct buf { unsigned magic; #define BUF_MAGIC 0x39d1258a VTAILQ_ENTRY(buf) list; char *buf; struct vsb *diag; size_t bufsiz; }; static VTAILQ_HEAD(, buf) free_bufs = VTAILQ_HEAD_INITIALIZER(free_bufs); struct vtc_tst { unsigned magic; #define TST_MAGIC 0x618d8b88 VTAILQ_ENTRY(vtc_tst) list; const char *filename; char *script; unsigned ntodo; unsigned nwait; }; struct vtc_job { unsigned magic; #define JOB_MAGIC 0x1b5fc419 struct vtc_tst *tst; pid_t child; struct vev *ev; struct vev *evt; struct buf *bp; char *tmpdir; double t0; int killed; }; int iflg = 0; vtim_dur vtc_maxdur = 60; static unsigned vtc_bufsiz = 1024 * 1024; static VTAILQ_HEAD(, vtc_tst) tst_head = VTAILQ_HEAD_INITIALIZER(tst_head); static struct vev_root *vb; static int njob = 0; static int npar = 1; /* Number of parallel tests */ static int vtc_continue; /* Continue on error */ static int vtc_verbosity = 1; /* Verbosity Level */ static int vtc_good; static int vtc_fail; static int vtc_skip; static char *tmppath; static char *cwd = NULL; char *vmod_path = NULL; struct vsb *params_vsb = NULL; int leave_temp; static struct vsb *cbvsb; static int bad_backend_fd; static int cleaner_fd = -1; static pid_t cleaner_pid; const char *default_listen_addr; static struct buf * get_buf(void) { struct buf *bp; bp = VTAILQ_FIRST(&free_bufs); CHECK_OBJ_ORNULL(bp, BUF_MAGIC); if (bp != NULL) { VTAILQ_REMOVE(&free_bufs, bp, list); VSB_clear(bp->diag); } else { ALLOC_OBJ(bp, BUF_MAGIC); AN(bp); bp->bufsiz = vtc_bufsiz; bp->buf = mmap(NULL, bp->bufsiz, PROT_READ|PROT_WRITE, MAP_ANON | MAP_SHARED, -1, 0); assert(bp->buf != MAP_FAILED); bp->diag = VSB_new_auto(); AN(bp->diag); } memset(bp->buf, 0, bp->bufsiz); return (bp); } static void rel_buf(struct buf **bp) { CHECK_OBJ_NOTNULL(*bp, BUF_MAGIC); VTAILQ_INSERT_HEAD(&free_bufs, (*bp), list); *bp = NULL; } /********************************************************************** * Parse a -D option argument into a name/val pair, and insert * into extmacro list */ static int parse_D_opt(char *arg) { char *p, *q; p = arg; q = strchr(p, '='); if (!q) return (0); *q++ = '\0'; extmacro_def(p, NULL, "%s", q); return (1); } /********************************************************************** * Print usage */ static void v_noreturn_ usage(void) { fprintf(stderr, "usage: %s [options] file ...\n", argv0); #define FMT " %-28s # %s\n" fprintf(stderr, FMT, "-b size", "Set internal buffer size (default: 1M)"); fprintf(stderr, FMT, "-C", "Use cleaner subprocess"); fprintf(stderr, FMT, "-D name=val", "Define macro"); fprintf(stderr, FMT, "-i", "Find varnish binaries in build tree"); fprintf(stderr, FMT, "-j jobs", "Run this many tests in parallel"); fprintf(stderr, FMT, "-k", "Continue on test failure"); fprintf(stderr, FMT, "-L", "Always leave temporary vtc.*"); fprintf(stderr, FMT, "-l", "Leave temporary vtc.* if test fails"); fprintf(stderr, FMT, "-n iterations", "Run tests this many times"); fprintf(stderr, FMT, "-p name=val", "Pass a varnishd parameter"); fprintf(stderr, FMT, "-q", "Quiet mode: report only failures"); fprintf(stderr, FMT, "-t duration", "Time tests out after this long"); fprintf(stderr, FMT, "-v", "Verbose mode: always report test log"); exit(1); } 
/********************************************************************** * When running many tests, cleaning the tmpdir with "rm -rf" becomes * chore which limits our performance. * When the number of tests are above 100, we spawn a child-process * to do that for us. */ static void cleaner_do(const char *dirname) { char buf[BUFSIZ]; AZ(memcmp(dirname, tmppath, strlen(tmppath))); if (cleaner_pid > 0) { bprintf(buf, "%s\n", dirname); assert(write(cleaner_fd, buf, strlen(buf)) == strlen(buf)); return; } bprintf(buf, "exec /bin/rm -rf %s\n", dirname); AZ(system(buf)); } static void cleaner_setup(void) { int p[2], st; char buf[BUFSIZ]; char *q; pid_t pp; AZ(pipe(p)); assert(p[0] > STDERR_FILENO); assert(p[1] > STDERR_FILENO); cleaner_pid = fork(); assert(cleaner_pid >= 0); if (cleaner_pid == 0) { closefd(&p[1]); (void)nice(1); /* Not important */ setbuf(stdin, NULL); AZ(dup2(p[0], STDIN_FILENO)); while (fgets(buf, sizeof buf, stdin)) { AZ(memcmp(buf, tmppath, strlen(tmppath))); q = buf + strlen(buf); assert(q > buf); assert(q[-1] == '\n'); q[-1] = '\0'; /* Dont expend a shell on running /bin/rm */ pp = fork(); assert(pp >= 0); if (pp == 0) exit(execlp( "rm", "rm", "-rf", buf, (char*)0)); assert(waitpid(pp, &st, 0) == pp); AZ(st); } exit(0); } closefd(&p[0]); cleaner_fd = p[1]; } static void cleaner_neuter(void) { if (cleaner_pid > 0) closefd(&cleaner_fd); } static void cleaner_finish(void) { int st; if (cleaner_pid > 0) { closefd(&cleaner_fd); assert(waitpid(cleaner_pid, &st, 0) == cleaner_pid); AZ(st); } } /********************************************************************** * CallBack */ static int tst_cb(const struct vev *ve, int what) { struct vtc_job *jp; char buf[BUFSIZ]; int ecode; int i, stx; pid_t px; double t; FILE *f; char *p; CAST_OBJ_NOTNULL(jp, ve->priv, JOB_MAGIC); CHECK_OBJ_NOTNULL(jp->tst, TST_MAGIC); // printf("CB %p %s %d\n", ve, jp->tst->filename, what); if (what == 0) { jp->killed = 1; AZ(kill(-jp->child, SIGKILL)); /* XXX: Timeout */ } else { assert(what & (VEV__RD | VEV__HUP)); } *buf = '\0'; i = read(ve->fd, buf, sizeof buf); if (i > 0) VSB_bcat(jp->bp->diag, buf, i); if (i == 0) { njob--; px = wait4(jp->child, &stx, 0, NULL); assert(px == jp->child); t = VTIM_mono() - jp->t0; AZ(close(ve->fd)); ecode = WTERMSIG(stx); if (ecode == 0) ecode = WEXITSTATUS(stx); AZ(VSB_finish(jp->bp->diag)); VSB_clear(cbvsb); VSB_cat(cbvsb, jp->bp->buf); p = strchr(jp->bp->buf, '\0'); if (p > jp->bp->buf && p[-1] != '\n') VSB_putc(cbvsb, '\n'); VSB_quote_pfx(cbvsb, "* diag 0.0 ", VSB_data(jp->bp->diag), -1, VSB_QUOTE_NONL); AZ(VSB_finish(cbvsb)); rel_buf(&jp->bp); if ((ecode > 1 && vtc_verbosity) || vtc_verbosity > 1) printf("%s", VSB_data(cbvsb)); if (!ecode) vtc_good++; else if (ecode == 1) vtc_skip++; else vtc_fail++; if (leave_temp == 0 || (leave_temp == 1 && ecode <= 1)) { cleaner_do(jp->tmpdir); } else { bprintf(buf, "%s/LOG", jp->tmpdir); f = fopen(buf, "w"); AN(f); (void)fprintf(f, "%s\n", VSB_data(cbvsb)); AZ(fclose(f)); } free(jp->tmpdir); if (jp->killed) printf("# top TEST %s TIMED OUT (kill -9)\n", jp->tst->filename); if (ecode > 1) { printf("# top TEST %s FAILED (%.3f)", jp->tst->filename, t); if (WIFSIGNALED(stx)) printf(" signal=%d\n", WTERMSIG(stx)); else if (WIFEXITED(stx)) printf(" exit=%d\n", WEXITSTATUS(stx)); if (!vtc_continue) { /* XXX kill -9 other jobs ? */ exit(2); } } else if (vtc_verbosity) { printf("# top TEST %s %s (%.3f)\n", jp->tst->filename, ecode ? 
"skipped" : "passed", t); } if (jp->evt != NULL) { VEV_Stop(vb, jp->evt); free(jp->evt); } jp->tst->nwait--; if (jp->tst->nwait == 0) { free(jp->tst->script); FREE_OBJ(jp->tst); } FREE_OBJ(jp); return (1); } return (0); } /********************************************************************** * Start Test */ static void start_test(void) { struct vtc_tst *tp; int p[2], retval; struct vtc_job *jp; char tmpdir[PATH_MAX]; ALLOC_OBJ(jp, JOB_MAGIC); AN(jp); jp->bp = get_buf(); bprintf(tmpdir, "%s/vtc.%d.%08x", tmppath, (int)getpid(), (unsigned)random()); AZ(mkdir(tmpdir, 0755)); tp = VTAILQ_FIRST(&tst_head); CHECK_OBJ_NOTNULL(tp, TST_MAGIC); AN(tp->ntodo); tp->ntodo--; VTAILQ_REMOVE(&tst_head, tp, list); if (tp->ntodo > 0) VTAILQ_INSERT_TAIL(&tst_head, tp, list); jp->tst = tp; REPLACE(jp->tmpdir, tmpdir); AZ(pipe(p)); assert(p[0] > STDERR_FILENO); assert(p[1] > STDERR_FILENO); jp->t0 = VTIM_mono(); jp->child = fork(); assert(jp->child >= 0); if (jp->child == 0) { cleaner_neuter(); // Too dangerous to have around AZ(setpgid(getpid(), 0)); VFIL_null_fd(STDIN_FILENO); assert(dup2(p[1], STDOUT_FILENO) == STDOUT_FILENO); assert(dup2(p[1], STDERR_FILENO) == STDERR_FILENO); VSUB_closefrom(STDERR_FILENO + 1); retval = exec_file(jp->tst->filename, jp->tst->script, jp->tmpdir, jp->bp->buf, jp->bp->bufsiz); exit(retval); } closefd(&p[1]); jp->ev = VEV_Alloc(); AN(jp->ev); jp->ev->fd_flags = VEV__RD | VEV__HUP | VEV__ERR; jp->ev->fd = p[0]; jp->ev->priv = jp; jp->ev->callback = tst_cb; AZ(VEV_Start(vb, jp->ev)); jp->evt = VEV_Alloc(); AN(jp->evt); jp->evt->fd = -1; jp->evt->timeout = vtc_maxdur; jp->evt->priv = jp; jp->evt->callback = tst_cb; AZ(VEV_Start(vb, jp->evt)); } /********************************************************************** * i-mode = "we're inside a src-tree" * * Find the abs path to top of source dir from Makefile, if that * fails, fall back on "../../" * * Set PATH to all programs build directories * Set vmod_path to all vmods build directories * */ static char * top_dir(const char *makefile, const char *top_var) { const char *b, *e; char *var; AN(makefile); AN(top_var); assert(*top_var == '\n'); b = strstr(makefile, top_var); top_var++; if (b == NULL) { fprintf(stderr, "could not find '%s' in Makefile\n", top_var); return (NULL); } e = strchr(b + 1, '\n'); if (e == NULL) { fprintf(stderr, "No NL after '%s' in Makefile\n", top_var); return (NULL); } b = memchr(b, '/', e - b); if (b == NULL) { fprintf(stderr, "No '/' after '%s' in Makefile\n", top_var); return (NULL); } var = strndup(b, e - b); AN(var); return (var); } static void build_path(const char *topdir, const char *subdir, const char *pfx, const char *sfx, struct vsb *vsb) { char buf[PATH_MAX]; DIR *dir; struct dirent *de; struct stat st; const char *topsep = "", *sep = ""; if (*subdir != '\0') topsep = "/"; bprintf(buf, "%s%s%s/", topdir, topsep, subdir); dir = opendir(buf); XXXAN(dir); while (1) { de = readdir(dir); if (de == NULL) break; if (strncmp(de->d_name, pfx, strlen(pfx))) continue; bprintf(buf, "%s%s%s/%s", topdir, topsep, subdir, de->d_name); if (!stat(buf, &st) && S_ISDIR(st.st_mode)) { VSB_cat(vsb, sep); VSB_cat(vsb, buf); VSB_cat(vsb, sfx); sep = ":"; } } AZ(closedir(dir)); } static void i_mode(void) { struct vsb *vsb; char *p, *topbuild, *topsrc; /* * This code has a rather intimate knowledge of auto* generated * makefiles. 
*/ vsb = VSB_new_auto(); AN(vsb); p = VFIL_readfile(NULL, "Makefile", NULL); if (p == NULL) { fprintf(stderr, "No Makefile to search for -i flag.\n"); exit(2); } topbuild = top_dir(p, "\nabs_top_builddir"); topsrc = top_dir(p, "\nabs_top_srcdir"); free(p); if (topbuild == NULL || topsrc == NULL) { free(topbuild); free(topsrc); exit(2); } extmacro_def("topbuild", NULL, "%s", topbuild); extmacro_def("topsrc", NULL, "%s", topsrc); /* * Build $PATH which can find all programs in the build tree */ VSB_clear(vsb); VSB_cat(vsb, "PATH="); build_path(topbuild, "bin", "varnish", "", vsb); #ifdef WITH_CONTRIB VSB_putc(vsb, ':'); build_path(topsrc, "", "contrib", "", vsb); #endif VSB_printf(vsb, ":%s", getenv("PATH")); AZ(VSB_finish(vsb)); AZ(putenv(strdup(VSB_data(vsb)))); /* * Build vmod_path which can find all VMODs in the build tree */ VSB_clear(vsb); build_path(topbuild, "vmod", ".libs", "", vsb); AZ(VSB_finish(vsb)); vmod_path = strdup(VSB_data(vsb)); AN(vmod_path); free(topbuild); free(topsrc); VSB_destroy(&vsb); /* * strict jemalloc checking */ AZ(putenv(strdup("MALLOC_CONF=abort:true,junk:true"))); } /********************************************************************** * Figure out what IP related magic */ static void ip_magic(void) { const struct suckaddr *sa; char abuf[VTCP_ADDRBUFSIZE]; char pbuf[VTCP_PORTBUFSIZE]; char *s; /* * In FreeBSD jails localhost/127.0.0.1 becomes the jails IP# * XXX: IPv6-only hosts would have similar issue, but it is not * XXX: obvious how to cope. Ideally "127.0.0.1" would be * XXX: "localhost", but that doesn't work out of the box. * XXX: Things like "prefer_ipv6" parameter complicates things. */ sa = VSS_ResolveOne(NULL, "127.0.0.1", "0", 0, SOCK_STREAM, 0); AN(sa); bad_backend_fd = VTCP_bind(sa, NULL); if (bad_backend_fd < 0) { VSA_free(&sa); sa = VSS_ResolveFirst(NULL, "localhost", "0", 0, SOCK_STREAM, 0); AN(sa); bad_backend_fd = VTCP_bind(sa, NULL); } assert(bad_backend_fd >= 0); VTCP_myname(bad_backend_fd, abuf, sizeof abuf, pbuf, sizeof(pbuf)); extmacro_def("localhost", NULL, "%s", abuf); s = strdup(abuf); AN(s); #if defined (__APPLE__) /* * In MacOS a bound socket that is not listening will timeout * instead of refusing the connection so close it and hope * for the best. */ VTCP_close(&bad_backend_fd); #endif /* Expose a backend that is forever down. */ if (VSA_Get_Proto(sa) == AF_INET) extmacro_def("bad_backend", NULL, "%s:%s", abuf, pbuf); else extmacro_def("bad_backend", NULL, "[%s]:%s", abuf, pbuf); /* our default bind/listen address */ if (VSA_Get_Proto(sa) == AF_INET) bprintf(abuf, "%s:0", s); else bprintf(abuf, "[%s]:0", s); free(s); extmacro_def("listen_addr", NULL, "%s", abuf); default_listen_addr = strdup(abuf); AN(default_listen_addr); VSA_free(&sa); /* * We need an IP number which will not repond, ever, and that is a * lot harder than it sounds. This IP# is from RFC5737 and a * C-class broadcast at that. * If tests involving ${bad_ip} fails and you run linux, you should * check your /proc/sys/net/ipv4/ip_nonlocal_bind setting. 
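	 * (192.0.2.0/24 is TEST-NET-1, reserved for documentation, so the
	 * .255 broadcast address in that block should never answer.)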
*/ extmacro_def("bad_ip", NULL, "%s", "192.0.2.255"); } /********************************************************************** * Macros */ static char * v_matchproto_(macro_f) macro_func_date(int argc, char *const *argv, const char **err) { double t; char *s; assert(argc >= 2); AN(argv); AN(err); if (argc > 2) { *err = "macro does not take arguments"; return (NULL); } t = VTIM_real(); s = malloc(VTIM_FORMAT_SIZE); AN(s); VTIM_format(t, s); return (s); } static char * macro_func_string_repeat(int argc, char *const *argv, const char **err) { struct vsb vsb[1]; const char *p; char *res; size_t l; int i; if (argc != 4) { *err = "repeat takes 2 arguments"; return (NULL); } p = argv[2]; i = SF_Parse_Integer(&p, err); if (*err != NULL) return (NULL); if (*p != '\0' || i < 0) { *err = "invalid number of repetitions"; return (NULL); } l = (strlen(argv[3]) * i) + 1; res = malloc(l); AN(res); AN(VSB_init(vsb, res, l)); while (i > 0) { AZ(VSB_cat(vsb, argv[3])); i--; } AZ(VSB_finish(vsb)); VSB_fini(vsb); return (res); } static char * macro_func_string(int argc, char *const *argv, const char **err) { assert(argc >= 2); AN(argv); AN(err); if (argc == 2) { *err = "missing action"; return (NULL); } if (!strcmp(argv[2], "repeat")) return (macro_func_string_repeat(argc - 1, argv + 1, err)); *err = "unknown action"; return (NULL); } /********************************************************************** * Main */ static int read_file(const char *fn, int ntest) { struct vtc_tst *tp; char *p, *q; p = VFIL_readfile(NULL, fn, NULL); if (p == NULL) { fprintf(stderr, "Cannot stat file \"%s\": %s\n", fn, strerror(errno)); return (2); } for (q = p ;q != NULL && *q != '\0'; q++) { if (vct_islws(*q)) continue; if (*q != '#') break; q = strchr(q, '\n'); if (q == NULL) break; } if (q == NULL || *q == '\0') { fprintf(stderr, "File \"%s\" has no content.\n", fn); free(p); return (2); } if ((strncmp(q, "varnishtest", 11) || !isspace(q[11])) && (strncmp(q, "vtest", 5) || !isspace(q[5]))) { fprintf(stderr, "File \"%s\" doesn't start with" " 'vtest' or 'varnishtest'\n", fn); free(p); vtc_skip++; return (2); } ALLOC_OBJ(tp, TST_MAGIC); AN(tp); tp->filename = fn; tp->script = p; tp->ntodo = ntest; tp->nwait = ntest; VTAILQ_INSERT_TAIL(&tst_head, tp, list); return (0); } /********************************************************************** * Main */ int main(int argc, char * const *argv) { int ch, i; int ntest = 1; /* Run tests this many times */ int nstart = 0; int use_cleaner = 0; uintmax_t bufsiz; const char *p; char buf[PATH_MAX]; argv0 = strrchr(argv[0], '/'); if (argv0 == NULL) argv0 = argv[0]; else argv0++; if (getenv("TMPDIR") != NULL) tmppath = strdup(getenv("TMPDIR")); else tmppath = strdup("/tmp"); extmacro_def("pkg_version", NULL, PACKAGE_VERSION); extmacro_def("pkg_branch", NULL, PACKAGE_BRANCH); cwd = getcwd(buf, sizeof buf); extmacro_def("pwd", NULL, "%s", cwd); extmacro_def("date", macro_func_date, NULL); extmacro_def("string", macro_func_string, NULL); vmod_path = NULL; params_vsb = VSB_new_auto(); AN(params_vsb); p = getenv("VTEST_DURATION"); if (p == NULL) p = getenv("VARNISHTEST_DURATION"); if (p != NULL) vtc_maxdur = atoi(p); VRND_SeedAll(); cbvsb = VSB_new_auto(); AN(cbvsb); setbuf(stdout, NULL); setbuf(stderr, NULL); while ((ch = getopt(argc, argv, "b:CD:hij:kLln:p:qt:v")) != -1) { switch (ch) { case 'b': if (VNUM_2bytes(optarg, &bufsiz, 0)) { fprintf(stderr, "Cannot parse b opt '%s'\n", optarg); exit(2); } if (bufsiz > UINT_MAX) { fprintf(stderr, "Invalid b opt '%s'\n", optarg); exit(2); } vtc_bufsiz = 
(unsigned)bufsiz; break; case 'C': use_cleaner = !use_cleaner; break; case 'D': if (!parse_D_opt(optarg)) { fprintf(stderr, "Cannot parse D opt '%s'\n", optarg); exit(2); } break; case 'i': iflg = 1; break; case 'j': npar = strtoul(optarg, NULL, 0); break; case 'L': leave_temp = 2; break; case 'l': leave_temp = 1; break; case 'k': vtc_continue = !vtc_continue; break; case 'n': ntest = strtoul(optarg, NULL, 0); break; case 'p': VSB_cat(params_vsb, " -p "); VSB_quote(params_vsb, optarg, -1, 0); break; case 'q': if (vtc_verbosity > 0) vtc_verbosity--; break; case 't': vtc_maxdur = strtoul(optarg, NULL, 0); break; case 'v': if (vtc_verbosity < 2) vtc_verbosity++; break; default: usage(); } } argc -= optind; argv += optind; if (argc < 1) usage(); for (; argc > 0; argc--, argv++) { if (!read_file(*argv, ntest)) continue; if (!vtc_continue) exit(2); } AZ(VSB_finish(params_vsb)); ip_magic(); if (iflg) i_mode(); vb = VEV_New(); if (use_cleaner) cleaner_setup(); i = 0; while (!VTAILQ_EMPTY(&tst_head) || i) { if (!VTAILQ_EMPTY(&tst_head) && njob < npar) { start_test(); njob++; /* Stagger ramp-up */ if (nstart++ < npar) (void)usleep(random() % 100000L); i = 1; continue; } i = VEV_Once(vb); } cleaner_finish(); (void)close(bad_backend_fd); if (vtc_continue) fprintf(stderr, "%d tests failed, %d tests skipped, %d tests passed\n", vtc_fail, vtc_skip, vtc_good); if (vtc_fail) return (1); if (vtc_skip && !vtc_good) return (77); return (0); } varnish-7.5.0/bin/varnishtest/vtc_misc.c000066400000000000000000000354511457605730600203170ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include #include #include #include #include #include #include #include #include #include #ifdef HAVE_SYS_PERSONALITY_H # include #endif #include "vtc.h" #include "vfil.h" #include "vnum.h" #include "vre.h" #include "vtcp.h" #include "vsa.h" #include "vss.h" #include "vtim.h" #include "vus.h" /* SECTION: vtest vtest * * This should be the first command in your vtc as it will identify the test * case with a short yet descriptive sentence. 
It takes exactly one argument, a * string, eg:: * * vtest "Check that vtest is actually a valid command" * * It will also print that string in the log. */ void v_matchproto_(cmd_f) cmd_vtest(CMD_ARGS) { (void)priv; (void)vl; if (av == NULL) return; AZ(strcmp(av[0], "vtest")); vtc_log(vl, 1, "VTEST %s", av[1]); AZ(av[2]); } /* SECTION: varnishtest varnishtest * * Alternate name for 'vtest', see above. * */ void v_matchproto_(cmd_f) cmd_varnishtest(CMD_ARGS) { (void)priv; (void)vl; if (av == NULL) return; AZ(strcmp(av[0], "varnishtest")); vtc_log(vl, 1, "VTEST %s", av[1]); AZ(av[2]); } /* SECTION: shell shell * * NOTE: This command is available everywhere commands are given. * * Pass the string given as argument to a shell. If you have multiple * commands to run, you can use curly brackets to describe a multi-lines * script, eg:: * * shell { * echo begin * cat /etc/fstab * echo end * } * * By default a zero exit code is expected, otherwise the vtc will fail. * * Notice that the commandstring is prefixed with "exec 2>&1;" to combine * stderr and stdout back to the test process. * * Optional arguments: * * \-err * Expect non-zero exit code. * * \-exit N * Expect exit code N instead of zero. * * \-expect STRING * Expect string to be found in stdout+err. * * \-match REGEXP * Expect regexp to match the stdout+err output. */ /* SECTION: client-server.spec.shell * * shell * Same as for the top-level shell. */ static void cmd_shell_engine(struct vtclog *vl, int ok, const char *cmd, const char *expect, const char *re) { struct vsb *vsb, re_vsb[1]; FILE *fp; vre_t *vre = NULL; int r, c; int err, erroff; char errbuf[VRE_ERROR_LEN]; AN(vl); AN(cmd); vsb = VSB_new_auto(); AN(vsb); if (re != NULL) { vre = VRE_compile(re, 0, &err, &erroff, 1); if (vre == NULL) { AN(VSB_init(re_vsb, errbuf, sizeof errbuf)); AZ(VRE_error(re_vsb, err)); AZ(VSB_finish(re_vsb)); VSB_fini(re_vsb); vtc_fatal(vl, "shell_match invalid regexp (\"%s\" at %d)", errbuf, erroff); } } VSB_printf(vsb, "exec 2>&1 ; %s", cmd); AZ(VSB_finish(vsb)); vtc_dump(vl, 4, "shell_cmd", VSB_data(vsb), -1); fp = popen(VSB_data(vsb), "r"); if (fp == NULL) vtc_fatal(vl, "popen fails: %s", strerror(errno)); VSB_clear(vsb); do { c = getc(fp); if (c != EOF) VSB_putc(vsb, c); } while (c != EOF); r = pclose(fp); AZ(VSB_finish(vsb)); vtc_dump(vl, 4, "shell_out", VSB_data(vsb), VSB_len(vsb)); vtc_log(vl, 4, "shell_status = 0x%04x", WEXITSTATUS(r)); if (WIFSIGNALED(r)) vtc_log(vl, 4, "shell_signal = %d", WTERMSIG(r)); if (ok < 0 && !WEXITSTATUS(r) && !WIFSIGNALED(r)) vtc_fatal(vl, "shell did not fail as expected"); else if (ok >= 0 && WEXITSTATUS(r) != ok) vtc_fatal(vl, "shell_exit not as expected: " "got 0x%04x wanted 0x%04x", WEXITSTATUS(r), ok); if (expect != NULL) { if (strstr(VSB_data(vsb), expect) == NULL) vtc_fatal(vl, "shell_expect not found: (\"%s\")", expect); else vtc_log(vl, 4, "shell_expect found"); } else if (vre != NULL) { if (VRE_match(vre, VSB_data(vsb), VSB_len(vsb), 0, NULL) < 1) vtc_fatal(vl, "shell_match failed: (\"%s\")", re); else vtc_log(vl, 4, "shell_match succeeded"); VRE_free(&vre); } VSB_destroy(&vsb); } void cmd_shell(CMD_ARGS) { const char *expect = NULL; const char *re = NULL; int n; int ok = 0; (void)priv; if (av == NULL) return; for (n = 1; av[n] != NULL; n++) { if (!strcmp(av[n], "-err")) { ok = -1; } else if (!strcmp(av[n], "-exit")) { n += 1; ok = atoi(av[n]); } else if (!strcmp(av[n], "-expect")) { if (re != NULL) vtc_fatal(vl, "Cannot use -expect with -match"); n += 1; expect = av[n]; } else if (!strcmp(av[n], "-match")) { 
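			/* -match mirrors -expect above; the two options are mutually exclusive. */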
if (expect != NULL) vtc_fatal(vl, "Cannot use -match with -expect"); n += 1; re = av[n]; } else { break; } } AN(av[n]); cmd_shell_engine(vl, ok, av[n], expect, re); } /* SECTION: filewrite filewrite * * Write strings to file * * filewrite [-a] /somefile "Hello" " " "World\n" * * The -a flag opens the file in append mode. * */ void v_matchproto_(cmd_f) cmd_filewrite(CMD_ARGS) { FILE *fo; int n; const char *mode = "w"; (void)priv; if (av == NULL) return; if (av[1] != NULL && !strcmp(av[1], "-a")) { av++; mode = "a"; } if (av[1] == NULL) vtc_fatal(vl, "Need filename"); fo = fopen(av[1], mode); if (fo == NULL) vtc_fatal(vl, "Cannot open %s: %s", av[1], strerror(errno)); for (n = 2; av[n] != NULL; n++) (void)fputs(av[n], fo); AZ(fclose(fo)); } /* SECTION: setenv setenv * * Set or change an environment variable:: * * setenv FOO "bar baz" * * The above will set the environment variable $FOO to the value * provided. There is also an ``-ifunset`` argument which will only * set the value if the the environment variable does not already * exist:: * * setenv -ifunset FOO quux */ void v_matchproto_(cmd_f) cmd_setenv(CMD_ARGS) { int r; int force; (void)priv; if (av == NULL) return; AN(av[1]); AN(av[2]); force = 1; if (strcmp("-ifunset", av[1]) == 0) { force = 0; av++; AN(av[2]); } if (av[3] != NULL) vtc_fatal(vl, "CMD setenv: Unexpected argument '%s'", av[3]); r = setenv(av[1], av[2], force); if (r != 0) vtc_log(vl, 0, "CMD setenv %s=\"%s\" failed: %s", av[1], av[2], strerror(errno)); } /* SECTION: delay delay * * NOTE: This command is available everywhere commands are given. * * Sleep for the number of seconds specified in the argument. The number * can include a fractional part, e.g. 1.5. */ void cmd_delay(CMD_ARGS) { double f; (void)priv; if (av == NULL) return; AN(av[1]); AZ(av[2]); f = VNUM(av[1]); if (isnan(f)) vtc_fatal(vl, "Syntax error in number (%s)", av[1]); vtc_log(vl, 3, "delaying %g second(s)", f); VTIM_sleep(f); } /* SECTION: include include * * Executes a vtc fragment:: * * include FILE [...] * * Open a file and execute it as a VTC fragment. This command is available * everywhere commands are given. * */ void cmd_include(CMD_ARGS) { char *spec; unsigned i; if (av == NULL) return; if (av[1] == NULL) vtc_fatal(vl, "CMD include: At least 1 argument required"); for (i = 1; av[i] != NULL; i++) { spec = VFIL_readfile(NULL, av[i], NULL); if (spec == NULL) vtc_fatal(vl, "CMD include: Unable to read file '%s' " "(%s)", av[i], strerror(errno)); vtc_log(vl, 2, "Begin include '%s'", av[i]); parse_string(vl, priv, spec); vtc_log(vl, 2, "End include '%s'", av[i]); free(spec); } } /********************************************************************** * Most test-cases use only numeric IP#'s but a few requires non-demented * DNS services. This is a basic sanity check for those. 
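 * The check below resolves dns-canary.varnish-cache.org and only treats DNS
 * as working if it yields exactly 192.0.2.255 over IPv4 and nothing over IPv6.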
*/ static int dns_works(void) { const struct suckaddr *sa; char abuf[VTCP_ADDRBUFSIZE]; char pbuf[VTCP_PORTBUFSIZE]; sa = VSS_ResolveOne(NULL, "dns-canary.varnish-cache.org", NULL, AF_INET, SOCK_STREAM, 0); if (sa == NULL) return (0); VTCP_name(sa, abuf, sizeof abuf, pbuf, sizeof pbuf); VSA_free(&sa); if (strcmp(abuf, "192.0.2.255")) return (0); sa = VSS_ResolveOne(NULL, "dns-canary.varnish-cache.org", NULL, AF_INET6, SOCK_STREAM, 0); if (sa == NULL) return (1); /* the canary is ipv4 only */ VSA_free(&sa); return (0); } /********************************************************************** * Test if IPv4/IPv6 works */ static int ipvx_works(const char *target) { const struct suckaddr *sa; int fd; sa = VSS_ResolveOne(NULL, target, "0", 0, SOCK_STREAM, 0); if (sa == NULL) return (0); fd = VTCP_bind(sa, NULL); VSA_free(&sa); if (fd >= 0) { VTCP_close(&fd); return (1); } return (0); } /**********************************************************************/ static int addr_no_randomize_works(void) { int r = 0; #ifdef HAVE_SYS_PERSONALITY_H r = personality(0xffffffff); r = personality(r | ADDR_NO_RANDOMIZE); #endif return (r >= 0); } /**********************************************************************/ static int uds_socket(void *priv, const struct sockaddr_un *uds) { return (VUS_bind(uds, priv)); } static int abstract_uds_works(void) { const char *err; int fd; fd = VUS_resolver("@vtc.feature.abstract_uds", uds_socket, NULL, &err); if (fd < 0) return (0); AZ(close(fd)); return (1); } /* SECTION: feature feature * * Test that the required feature(s) for a test are available, and skip * the test otherwise; or change the interpretation of the test, as * documented below. feature takes any number of arguments from this list: * * 64bit * The environment is 64 bits * ipv4 * 127.0.0.1 works * ipv6 * [::1] works * dns * DNS lookups are working * topbuild * The test has been started with '-i' * root * The test has been invoked by the root user * user_varnish * The varnish user is present * user_vcache * The vcache user is present * group_varnish * The varnish group is present * cmd * A command line that should execute with a zero exit status * ignore_unknown_macro * Do not fail the test if a string of the form ${...} is not * recognized as a macro. * persistent_storage * Varnish was built with the deprecated persistent storage. * coverage * Varnish was built with code coverage enabled. * asan * Varnish was built with the address sanitizer. * msan * Varnish was built with the memory sanitizer. * tsan * Varnish was built with the thread sanitizer. * ubsan * Varnish was built with the undefined behavior sanitizer. * sanitizer * Varnish was built with a sanitizer. * workspace_emulator * Varnish was built with its workspace emulator. * abstract_uds * Creation of an abstract unix domain socket succeeded. * disable_aslr * ASLR can be disabled. * * A feature name can be prefixed with an exclamation mark (!) to skip a * test if the feature is present. * * Be careful with ignore_unknown_macro, because it may cause a test with a * misspelled macro to fail silently. You should only need it if you must * run a test with strings of the form "${...}". 
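 *
 * For illustration, a minimal fragment which skips the test unless both
 * IPv6 and DNS lookups are available would be::
 *
 *	feature ipv6 dns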
*/ #if ENABLE_COVERAGE static const unsigned coverage = 1; #else static const unsigned coverage = 0; #endif #if ENABLE_ASAN static const unsigned asan = 1; #else static const unsigned asan = 0; #endif #if ENABLE_MSAN static const unsigned msan = 1; #else static const unsigned msan = 0; #endif #if ENABLE_TSAN static const unsigned tsan = 1; #else static const unsigned tsan = 0; #endif #if ENABLE_UBSAN static const unsigned ubsan = 1; #else static const unsigned ubsan = 0; #endif #if ENABLE_SANITIZER static const unsigned sanitizer = 1; #else static const unsigned sanitizer = 0; #endif #if ENABLE_WORKSPACE_EMULATOR static const unsigned workspace_emulator = 1; #else static const unsigned workspace_emulator = 0; #endif #if WITH_PERSISTENT_STORAGE static const unsigned with_persistent_storage = 1; #else static const unsigned with_persistent_storage = 0; #endif void v_matchproto_(cmd_f) cmd_feature(CMD_ARGS) { const char *feat; int r, good, skip, neg; (void)priv; if (av == NULL) return; #define FEATURE(nm, tst) \ do { \ if (!strcmp(feat, nm)) { \ good = 1; \ if (tst) { \ skip = neg; \ } else { \ skip = !neg; \ } \ } \ } while (0) skip = 0; for (av++; *av != NULL; av++) { good = 0; neg = 0; feat = *av; if (feat[0] == '!') { neg = 1; feat++; } FEATURE("ipv4", ipvx_works("127.0.0.1")); FEATURE("ipv6", ipvx_works("[::1]")); FEATURE("64bit", sizeof(void*) == 8); FEATURE("disable_aslr", addr_no_randomize_works()); FEATURE("dns", dns_works()); FEATURE("topbuild", iflg); FEATURE("root", !geteuid()); FEATURE("user_varnish", getpwnam("varnish") != NULL); FEATURE("user_vcache", getpwnam("vcache") != NULL); FEATURE("group_varnish", getgrnam("varnish") != NULL); FEATURE("persistent_storage", with_persistent_storage); FEATURE("coverage", coverage); FEATURE("asan", asan); FEATURE("msan", msan); FEATURE("tsan", tsan); FEATURE("ubsan", ubsan); FEATURE("sanitizer", sanitizer); FEATURE("workspace_emulator", workspace_emulator); FEATURE("abstract_uds", abstract_uds_works()); if (!strcmp(feat, "cmd")) { good = 1; skip = neg; av++; if (*av == NULL) vtc_fatal(vl, "Missing the command-line"); r = system(*av); if (WEXITSTATUS(r) != 0) skip = !neg; } else if (!strcmp(feat, "ignore_unknown_macro")) { ign_unknown_macro = 1; good = 1; } if (!good) vtc_fatal(vl, "FAIL test, unknown feature: %s", feat); if (!skip) continue; vtc_stop = 2; if (neg) vtc_log(vl, 1, "SKIPPING test, conflicting feature: %s", feat); else vtc_log(vl, 1, "SKIPPING test, lacking feature: %s", feat); return; } } varnish-7.5.0/bin/varnishtest/vtc_process.c000066400000000000000000000704001457605730600210330ustar00rootroot00000000000000/*- * Copyright (c) 2015 Varnish Software AS * All rights reserved. * * Author: Dridi Boukelmoune * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * XXX: * -ignore-stderr (otherwise output to stderr is fail) */ #include "config.h" #include // Linux: struct winsize #include #include #include #include #include #include #include #ifdef __sun # include #endif #include #include #include "vtc.h" #include "vre.h" #include "vev.h" #include "vlu.h" #include "vsb.h" #include "vsub.h" #include "teken.h" struct process { unsigned magic; #define PROCESS_MAGIC 0x1617b43e char *name; struct vtclog *vl; VTAILQ_ENTRY(process) list; char *spec; char *dir; char *out; char *err; int fd_term; int fd_stderr; int f_stdout; int f_stderr; struct vlu *vlu_stdout; struct vlu *vlu_stderr; int log; pid_t pid; int expect_exit; int expect_signal; int allow_core; uintmax_t stdout_bytes; uintmax_t stderr_bytes; pthread_mutex_t mtx; pthread_t tp; unsigned hasthread; int nlin; int ncol; int ansi_response; char **vram; teken_t tek[1]; }; static VTAILQ_HEAD(, process) processes = VTAILQ_HEAD_INITIALIZER(processes); static void term_resize(struct process *pp, int lin, int col); /********************************************************************** * Terminal emulation */ static void term_cursor(void *priv, const teken_pos_t *pos) { (void)priv; (void)pos; } static void term_putchar(void *priv, const teken_pos_t *pos, teken_char_t ch, const teken_attr_t *at) { struct process *pp; CAST_OBJ_NOTNULL(pp, priv, PROCESS_MAGIC); (void)at; if (ch > 126 || ch < 32) ch = '?'; assert(pos->tp_row < pp->nlin); assert(pos->tp_col < pp->ncol); pp->vram[pos->tp_row][pos->tp_col] = ch; } static void term_fill(void *priv, const teken_rect_t *r, teken_char_t c, const teken_attr_t *a) { teken_pos_t p; /* Braindead implementation of fill() - just call putchar(). */ for (p.tp_row = r->tr_begin.tp_row; p.tp_row < r->tr_end.tp_row; p.tp_row++) for (p.tp_col = r->tr_begin.tp_col; p.tp_col < r->tr_end.tp_col; p.tp_col++) term_putchar(priv, &p, c, a); } static void term_copy(void *priv, const teken_rect_t *r, const teken_pos_t *p) { struct process *pp; int nrow, ncol, y; /* Has to be signed - >= 0 comparison */ /* * Copying is a little tricky. We must make sure we do it in * correct order, to make sure we don't overwrite our own data. */ CAST_OBJ_NOTNULL(pp, priv, PROCESS_MAGIC); nrow = r->tr_end.tp_row - r->tr_begin.tp_row; ncol = r->tr_end.tp_col - r->tr_begin.tp_col; if (p->tp_row < r->tr_begin.tp_row) { /* Copy from top to bottom. */ for (y = 0; y < nrow; y++) memmove(&pp->vram[p->tp_row + y][p->tp_col], &pp->vram[r->tr_begin.tp_row + y][r->tr_begin.tp_col], ncol); } else { /* Copy from bottom to top. 
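		 * The destination starts at or below the source rectangle,
		 * so walk the rows last-to-first to read each source row
		 * before it is overwritten.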
*/ for (y = nrow - 1; y >= 0; y--) memmove(&pp->vram[p->tp_row + y][p->tp_col], &pp->vram[r->tr_begin.tp_row + y][r->tr_begin.tp_col], ncol); } } static void term_respond(void *priv, const void *p, size_t l) { struct process *pp; int r; CAST_OBJ_NOTNULL(pp, priv, PROCESS_MAGIC); vtc_dump(pp->vl, 4, "term_response", p, l); if (pp->ansi_response) { r = write(pp->fd_term, p, l); if (r != l) vtc_fatal(pp->vl, "Could not write to process: %s", strerror(errno)); } } static void term_param(void *priv, int p, unsigned int v) { struct process *pp; CAST_OBJ_NOTNULL(pp, priv, PROCESS_MAGIC); if (p == TP_132COLS && v) term_resize(pp, pp->nlin, 132); if (p == TP_132COLS && !v) term_resize(pp, pp->nlin, 80); } static const teken_funcs_t process_teken_func = { .tf_cursor = term_cursor, .tf_putchar = term_putchar, .tf_fill = term_fill, .tf_copy = term_copy, .tf_respond = term_respond, .tf_param = term_param, }; static void term_screen_dump(const struct process *pp) { int i; const teken_pos_t *pos; for (i = 0; i < pp->nlin; i++) vtc_dump(pp->vl, 3, "screen", pp->vram[i], pp->ncol); pos = teken_get_cursor(pp->tek); vtc_log(pp->vl, 3, "Cursor at line %d column %d", pos->tp_row + 1, pos->tp_col + 1); } static void term_resize(struct process *pp, int lin, int col) { teken_pos_t pos; char **vram; int i, j; vram = calloc(lin, sizeof *pp->vram); AN(vram); for (i = 0; i < lin; i++) { vram[i] = calloc(col + 1L, 1); AN(vram[i]); memset(vram[i], ' ', col); vram[i][col] = '\0'; } if (pp->vram != NULL) { for (i = 0; i < lin; i++) { if (i >= pp->nlin) break; j = col; if (j > pp->ncol) j = pp->ncol; memcpy(vram[i], pp->vram[i], j); } for (i = 0; i < pp->nlin; i++) free(pp->vram[i]); free(pp->vram); } pp->vram = vram; pp->nlin = lin; pp->ncol = col; pos.tp_row = lin; pos.tp_col = col; teken_set_winsize(pp->tek, &pos); } static int term_find_textline(const struct process *pp, int *x, int y, const char *pat) { const char *t; int l; if (*x == 0) { t = strstr(pp->vram[y], pat); if (t != NULL) { *x = 1 + (t - pp->vram[y]); return (1); } } else if (*x <= pp->ncol) { t = pp->vram[y] + *x - 1; l = strlen(pat); assert((*x - 1) + (l - 1) < pp->ncol); if (!memcmp(t, pat, l)) return (1); } return (0); } static int term_find_text(const struct process *pp, int *x, int *y, const char *pat) { int yy; if (*y == 0) { for (yy = 0; yy < pp->nlin; yy++) { if (term_find_textline(pp, x, yy, pat)) { *y = yy + 1; return (1); } } } else if (*y <= pp->nlin) { if (term_find_textline(pp, x, *y - 1, pat)) return (1); } return (0); } static void term_expect_text(struct process *pp, const char *lin, const char *col, const char *pat) { int x, y, l, d = 10000; char *t; y = strtoul(lin, NULL, 0); if (y < 0 || y > pp->nlin) vtc_fatal(pp->vl, "YYY %d nlin %d", y, pp->nlin); x = strtoul(col, NULL, 0); for(l = 0; l <= 10 && x > pp->ncol; l++) // wait for screen change usleep(100000); if (x < 0 || x > pp->ncol) vtc_fatal(pp->vl, "XXX %d ncol %d", x, pp->ncol); l = strlen(pat); if (x + l - 1 > pp->ncol) vtc_fatal(pp->vl, "XXX %d ncol %d", x + l - 1, pp->ncol); PTOK(pthread_mutex_lock(&pp->mtx)); while (!term_find_text(pp, &x, &y, pat)) { if (x != 0 && y != 0) { t = pp->vram[y - 1] + x - 1; vtc_log(pp->vl, 4, "text at %d,%d: '%.*s'", y, x, l, t); } PTOK(pthread_mutex_unlock(&pp->mtx)); usleep(d); PTOK(pthread_mutex_lock(&pp->mtx)); if (d < 3000000) d += d; } PTOK(pthread_mutex_unlock(&pp->mtx)); vtc_log(pp->vl, 4, "found expected text at %d,%d: '%s'", y, x, pat); } static void term_expect_cursor(const struct process *pp, const char *lin, const char *col) { int x, 
y, l; const teken_pos_t *pos; pos = teken_get_cursor(pp->tek); y = strtoul(lin, NULL, 0); if (y < 0 || y > pp->nlin) vtc_fatal(pp->vl, "YYY %d nlin %d", y, pp->nlin); x = strtoul(col, NULL, 0); for(l = 0; l < 10 && x > pp->ncol; l++) // wait for screen change usleep(100000); if (x < 0 || x > pp->ncol) vtc_fatal(pp->vl, "XXX %d ncol %d", x, pp->ncol); if (y != 0 && (y-1) != pos->tp_row) vtc_fatal(pp->vl, "Cursor on line %d (expected %d)", pos->tp_row + 1, y); if (x != 0 && (x-1) != pos->tp_col) vtc_fatal(pp->vl, "Cursor in column %d (expected %d)", pos->tp_col + 1, y); } static void term_match_text(struct process *pp, const char *lin, const char *col, const char *re) { int i, l, err, erroff; struct vsb *vsb, re_vsb[1]; size_t len; ssize_t x, y; vre_t *vre; char errbuf[VRE_ERROR_LEN]; vsb = VSB_new_auto(); AN(vsb); y = strtoul(lin, NULL, 0); if (y < 0 || y > pp->nlin) vtc_fatal(pp->vl, "YYY %zd nlin %d", y, pp->nlin); x = strtoul(col, NULL, 0); for(l = 0; l < 10 && x > pp->ncol; l++) // wait for screen change usleep(100000); if (x < 0 || x > pp->ncol) vtc_fatal(pp->vl, "XXX %zd ncol %d", x, pp->ncol); if (x) x--; if (y) y--; vre = VRE_compile(re, 0, &err, &erroff, 1); if (vre == NULL) { AN(VSB_init(re_vsb, errbuf, sizeof errbuf)); AZ(VRE_error(re_vsb, err)); AZ(VSB_finish(re_vsb)); VSB_fini(re_vsb); vtc_fatal(pp->vl, "invalid regexp \"%s\" at %d (%s)", re, erroff, errbuf); } PTOK(pthread_mutex_lock(&pp->mtx)); len = (pp->nlin - y) * (pp->ncol - x); for (i = y; i < pp->nlin; i++) { VSB_bcat(vsb, &pp->vram[i][x], pp->ncol - x); VSB_putc(vsb, '\n'); } AZ(VSB_finish(vsb)); if (VRE_match(vre, VSB_data(vsb), len, 0, NULL) < 1) vtc_fatal(pp->vl, "match failed: (\"%s\")", re); else vtc_log(pp->vl, 4, "match succeeded"); PTOK(pthread_mutex_unlock(&pp->mtx)); VSB_destroy(&vsb); VRE_free(&vre); } /********************************************************************** * Allocate and initialize a process */ #define PROCESS_EXPAND(field, format, ...) 
\ do { \ vsb = macro_expandf(p->vl, format, __VA_ARGS__); \ AN(vsb); \ p->field = strdup(VSB_data(vsb)); \ AN(p->field); \ VSB_destroy(&vsb); \ } while (0) static void process_coverage(struct process *p) { const teken_attr_t *a; teken_pos_t pos; int fg, bg; // Code-Coverage of Teken (void)teken_get_sequence(p->tek, TKEY_UP); (void)teken_get_sequence(p->tek, TKEY_F1); (void)teken_256to8(0); (void)teken_256to16(0); a = teken_get_defattr(p->tek); teken_set_defattr(p->tek, a); a = teken_get_curattr(p->tek); teken_set_curattr(p->tek, a); (void)teken_get_winsize(p->tek); pos.tp_row = 0; pos.tp_col = 8; teken_set_cursor(p->tek, &pos); teken_get_defattr_cons25(p->tek, &fg, &bg); } static struct process * process_new(const char *name) { struct process *p; struct vsb *vsb; char buf[1024]; ALLOC_OBJ(p, PROCESS_MAGIC); AN(p); REPLACE(p->name, name); PTOK(pthread_mutex_init(&p->mtx, NULL)); p->vl = vtc_logopen("%s", name); AN(p->vl); PROCESS_EXPAND(dir, "${tmpdir}/%s", name); PROCESS_EXPAND(out, "${tmpdir}/%s/term", name); PROCESS_EXPAND(err, "${tmpdir}/%s/stderr", name); bprintf(buf, "rm -rf %s ; mkdir -p %s ; touch %s %s", p->dir, p->dir, p->out, p->err); AZ(system(buf)); p->fd_term = -1; VTAILQ_INSERT_TAIL(&processes, p, list); teken_init(p->tek, &process_teken_func, p); term_resize(p, 24, 80); process_coverage(p); return (p); } #undef PROCESS_EXPAND /********************************************************************** * Clean up process */ static void process_delete(struct process *p) { int i; CHECK_OBJ_NOTNULL(p, PROCESS_MAGIC); PTOK(pthread_mutex_destroy(&p->mtx)); vtc_logclose(p->vl); free(p->name); free(p->dir); free(p->out); free(p->err); for (i = 0; i < p->nlin; i++) free(p->vram[i]); free(p->vram); /* * We do not delete the directory, it may contain useful stdout * and stderr files. They will be deleted on account of belonging * to the test's tmpdir. */ /* XXX: MEMLEAK (?) 
*/ FREE_OBJ(p); } static void process_undef(const struct process *p) { CHECK_OBJ_NOTNULL(p, PROCESS_MAGIC); macro_undef(p->vl, p->name, "dir"); macro_undef(p->vl, p->name, "out"); macro_undef(p->vl, p->name, "err"); } /********************************************************************** * Data stream handling */ static int process_vlu_func(void *priv, const char *l) { struct process *p; CAST_OBJ_NOTNULL(p, priv, PROCESS_MAGIC); vtc_dump(p->vl, 4, "output", l, -1); return (0); } static int v_matchproto_(vev_cb_f) process_stdout(const struct vev *ev, int what) { struct process *p; char buf[BUFSIZ]; int i; CAST_OBJ_NOTNULL(p, ev->priv, PROCESS_MAGIC); (void)what; i = read(p->fd_term, buf, sizeof buf); if (i <= 0) { vtc_log(p->vl, 4, "stdout read %d", i); return (1); } PTOK(pthread_mutex_lock(&p->mtx)); p->stdout_bytes += i; PTOK(pthread_mutex_unlock(&p->mtx)); if (p->log == 1) (void)VLU_Feed(p->vlu_stdout, buf, i); else if (p->log == 2) vtc_dump(p->vl, 4, "stdout", buf, i); else if (p->log == 3) vtc_hexdump(p->vl, 4, "stdout", buf, i); assert(write(p->f_stdout, buf, i) == i); PTOK(pthread_mutex_lock(&p->mtx)); teken_input(p->tek, buf, i); PTOK(pthread_mutex_unlock(&p->mtx)); return (0); } static int v_matchproto_(vev_cb_f) process_stderr(const struct vev *ev, int what) { struct process *p; char buf[BUFSIZ]; int i; CAST_OBJ_NOTNULL(p, ev->priv, PROCESS_MAGIC); (void)what; i = read(p->fd_stderr, buf, sizeof buf); if (i <= 0) { vtc_log(p->vl, 4, "stderr read %d", i); return (1); } PTOK(pthread_mutex_lock(&p->mtx)); p->stderr_bytes += i; PTOK(pthread_mutex_unlock(&p->mtx)); vtc_dump(p->vl, 4, "stderr", buf, i); assert(write(p->f_stderr, buf, i) == i); return (0); } static void process_cleanup(void *priv) { struct vev_root *evb = priv; VEV_Destroy(&evb); } static void * process_thread(void *priv) { struct process *p; struct vev_root *evb; struct vev *ev; int r; CAST_OBJ_NOTNULL(p, priv, PROCESS_MAGIC); p->f_stdout = open(p->out, O_WRONLY|O_APPEND); assert(p->f_stdout >= 0); p->f_stderr = open(p->err, O_WRONLY|O_APPEND); assert(p->f_stderr >= 0); evb = VEV_New(); AN(evb); pthread_cleanup_push(process_cleanup, evb); ev = VEV_Alloc(); AN(ev); ev->fd = p->fd_term; ev->fd_flags = VEV__RD | VEV__HUP | VEV__ERR; ev->callback = process_stdout; ev->priv = p; AZ(VEV_Start(evb, ev)); ev = VEV_Alloc(); AN(ev); ev->fd = p->fd_stderr; ev->fd_flags = VEV__RD | VEV__HUP | VEV__ERR; ev->callback = process_stderr; ev->priv = p; AZ(VEV_Start(evb, ev)); if (p->log == 1) { p->vlu_stdout = VLU_New(process_vlu_func, p, 1024); AN(p->vlu_stdout); p->vlu_stderr = VLU_New(process_vlu_func, p, 1024); AN(p->vlu_stderr); } do { r = VEV_Once(evb); } while (r == 1); if (r < 0) vtc_fatal(p->vl, "VEV_Once() = %d, error %s", r, strerror(errno)); vtc_wait4(p->vl, p->pid, p->expect_exit, p->expect_signal, p->allow_core); closefd(&p->f_stdout); closefd(&p->f_stderr); PTOK(pthread_mutex_lock(&p->mtx)); /* NB: We keep the other macros around */ macro_undef(p->vl, p->name, "pid"); p->pid = -1; PTOK(pthread_mutex_unlock(&p->mtx)); pthread_cleanup_pop(0); VEV_Destroy(&evb); if (p->log == 1) { VLU_Destroy(&p->vlu_stdout); VLU_Destroy(&p->vlu_stderr); } return (NULL); } static void process_winsz(struct process *p, int fd) { struct winsize ws; int i; memset(&ws, 0, sizeof ws); ws.ws_row = (short)p->nlin; ws.ws_col = (short)p->ncol; i = ioctl(fd, TIOCSWINSZ, &ws); if (i) vtc_log(p->vl, 4, "TIOCWINSZ %d %s", i, strerror(errno)); } static void process_init_term(struct process *p, int fd) { struct termios tt; int i; process_winsz(p, fd); 
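	/*
	 * Give the pty slave a conventional cooked-mode setup: canonical
	 * input with echo and signal handling, CR/NL translation, 9600 baud
	 * and the usual control characters (^D, ^H, ^U, ^C, ^\).
	 */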
memset(&tt, 0, sizeof tt); tt.c_cflag = CREAD | CS8 | HUPCL; tt.c_iflag = BRKINT | ICRNL | IMAXBEL | IXON | IXANY; tt.c_lflag = ICANON | ISIG | IEXTEN | ECHO | ECHOE | ECHOKE | ECHOCTL; tt.c_oflag = OPOST | ONLCR; i = cfsetispeed(&tt, B9600); if (i) vtc_log(p->vl, 4, "cfsetispeed %d %s", i, strerror(errno)); i = cfsetospeed(&tt, B9600); if (i) vtc_log(p->vl, 4, "cfsetospeed %d %s", i, strerror(errno)); tt.c_cc[VEOF] = '\x04'; // CTRL-D tt.c_cc[VERASE] = '\x08'; // CTRL-H (Backspace) tt.c_cc[VKILL] = '\x15'; // CTRL-U tt.c_cc[VINTR] = '\x03'; // CTRL-C tt.c_cc[VQUIT] = '\x1c'; // CTRL-backslash i = tcsetattr(fd, TCSAFLUSH, &tt); if (i) vtc_log(p->vl, 4, "TCSAFLUSH %d %s", i, strerror(errno)); } /********************************************************************** * Start the process thread */ static void process_start(struct process *p) { struct vsb *cl; int fd2[2]; int master, slave; const char *slavename; char c; CHECK_OBJ_NOTNULL(p, PROCESS_MAGIC); if (p->hasthread) vtc_fatal(p->vl, "Already running, -wait first"); vtc_log(p->vl, 4, "CMD: %s", p->spec); cl = macro_expand(p->vl, p->spec); AN(cl); master = posix_openpt(O_RDWR|O_NOCTTY); assert(master >= 0); AZ(grantpt(master)); AZ(unlockpt(master)); slavename = ptsname(master); AN(slavename); AZ(pipe(fd2)); p->pid = fork(); assert(p->pid >= 0); if (p->pid == 0) { assert(setsid() == getpid()); assert(dup2(fd2[1], STDERR_FILENO) == STDERR_FILENO); AZ(close(STDIN_FILENO)); slave = open(slavename, O_RDWR); assert(slave == STDIN_FILENO); #ifdef __sun if (ioctl(slave, I_PUSH, "ptem")) vtc_log(p->vl, 4, "PUSH ptem: %s", strerror(errno)); if (ioctl(slave, I_PUSH, "ldterm")) vtc_log(p->vl, 4, "PUSH ldterm: %s", strerror(errno)); (void)ioctl(STDIN_FILENO, TIOCSCTTY, NULL); #else AZ(ioctl(STDIN_FILENO, TIOCSCTTY, NULL)); #endif AZ(close(STDOUT_FILENO)); assert(dup2(slave, STDOUT_FILENO) == STDOUT_FILENO); VSUB_closefrom(STDERR_FILENO + 1); process_init_term(p, slave); AZ(setenv("TERM", "xterm", 1)); AZ(unsetenv("TERMCAP")); // Not using NULL because GCC is now even more demented... 
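		// The single '+' byte written below lets the parent, blocked
		// in read(fd2[0], ...), know the child reached the exec point.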
assert(write(STDERR_FILENO, "+", 1) == 1); AZ(execl("/bin/sh", "/bin/sh", "-c", VSB_data(cl), (char*)0)); exit(1); } vtc_log(p->vl, 3, "PID: %ld", (long)p->pid); VSB_destroy(&cl); assert(read(fd2[0], &c, 1) == 1); p->fd_term = master; closefd(&fd2[1]); p->fd_stderr = fd2[0]; macro_def(p->vl, p->name, "pid", "%ld", (long)p->pid); macro_def(p->vl, p->name, "dir", "%s", p->dir); macro_def(p->vl, p->name, "out", "%s", p->out); macro_def(p->vl, p->name, "err", "%s", p->err); p->hasthread = 1; PTOK(pthread_create(&p->tp, NULL, process_thread, p)); } /********************************************************************** * Wait for process thread to stop */ static void process_wait(struct process *p) { void *v; if (p->hasthread) { PTOK(pthread_join(p->tp, &v)); p->hasthread = 0; } vtc_log(p->vl, 4, "stdout %ju bytes, stderr %ju bytes", p->stdout_bytes, p->stderr_bytes); } /********************************************************************** * Send a signal to a process */ static void process_kill(struct process *p, const char *sig) { int j = 0; pid_t pid; CHECK_OBJ_NOTNULL(p, PROCESS_MAGIC); AN(sig); PTOK(pthread_mutex_lock(&p->mtx)); pid = p->pid; PTOK(pthread_mutex_unlock(&p->mtx)); if (pid <= 0) vtc_fatal(p->vl, "Cannot signal a non-running process"); if (!strcmp(sig, "TERM")) j = SIGTERM; else if (!strcmp(sig, "INT")) j = SIGINT; else if (!strcmp(sig, "KILL")) j = SIGKILL; else if (!strcmp(sig, "HUP")) j = SIGHUP; else if (*sig == '-') j = strtoul(sig + 1, NULL, 10); else vtc_fatal(p->vl, "Could not grok signal (%s)", sig); if (p->expect_signal == 0) p->expect_signal = -j; if (kill(-pid, j) < 0) vtc_fatal(p->vl, "Failed to send signal %d (%s)", j, strerror(errno)); else vtc_log(p->vl, 4, "Sent signal %d", j); } /********************************************************************** * Write to a process' stdin */ static void process_write(const struct process *p, const char *text) { int r, len; if (!p->hasthread) vtc_fatal(p->vl, "Cannot write to a non-running process"); len = strlen(text); vtc_log(p->vl, 4, "Writing %d bytes", len); r = write(p->fd_term, text, len); if (r != len) vtc_fatal(p->vl, "Failed to write: len=%d %s (%d)", len, strerror(errno), errno); } static void process_write_hex(const struct process *p, const char *text) { struct vsb *vsb; if (!p->hasthread) vtc_fatal(p->vl, "Cannot write to a non-running process"); vsb = vtc_hex_to_bin(p->vl, text); assert(VSB_len(vsb) >= 0); vtc_hexdump(p->vl, 4, "sendhex", VSB_data(vsb), VSB_len(vsb)); AZ(VSB_tofile(vsb, p->fd_term)); VSB_destroy(&vsb); } static void process_close(struct process *p) { if (!p->hasthread) vtc_fatal(p->vl, "Cannot close a non-running process"); process_kill(p, "HUP"); } /* SECTION: process process * * Run a process with stdin+stdout on a pseudo-terminal and stderr on a pipe. * * Output from the pseudo-terminal is copied verbatim to ${pNAME_out}, * and the -log/-dump/-hexdump flags will also put it in the vtc-log. * * The pseudo-terminal is not in ECHO mode, but if the programs run set * it to ECHO mode ("stty sane") any input sent to the process will also * appear in this stream because of the ECHO. * * Output from the stderr-pipe is copied verbatim to ${pNAME_err}, and * is always included in the vtc_log. 
* * process pNAME SPEC [-allow-core] [-expect-exit N] [-expect-signal N] * [-dump] [-hexdump] [-log] * [-run] [-close] [-kill SIGNAL] [-start] [-stop] [-wait] * [-write STRING] [-writeln STRING] [-writehex HEXSTRING] * [-need-bytes [+]NUMBER] * [-screen-dump] [-winsz LINES COLUMNSS] [-ansi-response] * [-expect-cursor LINE COLUMN] [-expect-text LINE COLUMN TEXT] * [-match-text LINE COLUMN REGEXP] * * pNAME * Name of the process. It must start with 'p'. * * SPEC * The command(s) to run in this process. * * \-hexdump * Log output with vtc_hexdump(). Must be before -start/-run. * * \-dump * Log output with vtc_dump(). Must be before -start/-run. * * \-log * Log output with VLU/vtc_log(). Must be before -start/-run. * * \-start * Start the process. * * \-expect-exit N * Expect exit status N * * \-expect-signal N * Expect signal in exit status N * * \-allow-core * Core dump in exit status is OK * * \-wait * Wait for the process to finish. * * \-run * Shorthand for -start -wait. * * In most cases, if you just want to start a process and wait for it * to finish, you can use the ``shell`` command instead. * The following commands are equivalent:: * * shell "do --something" * * process p1 "do --something" -run * * However, you may use the the ``process`` variant to conveniently * collect the standard input and output without dealing with shell * redirections yourself. The ``shell`` command can also expect an * expression from either output, consider using it if you only need * to match one. * * \-key KEYSYM * Send emulated key-press. * KEYSYM can be one of (NPAGE, PPAGE, HOME, END) * * * \-kill SIGNAL * Send a signal to the process. The argument can be either * the string "TERM", "INT", or "KILL" for SIGTERM, SIGINT or SIGKILL * signals, respectively, or a hyphen (-) followed by the signal * number. * * If you need to use other signal names, you can use the ``kill``\(1) * command directly:: * * shell "kill -USR1 ${pNAME_pid}" * * Note that SIGHUP usage is discouraged in test cases. * * \-stop * Shorthand for -kill TERM. * * \-close * Alias for "-kill HUP" * * \-winsz LINES COLUMNS * Change the terminal window size to LIN lines and COL columns. * * \-write STRING * Write a string to the process' stdin. * * \-writeln STRING * Same as -write followed by a newline (\\n). * * \-writehex HEXSTRING * Same as -write but interpreted as hexadecimal bytes. * * \-need-bytes [+]NUMBER * Wait until at least NUMBER bytes have been received in total. * If '+' is prefixed, NUMBER new bytes must be received. * * \-ansi-response * Respond to terminal respond-back sequences * * \-expect-cursor LINE COLUMN * Expect cursors location * * \-expect-text LINE COLUMNS TEXT * Wait for TEXT to appear at LIN,COL on the virtual screen. * Lines and columns are numbered 1...N * LIN==0 means "on any line" * COL==0 means "anywhere on the line" * * \-match-text LINE COLUMN REGEXP * Wait for the PAT regular expression to match the text at LIN,COL on the virtual screen. 
* Lines and columns are numbered 1...N * LIN==0 means "on any line" * COL==0 means "anywhere on the line" * * * \-screen-dump * Dump the virtual screen into vtc_log * */ void cmd_process(CMD_ARGS) { struct process *p, *p2; uintmax_t u, v, bsnap; unsigned lin,col; int spec_set = 0; (void)priv; if (av == NULL) { /* Reset and free */ VTAILQ_FOREACH_SAFE(p, &processes, list, p2) { if (p->pid > 0) { process_kill(p, "TERM"); sleep(1); if (p->pid > 0) process_kill(p, "KILL"); } if (p->hasthread) process_wait(p); VTAILQ_REMOVE(&processes, p, list); process_undef(p); process_delete(p); } return; } AZ(strcmp(av[0], "process")); av++; VTC_CHECK_NAME(vl, av[0], "Process", 'p'); VTAILQ_FOREACH(p, &processes, list) if (!strcmp(p->name, av[0])) break; if (p == NULL) p = process_new(av[0]); av++; PTOK(pthread_mutex_lock(&p->mtx)); bsnap = p->stdout_bytes; PTOK(pthread_mutex_unlock(&p->mtx)); for (; *av != NULL; av++) { if (vtc_error) break; if (!strcmp(*av, "-allow-core")) { p->allow_core = 1; continue; } if (!strcmp(*av, "-close")) { process_close(p); continue; } if (!strcmp(*av, "-dump")) { if (p->hasthread) vtc_fatal(p->vl, "Cannot dump a running process"); p->log = 2; continue; } if (!strcmp(*av, "-expect-exit")) { p->expect_exit = strtoul(av[1], NULL, 0); av++; continue; } if (!strcmp(*av, "-expect-signal")) { p->expect_signal = strtoul(av[1], NULL, 0); av++; continue; } if (!strcmp(*av, "-hexdump")) { if (p->hasthread) vtc_fatal(p->vl, "Cannot dump a running process"); p->log = 3; continue; } if (!strcmp(*av, "-key")) { if (!strcmp(av[1], "NPAGE")) process_write(p, "\x1b\x5b\x36\x7e"); else if (!strcmp(av[1], "PPAGE")) process_write(p, "\x1b\x5b\x35\x7e"); else if (!strcmp(av[1], "HOME")) process_write(p, "\x1b\x4f\x48"); else if (!strcmp(av[1], "END")) process_write(p, "\x1b\x4f\x46"); else vtc_fatal(p->vl, "Unknown key %s", av[1]); continue; } if (!strcmp(*av, "-kill")) { process_kill(p, av[1]); av++; continue; } if (!strcmp(*av, "-log")) { if (p->hasthread) vtc_fatal(p->vl, "Cannot log a running process"); p->log = 1; continue; } if (!strcmp(*av, "-need-bytes")) { u = strtoumax(av[1], NULL, 0); if (av[1][0] == '+') u += bsnap; av++; do { PTOK(pthread_mutex_lock(&p->mtx)); v = p->stdout_bytes; PTOK(pthread_mutex_unlock(&p->mtx)); vtc_log(p->vl, 4, "Have %ju bytes", v); usleep(500000); } while(v < u); continue; } if (!strcmp(*av, "-run")) { process_start(p); process_wait(p); continue; } if (!strcmp(*av, "-ansi-response")) { p->ansi_response = 1; continue; } if (!strcmp(*av, "-expect-text")) { AN(av[1]); AN(av[2]); AN(av[3]); term_expect_text(p, av[1], av[2], av[3]); av += 3; continue; } if (!strcmp(*av, "-expect-cursor")) { AN(av[1]); AN(av[2]); term_expect_cursor(p, av[1], av[2]); av += 2; continue; } if (!strcmp(*av, "-match-text")) { AN(av[1]); AN(av[2]); AN(av[3]); term_match_text(p, av[1], av[2], av[3]); av += 3; continue; } if (!strcmp(*av, "-screen_dump") || !strcmp(*av, "-screen-dump")) { term_screen_dump(p); continue; } if (!strcmp(*av, "-start")) { process_start(p); continue; } if (!strcmp(*av, "-stop")) { process_kill(p, "TERM"); sleep(1); continue; } if (!strcmp(*av, "-wait")) { process_wait(p); continue; } if (!strcmp(*av, "-winsz")) { lin = atoi(av[1]); assert(lin > 1); col = atoi(av[2]); assert(col > 1); av += 2; PTOK(pthread_mutex_lock(&p->mtx)); term_resize(p, lin, col); PTOK(pthread_mutex_unlock(&p->mtx)); process_winsz(p, p->fd_term); continue; } if (!strcmp(*av, "-write")) { process_write(p, av[1]); av++; continue; } if (!strcmp(*av, "-writehex")) { process_write_hex(p, av[1]); 
av++; continue; } if (!strcmp(*av, "-writeln")) { process_write(p, av[1]); process_write(p, "\n"); av++; continue; } if (**av == '-' || spec_set) vtc_fatal(p->vl, "Unknown process argument: %s", *av); REPLACE(p->spec, *av); spec_set = 1; } } varnish-7.5.0/bin/varnishtest/vtc_proxy.c000066400000000000000000000072411457605730600205410ustar00rootroot00000000000000/*- * Copyright (c) 2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include #include #include #include "vtc.h" #include "vend.h" #include "vsa.h" #include "vtcp.h" static const char vpx1_sig[] = {'P', 'R', 'O', 'X', 'Y'}; static const char vpx2_sig[] = { '\r', '\n', '\r', '\n', '\0', '\r', '\n', 'Q', 'U', 'I', 'T', '\n', }; static void vpx_enc_addr(struct vsb *vsb, int proto, const struct suckaddr *s) { const struct sockaddr_in *sin4; const struct sockaddr_in6 *sin6; socklen_t sl; if (proto == PF_INET6) { sin6 = VSA_Get_Sockaddr(s, &sl); //lint !e826 AN(sin6); assert(sl >= sizeof(*sin6)); VSB_bcat(vsb, &sin6->sin6_addr, sizeof(sin6->sin6_addr)); } else { sin4 = VSA_Get_Sockaddr(s, &sl); //lint !e826 AN(sin4); assert(sl >= sizeof(*sin4)); VSB_bcat(vsb, &sin4->sin_addr, sizeof(sin4->sin_addr)); } } static void vpx_enc_port(struct vsb *vsb, const struct suckaddr *s) { uint8_t b[2]; vbe16enc(b, (uint16_t)VSA_Port(s)); VSB_bcat(vsb, b, sizeof(b)); } int vtc_send_proxy(int fd, int version, const struct suckaddr *sac, const struct suckaddr *sas) { struct vsb *vsb; char hc[VTCP_ADDRBUFSIZE]; char pc[VTCP_PORTBUFSIZE]; char hs[VTCP_ADDRBUFSIZE]; char ps[VTCP_PORTBUFSIZE]; int i; int proto; AN(sac); AN(sas); assert(version == 1 || version == 2); vsb = VSB_new_auto(); AN(vsb); proto = VSA_Get_Proto(sas); assert(proto == PF_INET6 || proto == PF_INET); if (version == 1) { VSB_bcat(vsb, vpx1_sig, sizeof(vpx1_sig)); if (proto == PF_INET6) VSB_cat(vsb, " TCP6 "); else if (proto == PF_INET) VSB_cat(vsb, " TCP4 "); VTCP_name(sac, hc, sizeof(hc), pc, sizeof(pc)); VTCP_name(sas, hs, sizeof(hs), ps, sizeof(ps)); VSB_printf(vsb, "%s %s %s %s\r\n", hc, hs, pc, ps); } else if (version == 2) { VSB_bcat(vsb, vpx2_sig, sizeof(vpx2_sig)); VSB_putc(vsb, 0x21); if (proto == PF_INET6) { VSB_putc(vsb, 0x21); VSB_putc(vsb, 0x00); VSB_putc(vsb, 
0x24); } else if (proto == PF_INET) { VSB_putc(vsb, 0x11); VSB_putc(vsb, 0x00); VSB_putc(vsb, 0x0c); } vpx_enc_addr(vsb, proto, sac); vpx_enc_addr(vsb, proto, sas); vpx_enc_port(vsb, sac); vpx_enc_port(vsb, sas); } else WRONG("Wrong proxy version"); AZ(VSB_finish(vsb)); i = VSB_tofile(vsb, fd); VSB_destroy(&vsb); return (i); } varnish-7.5.0/bin/varnishtest/vtc_server.c000066400000000000000000000313711457605730600206670ustar00rootroot00000000000000/*- * Copyright (c) 2008-2010 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include #include #include #include #include #include #include #include "vsa.h" #include "vtc.h" #include "vtcp.h" #include "vus.h" struct server { unsigned magic; #define SERVER_MAGIC 0x55286619 char *name; struct vtclog *vl; VTAILQ_ENTRY(server) list; struct vtc_sess *vsp; char run; char *spec; int depth; int sock; int fd; unsigned is_dispatch; char listen[256]; char aaddr[VTCP_ADDRBUFSIZE]; char aport[VTCP_PORTBUFSIZE]; pthread_t tp; }; static pthread_mutex_t server_mtx; static VTAILQ_HEAD(, server) servers = VTAILQ_HEAD_INITIALIZER(servers); /********************************************************************** * Allocate and initialize a server */ static struct server * server_new(const char *name, struct vtclog *vl) { struct server *s; VTC_CHECK_NAME(vl, name, "Server", 's'); ALLOC_OBJ(s, SERVER_MAGIC); AN(s); REPLACE(s->name, name); s->vl = vtc_logopen("%s", s->name); AN(s->vl); s->vsp = Sess_New(s->vl, name); AN(s->vsp); bprintf(s->listen, "%s", default_listen_addr); s->depth = 10; s->sock = -1; s->fd = -1; PTOK(pthread_mutex_lock(&server_mtx)); VTAILQ_INSERT_TAIL(&servers, s, list); PTOK(pthread_mutex_unlock(&server_mtx)); return (s); } /********************************************************************** * Clean up a server */ static void server_delete(struct server *s) { CHECK_OBJ_NOTNULL(s, SERVER_MAGIC); Sess_Destroy(&s->vsp); macro_undef(s->vl, s->name, "addr"); macro_undef(s->vl, s->name, "port"); macro_undef(s->vl, s->name, "sock"); vtc_logclose(s->vl); free(s->name); /* XXX: MEMLEAK (?) (VSS ??) 
*/ FREE_OBJ(s); } /********************************************************************** * Server listen */ struct helper { int depth; const char **errp; }; /* cf. VTCP_listen_on() */ static int v_matchproto_(vus_resolved_f) uds_listen(void *priv, const struct sockaddr_un *uds) { int sock, e; struct helper *hp = priv; sock = VUS_bind(uds, hp->errp); if (sock >= 0) { if (listen(sock, hp->depth) != 0) { e = errno; closefd(&sock); errno = e; if (hp->errp != NULL) *hp->errp = "listen(2)"; return (-1); } } if (sock > 0) { *hp->errp = NULL; return (sock); } AN(*hp->errp); return (0); } static void server_listen_uds(struct server *s, const char **errp) { mode_t m; struct helper h; h.depth = s->depth; h.errp = errp; errno = 0; if (unlink(s->listen) != 0 && errno != ENOENT) vtc_fatal(s->vl, "Could not unlink %s before bind: %s", s->listen, strerror(errno)); /* * Temporarily set the umask to 0 to avoid issues with * permissions. */ m = umask(0); s->sock = VUS_resolver(s->listen, uds_listen, &h, errp); (void)umask(m); if (*errp != NULL) return; assert(s->sock > 0); macro_def(s->vl, s->name, "addr", "0.0.0.0"); macro_def(s->vl, s->name, "port", "0"); macro_def(s->vl, s->name, "sock", "%s", s->listen); } static void server_listen_tcp(struct server *s, const char **errp) { char buf[vsa_suckaddr_len]; const struct suckaddr *sua; s->sock = VTCP_listen_on(s->listen, "0", s->depth, errp); if (*errp != NULL) return; assert(s->sock > 0); sua = VSA_getsockname(s->sock, buf, sizeof buf); AN(sua); VTCP_name(sua, s->aaddr, sizeof s->aaddr, s->aport, sizeof s->aport); /* Record the actual port, and reuse it on subsequent starts */ if (VSA_Get_Proto(sua) == AF_INET) bprintf(s->listen, "%s:%s", s->aaddr, s->aport); else bprintf(s->listen, "[%s]:%s", s->aaddr, s->aport); macro_def(s->vl, s->name, "addr", "%s", s->aaddr); macro_def(s->vl, s->name, "port", "%s", s->aport); macro_def(s->vl, s->name, "sock", "%s", s->listen); } static void server_listen(struct server *s) { const char *err; CHECK_OBJ_NOTNULL(s, SERVER_MAGIC); if (s->sock >= 0) VTCP_close(&s->sock); if (VUS_is(s->listen)) server_listen_uds(s, &err); else server_listen_tcp(s, &err); if (err != NULL) vtc_fatal(s->vl, "Server listen address (%s) cannot be resolved: %s", s->listen, err); } /********************************************************************** * Server thread */ static int server_conn(void *priv, struct vtclog *vl) { struct server *s; struct sockaddr_storage addr_s; struct sockaddr *addr; char abuf[VTCP_ADDRBUFSIZE]; char pbuf[VTCP_PORTBUFSIZE]; socklen_t l; int fd; CAST_OBJ_NOTNULL(s, priv, SERVER_MAGIC); addr = (void*)&addr_s; l = sizeof addr_s; fd = accept(s->sock, addr, &l); if (fd < 0) vtc_fatal(vl, "Accept failed: %s", strerror(errno)); if (VUS_is(s->listen)) vtc_log(vl, 3, "accepted fd %d 0.0.0.0 0", fd); else { VTCP_hisname(fd, abuf, sizeof abuf, pbuf, sizeof pbuf); vtc_log(vl, 3, "accepted fd %d %s %s", fd, abuf, pbuf); } return (fd); } static void server_disc(void *priv, struct vtclog *vl, int *fdp) { int j; struct server *s; CAST_OBJ_NOTNULL(s, priv, SERVER_MAGIC); vtc_log(vl, 3, "shutting fd %d", *fdp); j = shutdown(*fdp, SHUT_WR); if (!vtc_stop && !VTCP_Check(j)) vtc_fatal(vl, "Shutdown failed: %s", strerror(errno)); VTCP_close(fdp); } static void server_start_thread(struct server *s) { s->run = 1; s->tp = Sess_Start_Thread( s, s->vsp, server_conn, server_disc, s->listen, &s->sock, s->spec ); } /********************************************************************** * Start the server thread */ static void server_start(struct 
server *s) { CHECK_OBJ_NOTNULL(s, SERVER_MAGIC); vtc_log(s->vl, 2, "Starting server"); server_listen(s); vtc_log(s->vl, 1, "Listen on %s", s->listen); server_start_thread(s); } /********************************************************************** */ static void * server_dispatch_wrk(void *priv) { struct server *s; struct vtclog *vl; int j, fd; CAST_OBJ_NOTNULL(s, priv, SERVER_MAGIC); assert(s->sock < 0); vl = vtc_logopen("%s", s->name); pthread_cleanup_push(vtc_logclose, vl); fd = s->fd; vtc_log(vl, 3, "start with fd %d", fd); fd = sess_process(vl, s->vsp, s->spec, fd, &s->sock, s->listen); vtc_log(vl, 3, "shutting fd %d", fd); j = shutdown(fd, SHUT_WR); if (!VTCP_Check(j)) vtc_fatal(vl, "Shutdown failed: %s", strerror(errno)); VTCP_close(&s->fd); vtc_log(vl, 2, "Ending"); pthread_cleanup_pop(0); vtc_logclose(vl); return (NULL); } static void * server_dispatch_thread(void *priv) { struct server *s, *s2; static int sn = 1; int fd; char snbuf[8]; struct vtclog *vl; struct sockaddr_storage addr_s; struct sockaddr *addr; socklen_t l; CAST_OBJ_NOTNULL(s, priv, SERVER_MAGIC); assert(s->sock >= 0); vl = vtc_logopen("%s", s->name); pthread_cleanup_push(vtc_logclose, vl); vtc_log(vl, 2, "Dispatch started on %s", s->listen); while (!vtc_stop) { addr = (void*)&addr_s; l = sizeof addr_s; fd = accept(s->sock, addr, &l); if (fd < 0) vtc_fatal(vl, "Accepted failed: %s", strerror(errno)); bprintf(snbuf, "s%d", sn++); vtc_log(vl, 3, "dispatch fd %d -> %s", fd, snbuf); s2 = server_new(snbuf, vl); s2->is_dispatch = 1; s2->spec = s->spec; bstrcpy(s2->listen, s->listen); s2->fd = fd; s2->run = 1; PTOK(pthread_create(&s2->tp, NULL, server_dispatch_wrk, s2)); } pthread_cleanup_pop(0); vtc_logclose(vl); NEEDLESS(return (NULL)); } static void server_dispatch(struct server *s) { CHECK_OBJ_NOTNULL(s, SERVER_MAGIC); server_listen(s); vtc_log(s->vl, 2, "Starting dispatch server"); s->run = 1; PTOK(pthread_create(&s->tp, NULL, server_dispatch_thread, s)); } /********************************************************************** * Force stop the server thread */ static void server_break(struct server *s) { void *res; CHECK_OBJ_NOTNULL(s, SERVER_MAGIC); vtc_log(s->vl, 2, "Breaking for server"); (void)pthread_cancel(s->tp); PTOK(pthread_join(s->tp, &res)); VTCP_close(&s->sock); s->tp = 0; s->run = 0; } /********************************************************************** * Wait for server thread to stop */ static void server_wait(struct server *s) { void *res; CHECK_OBJ_NOTNULL(s, SERVER_MAGIC); vtc_log(s->vl, 2, "Waiting for server (%d/%d)", s->sock, s->fd); PTOK(pthread_join(s->tp, &res)); if (res != NULL && !vtc_stop) vtc_fatal(s->vl, "Server returned \"%p\"", (char *)res); s->tp = 0; s->run = 0; } /********************************************************************** * Generate VCL backend decls for our servers */ void cmd_server_gen_vcl(struct vsb *vsb) { struct server *s; PTOK(pthread_mutex_lock(&server_mtx)); VTAILQ_FOREACH(s, &servers, list) { if (s->is_dispatch) continue; if (VUS_is(s->listen)) VSB_printf(vsb, "backend %s { .path = \"%s\"; }\n", s->name, s->listen); else VSB_printf(vsb, "backend %s { .host = \"%s\"; .port = \"%s\"; }\n", s->name, s->aaddr, s->aport); } PTOK(pthread_mutex_unlock(&server_mtx)); } /********************************************************************** * Generate VCL backend decls for our servers */ void cmd_server_gen_haproxy_conf(struct vsb *vsb) { struct server *s; PTOK(pthread_mutex_lock(&server_mtx)); VTAILQ_FOREACH(s, &servers, list) { if (! 
VUS_is(s->listen)) VSB_printf(vsb, "\n backend be%s\n" "\tserver srv%s %s:%s\n", s->name + 1, s->name + 1, s->aaddr, s->aport); else INCOMPL(); } VTAILQ_FOREACH(s, &servers, list) { if (! VUS_is(s->listen)) VSB_printf(vsb, "\n frontend http%s\n" "\tuse_backend be%s\n" "\tbind \"fd@${fe%s}\"\n", s->name + 1, s->name + 1, s->name + 1); else INCOMPL(); } PTOK(pthread_mutex_unlock(&server_mtx)); } /********************************************************************** * Server command dispatch */ void cmd_server(CMD_ARGS) { struct server *s; (void)priv; if (av == NULL) { /* Reset and free */ while (1) { PTOK(pthread_mutex_lock(&server_mtx)); s = VTAILQ_FIRST(&servers); CHECK_OBJ_ORNULL(s, SERVER_MAGIC); if (s != NULL) VTAILQ_REMOVE(&servers, s, list); PTOK(pthread_mutex_unlock(&server_mtx)); if (s == NULL) break; if (s->run) { (void)pthread_cancel(s->tp); server_wait(s); } if (s->sock >= 0) VTCP_close(&s->sock); server_delete(s); } return; } AZ(strcmp(av[0], "server")); av++; PTOK(pthread_mutex_lock(&server_mtx)); VTAILQ_FOREACH(s, &servers, list) if (!strcmp(s->name, av[0])) break; PTOK(pthread_mutex_unlock(&server_mtx)); if (s == NULL) s = server_new(av[0], vl); CHECK_OBJ_NOTNULL(s, SERVER_MAGIC); av++; for (; *av != NULL; av++) { if (vtc_error) break; if (!strcmp(*av, "-wait")) { if (!s->run) vtc_fatal(s->vl, "Server not -started"); server_wait(s); continue; } if (!strcmp(*av, "-break")) { server_break(s); continue; } /* * We do an implict -wait if people muck about with a * running server. */ if (s->run) server_wait(s); AZ(s->run); if (Sess_GetOpt(s->vsp, &av)) continue; if (!strcmp(*av, "-listen")) { if (s->sock >= 0) VTCP_close(&s->sock); bprintf(s->listen, "%s", av[1]); av++; continue; } if (!strcmp(*av, "-start")) { server_start(s); continue; } if (!strcmp(*av, "-dispatch")) { if (strcmp(s->name, "s0")) vtc_fatal(s->vl, "server -dispatch only works on s0"); server_dispatch(s); continue; } if (**av == '-') vtc_fatal(s->vl, "Unknown server argument: %s", *av); s->spec = *av; } } void init_server(void) { PTOK(pthread_mutex_init(&server_mtx, NULL)); } varnish-7.5.0/bin/varnishtest/vtc_sess.c000066400000000000000000000103651457605730600203360ustar00rootroot00000000000000/*- * Copyright (c) 2020 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "config.h" #include #include #include #include "vtc.h" #include "vtc_http.h" struct thread_arg { unsigned magic; #define THREAD_ARG_MAGIC 0xd5dc5f1c void *priv; sess_conn_f *conn_f; sess_disc_f *disc_f; const char *listen_addr; struct vtc_sess *vsp; int *asocket; const char *spec; }; struct vtc_sess * Sess_New(struct vtclog *vl, const char *name) { struct vtc_sess *vsp; ALLOC_OBJ(vsp, VTC_SESS_MAGIC); AN(vsp); vsp->vl = vl; REPLACE(vsp->name, name); vsp->repeat = 1; return (vsp); } void Sess_Destroy(struct vtc_sess **vspp) { struct vtc_sess *vsp; TAKE_OBJ_NOTNULL(vsp, vspp, VTC_SESS_MAGIC); REPLACE(vsp->name, NULL); FREE_OBJ(vsp); } int Sess_GetOpt(struct vtc_sess *vsp, char * const **avp) { char * const *av; int rv = 0; CHECK_OBJ_NOTNULL(vsp, VTC_SESS_MAGIC); AN(avp); av = *avp; AN(*av); if (!strcmp(*av, "-rcvbuf")) { AN(av[1]); vsp->rcvbuf = atoi(av[1]); av += 1; rv = 1; } else if (!strcmp(*av, "-repeat")) { AN(av[1]); vsp->repeat = atoi(av[1]); av += 1; rv = 1; } else if (!strcmp(*av, "-keepalive")) { vsp->keepalive = 1; rv = 1; } *avp = av; return (rv); } int sess_process(struct vtclog *vl, struct vtc_sess *vsp, const char *spec, int sock, int *sfd, const char *addr) { int rv; CHECK_OBJ_NOTNULL(vsp, VTC_SESS_MAGIC); rv = http_process(vl, vsp, spec, sock, sfd, addr, vsp->rcvbuf); return (rv); } static void * sess_thread(void *priv) { struct vtclog *vl; struct vtc_sess *vsp; struct thread_arg ta, *tap; int i, fd = -1; CAST_OBJ_NOTNULL(tap, priv, THREAD_ARG_MAGIC); ta = *tap; FREE_OBJ(tap); vsp = ta.vsp; CHECK_OBJ_NOTNULL(vsp, VTC_SESS_MAGIC); vl = vtc_logopen("%s", vsp->name); pthread_cleanup_push(vtc_logclose, vl); assert(vsp->repeat > 0); vtc_log(vl, 2, "Started on %s (%u iterations%s)", ta.listen_addr, vsp->repeat, vsp->keepalive ? " using keepalive" : ""); for (i = 0; i < vsp->repeat; i++) { if (fd < 0) fd = ta.conn_f(ta.priv, vl); fd = sess_process(vl, ta.vsp, ta.spec, fd, ta.asocket, ta.listen_addr); if (! vsp->keepalive) ta.disc_f(ta.priv, vl, &fd); } if (vsp->keepalive) ta.disc_f(ta.priv, vl, &fd); vtc_log(vl, 2, "Ending"); pthread_cleanup_pop(0); vtc_logclose(vl); return (NULL); } pthread_t Sess_Start_Thread( void *priv, struct vtc_sess *vsp, sess_conn_f *conn, sess_disc_f *disc, const char *listen_addr, int *asocket, const char *spec ) { struct thread_arg *ta; pthread_t pt; AN(priv); CHECK_OBJ_NOTNULL(vsp, VTC_SESS_MAGIC); AN(conn); AN(disc); AN(listen_addr); ALLOC_OBJ(ta, THREAD_ARG_MAGIC); AN(ta); ta->priv = priv; ta->vsp = vsp; ta->conn_f = conn; ta->disc_f = disc; ta->listen_addr = listen_addr; ta->asocket = asocket; ta->spec = spec; PTOK(pthread_create(&pt, NULL, sess_thread, ta)); return (pt); } varnish-7.5.0/bin/varnishtest/vtc_subr.c000066400000000000000000000145211457605730600203320ustar00rootroot00000000000000/*- * Copyright (c) 2008-2017 Varnish Software AS * All rights reserved. 
* * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include #include #include #include #include #include #include #include #include "vtc.h" #include "vct.h" #include "vnum.h" #include "vre.h" #include "vapi/vsig.h" struct vsb * vtc_hex_to_bin(struct vtclog *vl, const char *arg) { struct vsb *vsb; unsigned sh = 4; unsigned c, b = 0; vsb = VSB_new_auto(); AN(vsb); for (; *arg != '\0'; arg++) { if (vct_issp(*arg) || *arg == '\n') continue; c = (uint8_t)*arg; if (c >= '0' && c <= '9') b |= (c - 48U) << sh; else if (c >= 'A' && c <= 'F') b |= (c - 55U) << sh; else if (c >= 'a' && c <= 'f') b |= (c - 87U) << sh; else vtc_fatal(vl,"Illegal hex string"); sh = 4 - sh; if (sh == 4) { VSB_putc(vsb, b); b = 0; } } if (sh != 4) VSB_putc(vsb, b); AZ(VSB_finish(vsb)); return (vsb); } void vtc_expect(struct vtclog *vl, const char *olhs, const char *lhs, const char *cmp, const char *orhs, const char *rhs) { vre_t *vre; struct vsb vsb[1]; int error, erroroffset; int i, j, retval = -1; double fl, fr; char errbuf[VRE_ERROR_LEN]; j = lhs == NULL || rhs == NULL; if (lhs == NULL) lhs = ""; if (rhs == NULL) rhs = ""; if (!strcmp(cmp, "~") || !strcmp(cmp, "!~")) { vre = VRE_compile(rhs, 0, &error, &erroroffset, 1); if (vre == NULL) { AN(VSB_init(vsb, errbuf, sizeof errbuf)); AZ(VRE_error(vsb, error)); AZ(VSB_finish(vsb)); VSB_fini(vsb); vtc_fatal(vl, "REGEXP error: %s (@%d) (%s)", errbuf, erroroffset, rhs); } i = VRE_match(vre, lhs, 0, 0, NULL); retval = (i >= 0 && *cmp == '~') || (i < 0 && *cmp == '!'); VRE_free(&vre); } else if (!strcmp(cmp, "==")) { retval = strcmp(lhs, rhs) == 0; } else if (!strcmp(cmp, "!=")) { retval = strcmp(lhs, rhs) != 0; } else if (!strcmp(cmp, "-lt")) { retval = strtoul(lhs, NULL, 0) < strtoul(rhs, NULL, 0); } else if (!strcmp(cmp, "-le")) { retval = strtoul(lhs, NULL, 0) <= strtoul(rhs, NULL, 0); } else if (!strcmp(cmp, "-eq")) { retval = strtoul(lhs, NULL, 0) == strtoul(rhs, NULL, 0); } else if (!strcmp(cmp, "-ne")) { retval = strtoul(lhs, NULL, 0) != strtoul(rhs, NULL, 0); } else if (!strcmp(cmp, "-ge")) { retval = strtoul(lhs, NULL, 0) >= strtoul(rhs, NULL, 0); } else if (!strcmp(cmp, "-gt")) { retval = strtoul(lhs, NULL, 0) > strtoul(rhs, NULL, 0); } else if (j) { // fail inequality comparisons if either side 
is undef'ed retval = 0; } else { fl = VNUM(lhs); fr = VNUM(rhs); if (!strcmp(cmp, "<")) retval = isless(fl, fr); else if (!strcmp(cmp, ">")) retval = isgreater(fl, fr); else if (!strcmp(cmp, "<=")) retval = islessequal(fl, fr); else if (!strcmp(cmp, ">=")) retval = isgreaterequal(fl, fr); } if (retval == -1) vtc_fatal(vl, "EXPECT %s (%s) %s %s (%s) test not implemented", olhs, lhs, cmp, orhs, rhs); else if (retval == 0) vtc_fatal(vl, "EXPECT %s (%s) %s \"%s\" failed", olhs, lhs, cmp, rhs); else vtc_log(vl, 4, "EXPECT %s (%s) %s \"%s\" match", olhs, lhs, cmp, rhs); } /********************************************************************** * Wait for a subprocess. * * if expect_signal > 0, the process must die on that signal. * if expect_signal < 0, dying on that signal is allowed, but not required. * if allow_core > 0, a coredump is allowed, but not required. * otherwise, the process must die on exit(expect_status) */ void vtc_wait4(struct vtclog *vl, long pid, int expect_status, int expect_signal, int allow_core) { int status, r; struct rusage ru; r = wait4(pid, &status, 0, &ru); if (r < 0) vtc_fatal(vl, "wait4 failed on pid %ld: %s", pid, strerror(errno)); assert(r == pid); vtc_log(vl, 2, "WAIT4 pid=%ld status=0x%04x (user %.6f sys %.6f)", pid, status, ru.ru_utime.tv_sec + 1e-6 * ru.ru_utime.tv_usec, ru.ru_stime.tv_sec + 1e-6 * ru.ru_stime.tv_usec ); if (WIFEXITED(status) && expect_signal <= 0 && WEXITSTATUS(status) == expect_status) return; if (expect_signal < 0) expect_signal = -expect_signal; if (WIFSIGNALED(status) && WCOREDUMP(status) <= allow_core && WTERMSIG(status) == expect_signal) return; vtc_log(vl, 1, "Expected exit: 0x%x signal: %d core: %d", expect_status, expect_signal, allow_core); vtc_fatal(vl, "Bad exit status: 0x%04x exit 0x%x signal %d core %d", status, WEXITSTATUS(status), WIFSIGNALED(status) ? WTERMSIG(status) : 0, WCOREDUMP(status)); } void * vtc_record(struct vtclog *vl, int fd, struct vsb *vsb) { char buf[BUFSIZ]; int i; while (1) { errno = 0; i = read(fd, buf, sizeof buf - 1); if (i > 0) { if (vsb != NULL) VSB_bcat(vsb, buf, i); buf[i] = '\0'; vtc_dump(vl, 3, "debug", buf, -2); } else if (i == 0 && errno == 0) { vtc_log(vl, 4, "STDOUT EOF"); break; } else { vtc_log(vl, 4, "STDOUT read failed with %d - %s.", errno, strerror(errno)); break; } } return (NULL); } varnish-7.5.0/bin/varnishtest/vtc_syslog.c000066400000000000000000000350031457605730600206750ustar00rootroot00000000000000/*- * Copyright (c) 2008-2010 Varnish Software AS * All rights reserved. * * Author: Frédéric Lécaille * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include #include #include #include #include #include #include "vtc.h" #include "vsa.h" #include "vss.h" #include "vtcp.h" #include "vre.h" struct syslog_srv { unsigned magic; #define SYSLOG_SRV_MAGIC 0xbf28a692 char *name; struct vtclog *vl; VTAILQ_ENTRY(syslog_srv) list; char run; int repeat; char *spec; int sock; char bind[256]; int lvl; pthread_t tp; ssize_t rxbuf_left; size_t rxbuf_sz; char *rxbuf; vtim_dur timeout; }; static pthread_mutex_t syslog_mtx; static VTAILQ_HEAD(, syslog_srv) syslogs = VTAILQ_HEAD_INITIALIZER(syslogs); #define SYSLOGCMDS \ CMD_SYSLOG(expect) \ CMD_SYSLOG(recv) #define CMD_SYSLOG(nm) static cmd_f cmd_syslog_##nm; SYSLOGCMDS #undef CMD_SYSLOG static const struct cmds syslog_cmds[] = { #define CMD_SYSLOG(n) { #n, cmd_syslog_##n }, SYSLOGCMDS #undef CMD_SYSLOG { NULL, NULL } }; static const char * const syslog_levels[] = { "emerg", "alert", "crit", "err", "warning", "notice", "info", "debug", NULL, }; static int get_syslog_level(struct vtclog *vl, const char *lvl) { int i; for (i = 0; syslog_levels[i]; i++) if (!strcmp(lvl, syslog_levels[i])) return (i); vtc_fatal(vl, "wrong syslog level '%s'\n", lvl); } /*-------------------------------------------------------------------- * Check if a UDP syscall return value is fatal * XXX: Largely copied from VTCP, not sure if really applicable */ static int VUDP_Check(int a) { if (a == 0) return (1); if (errno == ECONNRESET) return (1); #if (defined (__SVR4) && defined (__sun)) || defined (__NetBSD__) /* * Solaris returns EINVAL if the other end unexpectedly reset the * connection. * This is a bug in Solaris and documented behaviour on NetBSD. */ if (errno == EINVAL || errno == ETIMEDOUT || errno == EPIPE) return (1); #elif defined (__APPLE__) /* * MacOS returns EINVAL if the other end unexpectedly reset * the connection. */ if (errno == EINVAL) return (1); #endif return (0); } /*-------------------------------------------------------------------- * When closing a UDP connection, a couple of errno's are legit, we * can't be held responsible for the other end wanting to talk to us. */ static void VUDP_close(int *s) { int i; i = close(*s); assert(VUDP_Check(i)); *s = -1; } /*-------------------------------------------------------------------- * Given a struct suckaddr, open a socket of the appropriate type, and bind * it to the requested address. * * If the address is an IPv6 address, the IPV6_V6ONLY option is set to * avoid conflicts between INADDR_ANY and IN6ADDR_ANY. 
*/ static int VUDP_bind(const struct suckaddr *sa, const char **errp) { #ifdef IPV6_V6ONLY int val; #endif int sd, e; socklen_t sl; const struct sockaddr *so; int proto; if (errp != NULL) *errp = NULL; proto = VSA_Get_Proto(sa); sd = socket(proto, SOCK_DGRAM, 0); if (sd < 0) { if (errp != NULL) *errp = "socket(2)"; return (-1); } #ifdef IPV6_V6ONLY /* forcibly use separate sockets for IPv4 and IPv6 */ val = 1; if (proto == AF_INET6 && setsockopt(sd, IPPROTO_IPV6, IPV6_V6ONLY, &val, sizeof val) != 0) { if (errp != NULL) *errp = "setsockopt(IPV6_V6ONLY, 1)"; e = errno; closefd(&sd); errno = e; return (-1); } #endif so = VSA_Get_Sockaddr(sa, &sl); if (bind(sd, so, sl) != 0) { if (errp != NULL) *errp = "bind(2)"; e = errno; closefd(&sd); errno = e; return (-1); } return (sd); } /*--------------------------------------------------------------------*/ struct udp_helper { const char **errp; }; static int v_matchproto_(vss_resolved_f) vudp_lo_cb(void *priv, const struct suckaddr *sa) { int sock; struct udp_helper *hp = priv; sock = VUDP_bind(sa, hp->errp); if (sock > 0) { *hp->errp = NULL; return (sock); } AN(*hp->errp); return (0); } static int VUDP_bind_on(const char *addr, const char *def_port, const char **errp) { struct udp_helper h; int sock; h.errp = errp; sock = VSS_resolver_socktype( addr, def_port, vudp_lo_cb, &h, errp, SOCK_DGRAM); if (*errp != NULL) return (-1); return (sock); } /********************************************************************** * Allocate and initialize a syslog */ static struct syslog_srv * syslog_new(const char *name, struct vtclog *vl) { struct syslog_srv *s; VTC_CHECK_NAME(vl, name, "Syslog", 'S'); ALLOC_OBJ(s, SYSLOG_SRV_MAGIC); AN(s); REPLACE(s->name, name); s->vl = vtc_logopen("%s", s->name); AN(s->vl); vtc_log_set_cmd(s->vl, syslog_cmds); bprintf(s->bind, "%s", default_listen_addr); s->repeat = 1; s->sock = -1; s->lvl = -1; s->timeout = vtc_maxdur * .5; // XXX vl = vtc_logopen("%s", s->name); AN(vl); s->rxbuf_sz = s->rxbuf_left = 2048*1024; s->rxbuf = malloc(s->rxbuf_sz); /* XXX */ AN(s->rxbuf); PTOK(pthread_mutex_lock(&syslog_mtx)); VTAILQ_INSERT_TAIL(&syslogs, s, list); PTOK(pthread_mutex_unlock(&syslog_mtx)); return (s); } /********************************************************************** * Clean up a syslog */ static void syslog_delete(struct syslog_srv *s) { CHECK_OBJ_NOTNULL(s, SYSLOG_SRV_MAGIC); macro_undef(s->vl, s->name, "addr"); macro_undef(s->vl, s->name, "port"); macro_undef(s->vl, s->name, "sock"); vtc_logclose(s->vl); free(s->name); free(s->rxbuf); /* XXX: MEMLEAK (?) (VSS ??) */ FREE_OBJ(s); } static void syslog_rx(const struct syslog_srv *s, int lvl) { ssize_t ret; while (!vtc_error) { /* Pointers to syslog priority value (see , rfc5424). 
*/ char *prib, *prie, *end; unsigned int prival; VTCP_set_read_timeout(s->sock, s->timeout); ret = recv(s->sock, s->rxbuf, s->rxbuf_sz - 1, 0); if (ret < 0) { if (errno == EINTR || errno == EAGAIN) continue; vtc_fatal(s->vl, "%s: recv failed (fd: %d read: %s", __func__, s->sock, strerror(errno)); } if (ret == 0) vtc_fatal(s->vl, "syslog rx timeout (fd: %d %.3fs ret: %zd)", s->sock, s->timeout, ret); s->rxbuf[ret] = '\0'; vtc_dump(s->vl, 4, "syslog", s->rxbuf, ret); prib = s->rxbuf; if (*prib++ != '<') vtc_fatal(s->vl, "syslog PRI, no '<'"); prie = strchr(prib, '>'); if (prie == NULL) vtc_fatal(s->vl, "syslog PRI, no '>'"); prival = strtoul(prib, &end, 10); if (end != prie) vtc_fatal(s->vl, "syslog PRI, bad number"); if (lvl >= 0 && lvl == (prival & 0x7)) return; } } /********************************************************************** * Syslog server bind */ static void syslog_bind(struct syslog_srv *s) { const char *err; char aaddr[VTCP_ADDRBUFSIZE]; char aport[VTCP_PORTBUFSIZE]; char buf[vsa_suckaddr_len]; const struct suckaddr *sua; CHECK_OBJ_NOTNULL(s, SYSLOG_SRV_MAGIC); if (s->sock >= 0) VUDP_close(&s->sock); s->sock = VUDP_bind_on(s->bind, "0", &err); if (err != NULL) vtc_fatal(s->vl, "Syslog server bind address (%s) cannot be resolved: %s", s->bind, err); assert(s->sock > 0); sua = VSA_getsockname(s->sock, buf, sizeof buf); AN(sua); VTCP_name(sua, aaddr, sizeof aaddr, aport, sizeof aport); macro_def(s->vl, s->name, "addr", "%s", aaddr); macro_def(s->vl, s->name, "port", "%s", aport); if (VSA_Get_Proto(sua) == AF_INET) macro_def(s->vl, s->name, "sock", "%s:%s", aaddr, aport); else macro_def(s->vl, s->name, "sock", "[%s]:%s", aaddr, aport); /* Record the actual port, and reuse it on subsequent starts */ bprintf(s->bind, "%s %s", aaddr, aport); } static void v_matchproto_(cmd_f) cmd_syslog_expect(CMD_ARGS) { struct syslog_srv *s; struct vsb vsb[1]; vre_t *vre; int error, erroroffset, i, ret; char *cmp, *spec, errbuf[VRE_ERROR_LEN]; (void)vl; CAST_OBJ_NOTNULL(s, priv, SYSLOG_SRV_MAGIC); AZ(strcmp(av[0], "expect")); av++; cmp = av[0]; spec = av[1]; AN(cmp); AN(spec); AZ(av[2]); assert(!strcmp(cmp, "~") || !strcmp(cmp, "!~")); vre = VRE_compile(spec, 0, &error, &erroroffset, 1); if (vre == NULL) { AN(VSB_init(vsb, errbuf, sizeof errbuf)); AZ(VRE_error(vsb, error)); AZ(VSB_finish(vsb)); VSB_fini(vsb); vtc_fatal(s->vl, "REGEXP error: '%s' (@%d) (%s)", errbuf, erroroffset, spec); } i = VRE_match(vre, s->rxbuf, 0, 0, NULL); VRE_free(&vre); ret = (i >= 0 && *cmp == '~') || (i < 0 && *cmp == '!'); if (!ret) vtc_fatal(s->vl, "EXPECT FAILED %s \"%s\"", cmp, spec); else vtc_log(s->vl, 4, "EXPECT MATCH %s \"%s\"", cmp, spec); } static void v_matchproto_(cmd_f) cmd_syslog_recv(CMD_ARGS) { int lvl; struct syslog_srv *s; CAST_OBJ_NOTNULL(s, priv, SYSLOG_SRV_MAGIC); (void)vl; AZ(strcmp(av[0], "recv")); av++; if (av[0] == NULL) lvl = s->lvl; else lvl = get_syslog_level(vl, av[0]); syslog_rx(s, lvl); } /********************************************************************** * Syslog server thread */ static void * syslog_thread(void *priv) { struct syslog_srv *s; int i; CAST_OBJ_NOTNULL(s, priv, SYSLOG_SRV_MAGIC); assert(s->sock >= 0); vtc_log(s->vl, 2, "Started on %s (level: %d)", s->bind, s->lvl); for (i = 0; i < s->repeat; i++) { if (s->repeat > 1) vtc_log(s->vl, 3, "Iteration %d", i); parse_string(s->vl, s, s->spec); vtc_log(s->vl, 3, "shutting fd %d", s->sock); } VUDP_close(&s->sock); vtc_log(s->vl, 2, "Ending"); return (NULL); } 
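/*
 * Editor's note: the following is an illustrative sketch, not part of the
 * original source.  It shows, under assumed names (the server "S1", the
 * "info" level and the "Started" pattern are arbitrary choices for the
 * example), how the syslog commands implemented in this file are typically
 * combined in a VTC test for a logging client such as haproxy:
 *
 *	syslog S1 -level info {
 *		recv
 *		expect ~ "Started"
 *	} -start
 *
 *	# point the logging client at ${S1_sock}, then:
 *
 *	syslog S1 -wait
 *
 * "recv" blocks until a datagram of the selected priority level arrives in
 * s->rxbuf, and "expect" matches a regular expression against that buffer;
 * the ${S1_addr}, ${S1_port} and ${S1_sock} macros are defined by
 * syslog_bind() below.
 */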
/********************************************************************** * Start the syslog thread */ static void syslog_start(struct syslog_srv *s) { CHECK_OBJ_NOTNULL(s, SYSLOG_SRV_MAGIC); vtc_log(s->vl, 2, "Starting syslog server"); if (s->sock == -1) syslog_bind(s); vtc_log(s->vl, 1, "Bound on %s", s->bind); s->run = 1; PTOK(pthread_create(&s->tp, NULL, syslog_thread, s)); } /********************************************************************** * Force stop the syslog thread */ static void syslog_stop(struct syslog_srv *s) { void *res; CHECK_OBJ_NOTNULL(s, SYSLOG_SRV_MAGIC); vtc_log(s->vl, 2, "Stopping for syslog server"); (void)pthread_cancel(s->tp); PTOK(pthread_join(s->tp, &res)); s->tp = 0; s->run = 0; } /********************************************************************** * Wait for syslog thread to stop */ static void syslog_wait(struct syslog_srv *s) { void *res; CHECK_OBJ_NOTNULL(s, SYSLOG_SRV_MAGIC); vtc_log(s->vl, 2, "Waiting for syslog server (%d)", s->sock); PTOK(pthread_join(s->tp, &res)); if (res != NULL && !vtc_stop) vtc_fatal(s->vl, "Syslog server returned \"%p\"", (char *)res); s->tp = 0; s->run = 0; } /* SECTION: syslog syslog * * Define and interact with syslog instances (for use with haproxy) * * To define a syslog server, you'll use this syntax:: * * syslog SNAME * * Arguments: * * SNAME * Identify the syslog server with a string which must start with 'S'. * * \-level STRING * Set the default syslog priority level used by any subsequent "recv" * command. * Any syslog dgram with a different level will be skipped by * "recv" command. This default level value may be superseded * by "recv" command if supplied as first argument: "recv ". * * \-start * Start the syslog server thread in the background. * * \-repeat * Instead of processing the specification only once, do it * NUMBER times. * * \-bind * Bind the syslog socket to a local address. * * \-wait * Wait for that thread to terminate. * * \-stop * Stop the syslog server thread. */ void v_matchproto_(cmd_f) cmd_syslog(CMD_ARGS) { struct syslog_srv *s; (void)priv; if (av == NULL) { /* Reset and free */ do { PTOK(pthread_mutex_lock(&syslog_mtx)); s = VTAILQ_FIRST(&syslogs); CHECK_OBJ_ORNULL(s, SYSLOG_SRV_MAGIC); if (s != NULL) VTAILQ_REMOVE(&syslogs, s, list); PTOK(pthread_mutex_unlock(&syslog_mtx)); if (s != NULL) { if (s->run) { (void)pthread_cancel(s->tp); syslog_wait(s); } if (s->sock >= 0) VUDP_close(&s->sock); syslog_delete(s); } } while (s != NULL); return; } AZ(strcmp(av[0], "syslog")); av++; PTOK(pthread_mutex_lock(&syslog_mtx)); VTAILQ_FOREACH(s, &syslogs, list) if (!strcmp(s->name, av[0])) break; PTOK(pthread_mutex_unlock(&syslog_mtx)); if (s == NULL) s = syslog_new(av[0], vl); CHECK_OBJ_NOTNULL(s, SYSLOG_SRV_MAGIC); av++; for (; *av != NULL; av++) { if (vtc_error) break; if (!strcmp(*av, "-wait")) { if (!s->run) vtc_fatal(s->vl, "Syslog server not -started"); syslog_wait(s); continue; } if (!strcmp(*av, "-stop")) { syslog_stop(s); continue; } /* * We do an implict -wait if people muck about with a * running syslog. 
* This only works if the previous ->spec has completed */ if (s->run) syslog_wait(s); AZ(s->run); if (!strcmp(*av, "-repeat")) { AN(av[1]); s->repeat = atoi(av[1]); av++; continue; } if (!strcmp(*av, "-bind")) { AN(av[1]); bprintf(s->bind, "%s", av[1]); av++; syslog_bind(s); continue; } if (!strcmp(*av, "-level")) { AN(av[1]); s->lvl = get_syslog_level(vl, av[1]); av++; continue; } if (!strcmp(*av, "-start")) { syslog_start(s); continue; } if (**av == '-') vtc_fatal(s->vl, "Unknown syslog argument: %s", *av); s->spec = *av; } } void init_syslog(void) { PTOK(pthread_mutex_init(&syslog_mtx, NULL)); } varnish-7.5.0/bin/varnishtest/vtc_tunnel.c000066400000000000000000000441201457605730600206620ustar00rootroot00000000000000/*- * Copyright (c) 2020 Varnish Software * All rights reserved. * * Author: Dridi Boukelmoune * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "config.h" #include #include #include #include #include #include #include #include #include #include "vtc.h" #include "vsa.h" #include "vtcp.h" /* SECTION: tunnel tunnel * * The goal of a tunnel is to help control the data transfer between two * parties, for example to trigger socket timeouts in the middle of protocol * frames, without the need to change how both parties are implemented. * * A tunnel accepts a connection and then connects on behalf of the source to * the desired destination. Once both connections are established the tunnel * will transfer bytes unchanged between the source and destination. Transfer * can be interrupted, usually with the help of synchronization methods like * barriers. Once the transfer is paused, it is possible to let a specific * amount of bytes move in either direction. * * SECTION: tunnel.args Arguments * * \-start * Start the tunnel in background, processing the last given * specification. * * \-start+pause * Start the tunnel, but already paused. * * \-wait * Block until the thread finishes. * * \-listen STRING * Dictate the listening socket for the server. STRING is of the form * "IP PORT", or "HOST PORT". * * Listens by defaults to a local random port. * * \-connect STRING * Indicate the server to connect to. STRING is also of the form * "IP PORT", or "HOST PORT". * * Connects by default to a varnish instance called ``v1``. 
* * SECTION: tunnel.spec Specification * * The specification contains a list of tunnel commands that can be combined * with barriers and delays. For example:: * * tunnel t1 { * barrier b1 sync * pause * delay 1 * send 42 * barrier b2 sync * resume * } -start * * If one end of the tunnel is closed before the end of the specification * the test case will fail. A specification that ends in a paused state will * implicitely resume the tunnel. */ enum tunnel_state_e { TUNNEL_ACCEPT, TUNNEL_RUNNING, TUNNEL_PAUSED, TUNNEL_SPEC_DONE, TUNNEL_POLL_DONE, TUNNEL_STOPPED, }; struct tunnel_lane { char buf[1024]; ssize_t buf_len; size_t wrk_len; int *rfd; int *wfd; }; struct tunnel { unsigned magic; #define TUNNEL_MAGIC 0x7f59913d char *name; struct vtclog *vl; VTAILQ_ENTRY(tunnel) list; enum tunnel_state_e state; unsigned start_paused; char *spec; char connect[256]; int csock; char listen[256]; int lsock; char laddr[VTCP_ADDRBUFSIZE]; char lport[VTCP_PORTBUFSIZE]; int asock; struct tunnel_lane send_lane[1]; struct tunnel_lane recv_lane[1]; pthread_mutex_t mtx; /* state and lanes->*_len */ pthread_cond_t cond; pthread_t tspec; pthread_t tpoll; }; static pthread_mutex_t tunnel_mtx; static VTAILQ_HEAD(, tunnel) tunnels = VTAILQ_HEAD_INITIALIZER(tunnels); /********************************************************************** * Is the tunnel still operating? */ static unsigned tunnel_is_open(struct tunnel *t) { unsigned is_open; PTOK(pthread_mutex_lock(&t->mtx)); is_open = (t->send_lane->buf_len >= 0 && t->recv_lane->buf_len >= 0); PTOK(pthread_mutex_unlock(&t->mtx)); return (is_open); } /********************************************************************** * SECTION: tunnel.spec.pause * * pause * Wait for in-flight bytes to be transferred and pause the tunnel. * * The tunnel must be running. */ static void cmd_tunnel_pause(CMD_ARGS) { struct tunnel *t; CAST_OBJ_NOTNULL(t, priv, TUNNEL_MAGIC); AZ(av[1]); if (!tunnel_is_open(t)) vtc_fatal(vl, "Tunnel already closed"); PTOK(pthread_mutex_lock(&t->mtx)); if (t->state == TUNNEL_PAUSED) { PTOK(pthread_mutex_unlock(&t->mtx)); vtc_fatal(vl, "Tunnel already paused"); } assert(t->state == TUNNEL_RUNNING); t->state = TUNNEL_PAUSED; PTOK(pthread_cond_signal(&t->cond)); PTOK(pthread_cond_wait(&t->cond, &t->mtx)); PTOK(pthread_mutex_unlock(&t->mtx)); } /********************************************************************** * SECTION: tunnel.spec.send * * send NUMBER * Wait until NUMBER bytes are transferred from source to * destination. * * The tunnel must be paused, it remains paused afterwards. */ static void cmd_tunnel_send(CMD_ARGS) { struct tunnel *t; unsigned len; CAST_OBJ_NOTNULL(t, priv, TUNNEL_MAGIC); AN(av[1]); AZ(av[2]); len = atoi(av[1]); if (!tunnel_is_open(t)) vtc_fatal(vl, "Tunnel already closed"); PTOK(pthread_mutex_lock(&t->mtx)); if (t->state == TUNNEL_RUNNING) { PTOK(pthread_mutex_unlock(&t->mtx)); vtc_fatal(vl, "Tunnel still running"); } assert(t->state == TUNNEL_PAUSED); AZ(t->send_lane->wrk_len); AZ(t->recv_lane->wrk_len); if (!strcmp(av[0], "send")) t->send_lane->wrk_len = len; else t->recv_lane->wrk_len = len; PTOK(pthread_cond_signal(&t->cond)); PTOK(pthread_cond_wait(&t->cond, &t->mtx)); PTOK(pthread_mutex_unlock(&t->mtx)); } /********************************************************************** * SECTION: tunnel.spec.recv * * recv NUMBER * Wait until NUMBER bytes are transferred from destination to * source. * * The tunnel must be paused, it remains paused afterwards. 
*/ static void cmd_tunnel_recv(CMD_ARGS) { cmd_tunnel_send(av, priv, vl); } /********************************************************************** * SECTION: tunnel.spec.resume * * resume * Resume the transfer of bytes in both directions. * * The tunnel must be paused. */ static void cmd_tunnel_resume(CMD_ARGS) { struct tunnel *t; CAST_OBJ_NOTNULL(t, priv, TUNNEL_MAGIC); AZ(av[1]); if (!tunnel_is_open(t)) vtc_fatal(vl, "Tunnel already closed"); PTOK(pthread_mutex_lock(&t->mtx)); if (t->state == TUNNEL_RUNNING) { PTOK(pthread_mutex_unlock(&t->mtx)); vtc_fatal(vl, "Tunnel already running"); } assert(t->state == TUNNEL_PAUSED); t->state = TUNNEL_RUNNING; PTOK(pthread_cond_signal(&t->cond)); PTOK(pthread_mutex_unlock(&t->mtx)); } static const struct cmds tunnel_cmds[] = { #define CMD_TUNNEL(n) { #n, cmd_tunnel_##n }, CMD_TUNNEL(pause) CMD_TUNNEL(send) CMD_TUNNEL(recv) CMD_TUNNEL(resume) #undef CMD_TUNNEL { NULL, NULL } }; /********************************************************************** * Tunnel poll thread */ static void tunnel_read(struct tunnel *t, struct vtclog *vl, const struct pollfd *pfd, struct tunnel_lane *lane) { size_t len; ssize_t res; enum tunnel_state_e state; assert(pfd->fd == *lane->rfd); if (!(pfd->revents & POLLIN)) return; PTOK(pthread_mutex_lock(&t->mtx)); AZ(lane->buf_len); len = lane->wrk_len; state = t->state; PTOK(pthread_mutex_unlock(&t->mtx)); if (len == 0 && state == TUNNEL_PAUSED) return; if (len == 0 || len > sizeof lane->buf) len = sizeof lane->buf; res = read(pfd->fd, lane->buf, len); if (res < 0) vtc_fatal(vl, "Read failed: %s", strerror(errno)); PTOK(pthread_mutex_lock(&t->mtx)); lane->buf_len = (res == 0) ? -1 : res; PTOK(pthread_mutex_unlock(&t->mtx)); } static void tunnel_write(struct tunnel *t, struct vtclog *vl, struct tunnel_lane *lane, const char *action) { const char *p; ssize_t res, l; p = lane->buf; l = lane->buf_len; if (l > 0) vtc_log(vl, 3, "%s %zd bytes", action, l); while (l > 0) { res = write(*lane->wfd, p, l); if (res <= 0) vtc_fatal(vl, "Write failed: %s", strerror(errno)); l -= res; p += res; } PTOK(pthread_mutex_lock(&t->mtx)); if (lane->wrk_len > 0 && lane->buf_len != -1) { assert(lane->buf_len >= 0); assert(lane->wrk_len >= (size_t)lane->buf_len); lane->wrk_len -= lane->buf_len; } lane->buf_len = l; PTOK(pthread_mutex_unlock(&t->mtx)); } static void * tunnel_poll_thread(void *priv) { struct tunnel *t; struct vtclog *vl; struct pollfd pfd[2]; enum tunnel_state_e state; int res; CAST_OBJ_NOTNULL(t, priv, TUNNEL_MAGIC); vl = vtc_logopen("%s", t->name); pthread_cleanup_push(vtc_logclose, vl); while (tunnel_is_open(t) && !vtc_stop) { PTOK(pthread_mutex_lock(&t->mtx)); /* NB: can be woken up by `tunnel tX -wait` */ while (t->state == TUNNEL_ACCEPT && !vtc_stop) PTOK(pthread_cond_wait(&t->cond, &t->mtx)); state = t->state; PTOK(pthread_mutex_unlock(&t->mtx)); if (vtc_stop) break; assert(state < TUNNEL_POLL_DONE); memset(pfd, 0, sizeof pfd); pfd[0].fd = *t->send_lane->rfd; pfd[1].fd = *t->recv_lane->rfd; pfd[0].events = POLLIN; pfd[1].events = POLLIN; res = poll(pfd, 2, 100); if (res == -1) vtc_fatal(vl, "Poll failed: %s", strerror(errno)); tunnel_read(t, vl, &pfd[0], t->send_lane); tunnel_read(t, vl, &pfd[1], t->recv_lane); PTOK(pthread_mutex_lock(&t->mtx)); if (t->state == TUNNEL_PAUSED && t->send_lane->wrk_len == 0 && t->recv_lane->wrk_len == 0) { AZ(t->send_lane->buf_len); AZ(t->recv_lane->buf_len); PTOK(pthread_cond_signal(&t->cond)); PTOK(pthread_cond_wait(&t->cond, &t->mtx)); } PTOK(pthread_mutex_unlock(&t->mtx)); if (vtc_stop) 
break; tunnel_write(t, vl, t->send_lane, "Sending"); tunnel_write(t, vl, t->recv_lane, "Receiving"); } PTOK(pthread_mutex_lock(&t->mtx)); if (t->state != TUNNEL_SPEC_DONE && !vtc_stop) { PTOK(pthread_cond_signal(&t->cond)); PTOK(pthread_cond_wait(&t->cond, &t->mtx)); } PTOK(pthread_mutex_unlock(&t->mtx)); pthread_cleanup_pop(0); vtc_logclose(vl); t->state = TUNNEL_POLL_DONE; return (NULL); } /********************************************************************** * Tunnel spec thread */ static void tunnel_accept(struct tunnel *t, struct vtclog *vl) { struct vsb *vsb; const char *addr, *err; int afd, cfd; CHECK_OBJ_NOTNULL(t, TUNNEL_MAGIC); assert(t->lsock >= 0); assert(t->asock < 0); assert(t->csock < 0); assert(t->state == TUNNEL_ACCEPT); vtc_log(vl, 4, "Accepting"); afd = accept(t->lsock, NULL, NULL); if (afd < 0) vtc_fatal(vl, "Accept failed: %s", strerror(errno)); vtc_log(vl, 3, "Accepted socket fd is %d", afd); vsb = macro_expand(vl, t->connect); AN(vsb); addr = VSB_data(vsb); cfd = VTCP_open(addr, NULL, 10., &err); if (cfd < 0) vtc_fatal(vl, "Failed to open %s: %s", addr, err); vtc_log(vl, 3, "Connected socket fd is %d", cfd); VSB_destroy(&vsb); VTCP_blocking(afd); VTCP_blocking(cfd); PTOK(pthread_mutex_lock(&t->mtx)); t->asock = afd; t->csock = cfd; t->send_lane->buf_len = 0; t->send_lane->wrk_len = 0; t->recv_lane->buf_len = 0; t->recv_lane->wrk_len = 0; if (t->start_paused) { t->state = TUNNEL_PAUSED; t->start_paused = 0; } else t->state = TUNNEL_RUNNING; PTOK(pthread_cond_signal(&t->cond)); PTOK(pthread_mutex_unlock(&t->mtx)); } static void * tunnel_spec_thread(void *priv) { struct tunnel *t; struct vtclog *vl; enum tunnel_state_e state; CAST_OBJ_NOTNULL(t, priv, TUNNEL_MAGIC); AN(*t->connect); vl = vtc_logopen("%s", t->name); vtc_log_set_cmd(vl, tunnel_cmds); pthread_cleanup_push(vtc_logclose, vl); tunnel_accept(t, vl); parse_string(vl, t, t->spec); PTOK(pthread_mutex_lock(&t->mtx)); state = t->state; PTOK(pthread_mutex_unlock(&t->mtx)); if (state == TUNNEL_PAUSED && !vtc_stop) parse_string(vl, t, "resume"); PTOK(pthread_mutex_lock(&t->mtx)); t->state = TUNNEL_SPEC_DONE; PTOK(pthread_cond_signal(&t->cond)); PTOK(pthread_mutex_unlock(&t->mtx)); vtc_log(vl, 2, "Ending"); pthread_cleanup_pop(0); vtc_logclose(vl); return (NULL); } /********************************************************************** * Tunnel management */ static struct tunnel * tunnel_new(const char *name) { struct tunnel *t; ALLOC_OBJ(t, TUNNEL_MAGIC); AN(t); REPLACE(t->name, name); t->vl = vtc_logopen("%s", name); AN(t->vl); t->state = TUNNEL_STOPPED; bprintf(t->connect, "%s", "${v1_sock}"); bprintf(t->listen, "%s", default_listen_addr); t->csock = -1; t->lsock = -1; t->asock = -1; t->send_lane->rfd = &t->asock; t->send_lane->wfd = &t->csock; t->recv_lane->rfd = &t->csock; t->recv_lane->wfd = &t->asock; PTOK(pthread_mutex_init(&t->mtx, NULL)); PTOK(pthread_cond_init(&t->cond, NULL)); PTOK(pthread_mutex_lock(&tunnel_mtx)); VTAILQ_INSERT_TAIL(&tunnels, t, list); PTOK(pthread_mutex_unlock(&tunnel_mtx)); return (t); } static void tunnel_delete(struct tunnel *t) { CHECK_OBJ_NOTNULL(t, TUNNEL_MAGIC); assert(t->asock < 0); assert(t->csock < 0); if (t->lsock >= 0) VTCP_close(&t->lsock); macro_undef(t->vl, t->name, "addr"); macro_undef(t->vl, t->name, "port"); macro_undef(t->vl, t->name, "sock"); vtc_logclose(t->vl); (void)pthread_mutex_destroy(&t->mtx); (void)pthread_cond_destroy(&t->cond); free(t->name); FREE_OBJ(t); } /********************************************************************** * Tunnel listen */ static void 
tunnel_listen(struct tunnel *t) { char buf[vsa_suckaddr_len]; const struct suckaddr *sua; const char *err; if (t->lsock >= 0) VTCP_close(&t->lsock); t->lsock = VTCP_listen_on(t->listen, "0", 1, &err); if (err != NULL) vtc_fatal(t->vl, "Tunnel listen address (%s) cannot be resolved: %s", t->listen, err); assert(t->lsock > 0); sua = VSA_getsockname(t->lsock, buf, sizeof buf); AN(sua); VTCP_name(sua, t->laddr, sizeof t->laddr, t->lport, sizeof t->lport); /* Record the actual port, and reuse it on subsequent starts */ if (VSA_Get_Proto(sua) == AF_INET) bprintf(t->listen, "%s:%s", t->laddr, t->lport); else bprintf(t->listen, "[%s]:%s", t->laddr, t->lport); macro_def(t->vl, t->name, "addr", "%s", t->laddr); macro_def(t->vl, t->name, "port", "%s", t->lport); macro_def(t->vl, t->name, "sock", "%s %s", t->laddr, t->lport); } /********************************************************************** * Start the tunnel thread */ static void tunnel_start(struct tunnel *t) { CHECK_OBJ_NOTNULL(t, TUNNEL_MAGIC); vtc_log(t->vl, 2, "Starting tunnel"); tunnel_listen(t); vtc_log(t->vl, 1, "Listen on %s", t->listen); assert(t->state == TUNNEL_STOPPED); t->state = TUNNEL_ACCEPT; t->send_lane->buf_len = 0; t->send_lane->wrk_len = 0; t->recv_lane->buf_len = 0; t->recv_lane->wrk_len = 0; PTOK(pthread_create(&t->tpoll, NULL, tunnel_poll_thread, t)); PTOK(pthread_create(&t->tspec, NULL, tunnel_spec_thread, t)); } static void tunnel_start_pause(struct tunnel *t) { CHECK_OBJ_NOTNULL(t, TUNNEL_MAGIC); t->start_paused = 1; tunnel_start(t); } /********************************************************************** * Wait for tunnel thread to stop */ static void tunnel_wait(struct tunnel *t) { void *res; CHECK_OBJ_NOTNULL(t, TUNNEL_MAGIC); vtc_log(t->vl, 2, "Waiting for tunnel"); PTOK(pthread_cond_signal(&t->cond)); PTOK(pthread_join(t->tspec, &res)); if (res != NULL && !vtc_stop) vtc_fatal(t->vl, "Tunnel spec returned \"%p\"", res); PTOK(pthread_join(t->tpoll, &res)); if (res != NULL && !vtc_stop) vtc_fatal(t->vl, "Tunnel poll returned \"%p\"", res); if (t->csock >= 0) VTCP_close(&t->csock); if (t->asock >= 0) VTCP_close(&t->asock); t->tspec = 0; t->tpoll = 0; t->state = TUNNEL_STOPPED; } /********************************************************************** * Reap tunnel */ static void tunnel_reset(void) { struct tunnel *t; while (1) { PTOK(pthread_mutex_lock(&tunnel_mtx)); t = VTAILQ_FIRST(&tunnels); CHECK_OBJ_ORNULL(t, TUNNEL_MAGIC); if (t != NULL) VTAILQ_REMOVE(&tunnels, t, list); PTOK(pthread_mutex_unlock(&tunnel_mtx)); if (t == NULL) break; if (t->state != TUNNEL_STOPPED) tunnel_wait(t); tunnel_delete(t); } } /********************************************************************** * Tunnel command dispatch */ void cmd_tunnel(CMD_ARGS) { struct tunnel *t; (void)priv; if (av == NULL) { /* Reset and free */ tunnel_reset(); return; } AZ(strcmp(av[0], "tunnel")); av++; VTC_CHECK_NAME(vl, av[0], "Tunnel", 't'); PTOK(pthread_mutex_lock(&tunnel_mtx)); VTAILQ_FOREACH(t, &tunnels, list) if (!strcmp(t->name, av[0])) break; PTOK(pthread_mutex_unlock(&tunnel_mtx)); if (t == NULL) t = tunnel_new(av[0]); CHECK_OBJ_NOTNULL(t, TUNNEL_MAGIC); av++; for (; *av != NULL; av++) { if (vtc_error) break; if (!strcmp(*av, "-wait")) { if (t->state == TUNNEL_STOPPED) vtc_fatal(t->vl, "Tunnel not -started"); tunnel_wait(t); continue; } /* Don't mess with a running tunnel */ if (t->state != TUNNEL_STOPPED) tunnel_wait(t); assert(t->state == TUNNEL_STOPPED); if (!strcmp(*av, "-connect")) { bprintf(t->connect, "%s", av[1]); av++; continue; } if 
(!strcmp(*av, "-listen")) { bprintf(t->listen, "%s", av[1]); av++; continue; } if (!strcmp(*av, "-start")) { tunnel_start(t); continue; } if (!strcmp(*av, "-start+pause")) { tunnel_start_pause(t); continue; } if (**av == '-') vtc_fatal(t->vl, "Unknown tunnel argument: %s", *av); t->spec = *av; } } void init_tunnel(void) { PTOK(pthread_mutex_init(&tunnel_mtx, NULL)); } varnish-7.5.0/bin/varnishtest/vtc_varnish.c000066400000000000000000000763771457605730600210520ustar00rootroot00000000000000/*- * Copyright (c) 2008-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #ifdef VTEST_WITH_VTC_VARNISH #include "config.h" #include #include #include #include #include #include #include #include #include #include #include "vtc.h" #include "vapi/vsc.h" #include "vapi/vsl.h" #include "vapi/vsm.h" #include "vcli.h" #include "vjsn.h" #include "vre.h" #include "vsub.h" #include "vtcp.h" #include "vtim.h" struct varnish { unsigned magic; #define VARNISH_MAGIC 0x208cd8e3 char *name; struct vtclog *vl; VTAILQ_ENTRY(varnish) list; struct vsb *args; int fds[4]; pid_t pid; double syntax; pthread_t tp; pthread_t tp_vsl; int expect_exit; int cli_fd; int vcl_nbr; char *workdir; char *jail; char *proto; struct vsm *vsm_vsl; struct vsm *vsm_vsc; struct vsc *vsc; int has_a_arg; unsigned vsl_tag_count[256]; volatile int vsl_rec; volatile int vsl_idle; }; #define NONSENSE "%XJEIFLH|)Xspa8P" static VTAILQ_HEAD(, varnish) varnishes = VTAILQ_HEAD_INITIALIZER(varnishes); /********************************************************************** * Ask a question over CLI */ static enum VCLI_status_e varnish_ask_cli(const struct varnish *v, const char *cmd, char **repl) { int i; unsigned retval; char *r; if (cmd != NULL) { vtc_dump(v->vl, 4, "CLI TX", cmd, -1); i = write(v->cli_fd, cmd, strlen(cmd)); if (i != strlen(cmd) && !vtc_stop) vtc_fatal(v->vl, "CLI write failed (%s) = %u %s", cmd, errno, strerror(errno)); i = write(v->cli_fd, "\n", 1); if (i != 1 && !vtc_stop) vtc_fatal(v->vl, "CLI write failed (%s) = %u %s", cmd, errno, strerror(errno)); } i = VCLI_ReadResult(v->cli_fd, &retval, &r, vtc_maxdur); if (i != 0 && !vtc_stop) vtc_fatal(v->vl, "CLI failed (%s) = %d %u %s", cmd != NULL ? 
cmd : "NULL", i, retval, r); vtc_log(v->vl, 3, "CLI RX %u", retval); vtc_dump(v->vl, 4, "CLI RX", r, -1); if (repl != NULL) *repl = r; else free(r); return ((enum VCLI_status_e)retval); } /********************************************************************** * */ static void wait_stopped(const struct varnish *v) { char *r = NULL; enum VCLI_status_e st; vtc_log(v->vl, 3, "wait-stopped"); while (1) { st = varnish_ask_cli(v, "status", &r); if (st != CLIS_OK) vtc_fatal(v->vl, "CLI status command failed: %u %s", st, r); if (!strcmp(r, "Child in state stopped")) { free(r); break; } free(r); r = NULL; (void)usleep(200000); } } /********************************************************************** * */ static void wait_running(const struct varnish *v) { char *r = NULL; enum VCLI_status_e st; while (1) { vtc_log(v->vl, 3, "wait-running"); st = varnish_ask_cli(v, "status", &r); if (st != CLIS_OK) vtc_fatal(v->vl, "CLI status command failed: %u %s", st, r); if (!strcmp(r, "Child in state stopped")) vtc_fatal(v->vl, "Child stopped before running: %u %s", st, r); if (!strcmp(r, "Child in state running")) { free(r); r = NULL; st = varnish_ask_cli(v, "debug.listen_address", &r); if (st != CLIS_OK) vtc_fatal(v->vl, "CLI status command failed: %u %s", st, r); free(r); break; } free(r); r = NULL; (void)usleep(200000); } } /********************************************************************** * Varnishlog gatherer thread */ static void * varnishlog_thread(void *priv) { struct varnish *v; struct VSL_data *vsl; struct vsm *vsm; struct VSL_cursor *c; enum VSL_tag_e tag; uint64_t vxid; unsigned len; const char *tagname, *data; int type, i, opt; struct vsb *vsb = NULL; CAST_OBJ_NOTNULL(v, priv, VARNISH_MAGIC); vsl = VSL_New(); AN(vsl); vsm = v->vsm_vsl; c = NULL; opt = 0; while (v->fds[1] > 0 || c != NULL) { //lint !e845 bug in flint if (c == NULL) { if (vtc_error) break; VTIM_sleep(0.1); (void)VSM_Status(vsm); c = VSL_CursorVSM(vsl, vsm, opt); if (c == NULL) { vtc_log(v->vl, 3, "vsl|%s", VSL_Error(vsl)); VSL_ResetError(vsl); continue; } } AN(c); opt = VSL_COPT_TAIL; while (1) { i = VSL_Next(c); if (i != 1) break; v->vsl_rec = 1; tag = VSL_TAG(c->rec.ptr); vxid = VSL_ID(c->rec.ptr); if (tag == SLT__Batch) continue; tagname = VSL_tags[tag]; len = VSL_LEN(c->rec.ptr); type = VSL_CLIENT(c->rec.ptr) ? 'c' : VSL_BACKEND(c->rec.ptr) ? 
'b' : '-'; data = VSL_CDATA(c->rec.ptr); v->vsl_tag_count[tag]++; if (VSL_tagflags[tag] & SLT_F_BINARY) { if (vsb == NULL) vsb = VSB_new_auto(); VSB_clear(vsb); VSB_quote(vsb, data, len, VSB_QUOTE_HEX); AZ(VSB_finish(vsb)); /* +2 to skip "0x" */ vtc_log(v->vl, 4, "vsl| %10ju %-15s %c [%s]", (uintmax_t)vxid, tagname, type, VSB_data(vsb) + 2); } else { vtc_log(v->vl, 4, "vsl| %10ju %-15s %c %.*s", (uintmax_t)vxid, tagname, type, (int)len, data); } } if (i == 0) { /* Nothing to do but wait */ v->vsl_idle++; if (!(VSM_Status(vsm) & VSM_WRK_RUNNING)) { /* Abandoned - try reconnect */ VSL_DeleteCursor(c); c = NULL; } else { VTIM_sleep(0.1); } } else if (i == -2) { /* Abandoned - try reconnect */ VSL_DeleteCursor(c); c = NULL; } else break; } if (c) VSL_DeleteCursor(c); VSL_Delete(vsl); if (vsb != NULL) VSB_destroy(&vsb); return (NULL); } /********************************************************************** * Allocate and initialize a varnish */ static struct varnish * varnish_new(const char *name) { struct varnish *v; struct vsb *vsb; char buf[1024]; ALLOC_OBJ(v, VARNISH_MAGIC); AN(v); REPLACE(v->name, name); REPLACE(v->jail, ""); v->vl = vtc_logopen("%s", name); AN(v->vl); vsb = macro_expandf(v->vl, "${tmpdir}/%s", name); AN(vsb); v->workdir = strdup(VSB_data(vsb)); AN(v->workdir); VSB_destroy(&vsb); bprintf(buf, "rm -rf %s ; mkdir -p %s", v->workdir, v->workdir); AZ(system(buf)); v->args = VSB_new_auto(); v->cli_fd = -1; VTAILQ_INSERT_TAIL(&varnishes, v, list); return (v); } /********************************************************************** * Delete a varnish instance */ static void varnish_delete(struct varnish *v) { CHECK_OBJ_NOTNULL(v, VARNISH_MAGIC); vtc_logclose(v->vl); free(v->name); free(v->jail); free(v->workdir); VSB_destroy(&v->args); if (v->vsc != NULL) VSC_Destroy(&v->vsc, v->vsm_vsc); if (v->vsm_vsc != NULL) VSM_Destroy(&v->vsm_vsc); if (v->vsm_vsl != NULL) VSM_Destroy(&v->vsm_vsl); /* * We do not delete the workdir, it may contain stuff people * want (coredumps, shmlog/stats etc), and trying to divine * "may want" is just too much trouble. Leave it around and * nuke it at the start of the next test-run. */ /* XXX: MEMLEAK (?) 
*/ FREE_OBJ(v); } /********************************************************************** * Varnish listener */ static void * varnish_thread(void *priv) { struct varnish *v; CAST_OBJ_NOTNULL(v, priv, VARNISH_MAGIC); return (vtc_record(v->vl, v->fds[0], NULL)); } /********************************************************************** * Launch a Varnish */ static void varnish_launch(struct varnish *v) { struct vsb *vsb, *vsb1; int i, nfd, asock; char abuf[128], pbuf[128]; struct pollfd fd[3]; enum VCLI_status_e u; const char *err; char *r = NULL; /* Create listener socket */ asock = VTCP_listen_on(default_listen_addr, NULL, 1, &err); if (err != NULL) vtc_fatal(v->vl, "Create CLI listen socket failed: %s", err); assert(asock > 0); VTCP_myname(asock, abuf, sizeof abuf, pbuf, sizeof pbuf); AZ(VSB_finish(v->args)); vtc_log(v->vl, 2, "Launch"); vsb = VSB_new_auto(); AN(vsb); VSB_cat(vsb, "cd ${pwd} &&"); VSB_printf(vsb, " exec varnishd %s -d -n %s -i %s", v->jail, v->workdir, v->name); if (macro_isdef(NULL, "varnishd_args_prepend")) { VSB_putc(vsb, ' '); macro_cat(v->vl, vsb, "varnishd_args_prepend", NULL); } VSB_cat(vsb, VSB_data(params_vsb)); if (leave_temp) { VSB_cat(vsb, " -p debug=+vcl_keep"); VSB_cat(vsb, " -p debug=+vmod_so_keep"); VSB_cat(vsb, " -p debug=+vsm_keep"); } VSB_cat(vsb, " -l 2m"); VSB_cat(vsb, " -p auto_restart=off"); VSB_cat(vsb, " -p syslog_cli_traffic=off"); VSB_cat(vsb, " -p thread_pool_min=10"); VSB_cat(vsb, " -p debug=+vtc_mode"); VSB_cat(vsb, " -p vsl_mask=+Debug,+H2RxHdr,+H2RxBody"); VSB_cat(vsb, " -p h2_initial_window_size=1m"); VSB_cat(vsb, " -p h2_rx_window_low_water=64k"); if (!v->has_a_arg) { VSB_printf(vsb, " -a '%s'", default_listen_addr); if (v->proto != NULL) VSB_printf(vsb, ",%s", v->proto); } VSB_printf(vsb, " -M '%s %s'", abuf, pbuf); VSB_printf(vsb, " -P %s/varnishd.pid", v->workdir); if (vmod_path != NULL) VSB_printf(vsb, " -p vmod_path=%s", vmod_path); VSB_printf(vsb, " %s", VSB_data(v->args)); if (macro_isdef(NULL, "varnishd_args_append")) { VSB_putc(vsb, ' '); macro_cat(v->vl, vsb, "varnishd_args_append", NULL); } AZ(VSB_finish(vsb)); vtc_log(v->vl, 3, "CMD: %s", VSB_data(vsb)); vsb1 = macro_expand(v->vl, VSB_data(vsb)); AN(vsb1); VSB_destroy(&vsb); vsb = vsb1; vtc_log(v->vl, 3, "CMD: %s", VSB_data(vsb)); AZ(pipe(&v->fds[0])); AZ(pipe(&v->fds[2])); v->pid = fork(); assert(v->pid >= 0); if (v->pid == 0) { AZ(dup2(v->fds[0], 0)); assert(dup2(v->fds[3], 1) == 1); assert(dup2(1, 2) == 2); closefd(&v->fds[0]); closefd(&v->fds[1]); closefd(&v->fds[2]); closefd(&v->fds[3]); VSUB_closefrom(STDERR_FILENO + 1); AZ(execl("/bin/sh", "/bin/sh", "-c", VSB_data(vsb), (char*)0)); exit(1); } else { vtc_log(v->vl, 3, "PID: %ld", (long)v->pid); macro_def(v->vl, v->name, "pid", "%ld", (long)v->pid); macro_def(v->vl, v->name, "name", "%s", v->workdir); } closefd(&v->fds[0]); closefd(&v->fds[3]); v->fds[0] = v->fds[2]; v->fds[2] = v->fds[3] = -1; VSB_destroy(&vsb); PTOK(pthread_create(&v->tp, NULL, varnish_thread, v)); /* Wait for the varnish to call home */ memset(fd, 0, sizeof fd); fd[0].fd = asock; fd[0].events = POLLIN; fd[1].fd = v->fds[1]; fd[1].events = POLLIN; fd[2].fd = v->fds[2]; fd[2].events = POLLIN; i = poll(fd, 2, (int)(vtc_maxdur * 1000 / 3)); vtc_log(v->vl, 4, "CLIPOLL %d 0x%x 0x%x 0x%x", i, fd[0].revents, fd[1].revents, fd[2].revents); if (i == 0) vtc_fatal(v->vl, "FAIL timeout waiting for CLI connection"); if (fd[1].revents & POLLHUP) vtc_fatal(v->vl, "FAIL debug pipe closed"); if (!(fd[0].revents & POLLIN)) vtc_fatal(v->vl, "FAIL CLI connection wait 
failure"); nfd = accept(asock, NULL, NULL); closefd(&asock); if (nfd < 0) vtc_fatal(v->vl, "FAIL no CLI connection accepted"); v->cli_fd = nfd; vtc_log(v->vl, 3, "CLI connection fd = %d", v->cli_fd); assert(v->cli_fd >= 0); /* Receive the banner or auth response */ u = varnish_ask_cli(v, NULL, &r); if (vtc_error) return; if (u != CLIS_AUTH) vtc_fatal(v->vl, "CLI auth demand expected: %u %s", u, r); bprintf(abuf, "%s/_.secret", v->workdir); nfd = open(abuf, O_RDONLY); assert(nfd >= 0); assert(sizeof abuf >= CLI_AUTH_RESPONSE_LEN + 7); bstrcpy(abuf, "auth "); VCLI_AuthResponse(nfd, r, abuf + 5); closefd(&nfd); free(r); r = NULL; strcat(abuf, "\n"); u = varnish_ask_cli(v, abuf, &r); if (vtc_error) return; if (u != CLIS_OK) vtc_fatal(v->vl, "CLI auth command failed: %u %s", u, r); free(r); v->vsm_vsc = VSM_New(); AN(v->vsm_vsc); v->vsc = VSC_New(); AN(v->vsc); assert(VSM_Arg(v->vsm_vsc, 'n', v->workdir) > 0); AZ(VSM_Attach(v->vsm_vsc, -1)); v->vsm_vsl = VSM_New(); assert(VSM_Arg(v->vsm_vsl, 'n', v->workdir) > 0); AZ(VSM_Attach(v->vsm_vsl, -1)); PTOK(pthread_create(&v->tp_vsl, NULL, varnishlog_thread, v)); } #define VARNISH_LAUNCH(v) \ do { \ CHECK_OBJ_NOTNULL(v, VARNISH_MAGIC); \ if (v->cli_fd < 0) \ varnish_launch(v); \ if (vtc_error) \ return; \ } while (0) /********************************************************************** * Start a Varnish */ static void varnish_listen(const struct varnish *v, char *la) { const char *a, *p, *n, *n2; char m[64], s[256]; unsigned first; n2 = ""; first = 1; while (*la != '\0') { n = la; la = strchr(la, ' '); AN(la); *la = '\0'; a = ++la; la = strchr(la, ' '); AN(la); *la = '\0'; p = ++la; la = strchr(la, '\n'); AN(la); *la = '\0'; la++; AN(*n); AN(*a); AN(*p); if (*p == '-') { bprintf(s, "%s", a); a = "0.0.0.0"; p = "0"; } else if (strchr(a, ':')) { bprintf(s, "[%s]:%s", a, p); } else { bprintf(s, "%s:%s", a, p); } if (first) { vtc_log(v->vl, 2, "Listen on %s %s", a, p); macro_def(v->vl, v->name, "addr", "%s", a); macro_def(v->vl, v->name, "port", "%s", p); macro_def(v->vl, v->name, "sock", "%s", s); first = 0; } if (!strcmp(n, n2)) continue; bprintf(m, "%s_addr", n); macro_def(v->vl, v->name, m, "%s", a); bprintf(m, "%s_port", n); macro_def(v->vl, v->name, m, "%s", p); bprintf(m, "%s_sock", n); macro_def(v->vl, v->name, m, "%s", s); n2 = n; } } static void varnish_start(struct varnish *v) { enum VCLI_status_e u; char *resp = NULL; VARNISH_LAUNCH(v); vtc_log(v->vl, 2, "Start"); u = varnish_ask_cli(v, "start", &resp); if (vtc_error) return; if (u != CLIS_OK) vtc_fatal(v->vl, "CLI start command failed: %u %s", u, resp); wait_running(v); free(resp); resp = NULL; u = varnish_ask_cli(v, "debug.xid 1000", &resp); if (vtc_error) return; if (u != CLIS_OK) vtc_fatal(v->vl, "CLI debug.xid command failed: %u %s", u, resp); free(resp); resp = NULL; u = varnish_ask_cli(v, "debug.listen_address", &resp); if (vtc_error) return; if (u != CLIS_OK) vtc_fatal(v->vl, "CLI debug.listen_address command failed: %u %s", u, resp); varnish_listen(v, resp); free(resp); /* Wait for vsl logging to get underway */ while (v->vsl_rec == 0) VTIM_sleep(.1); } /********************************************************************** * Stop a Varnish */ static void varnish_stop(struct varnish *v) { VARNISH_LAUNCH(v); vtc_log(v->vl, 2, "Stop"); (void)varnish_ask_cli(v, "stop", NULL); wait_stopped(v); } /********************************************************************** * Cleanup */ static void varnish_cleanup(struct varnish *v) { void *p; /* Close the CLI connection */ closefd(&v->cli_fd); 
/* Close the STDIN connection. */ closefd(&v->fds[1]); /* Wait until STDOUT+STDERR closes */ PTOK(pthread_join(v->tp, &p)); closefd(&v->fds[0]); /* Pick up the VSL thread */ PTOK(pthread_join(v->tp_vsl, &p)); vtc_wait4(v->vl, v->pid, v->expect_exit, 0, 0); v->pid = 0; } /********************************************************************** * Wait for a Varnish */ static void varnish_wait(struct varnish *v) { if (v->cli_fd < 0) return; vtc_log(v->vl, 2, "Wait"); if (!vtc_error) { /* Do a backend.list to log if child is still running */ (void)varnish_ask_cli(v, "backend.list", NULL); } /* Then stop it */ varnish_stop(v); if (varnish_ask_cli(v, "panic.show", NULL) != CLIS_CANT) vtc_fatal(v->vl, "Unexpected panic"); varnish_cleanup(v); } /********************************************************************** * Ask a CLI JSON question */ static void varnish_cli_json(struct varnish *v, const char *cli) { enum VCLI_status_e u; char *resp = NULL; const char *errptr; struct vjsn *vj; VARNISH_LAUNCH(v); u = varnish_ask_cli(v, cli, &resp); vtc_log(v->vl, 2, "CLI %03u <%s>", u, cli); if (u != CLIS_OK) vtc_fatal(v->vl, "FAIL CLI response %u expected %u", u, CLIS_OK); vj = vjsn_parse(resp, &errptr); if (vj == NULL) vtc_fatal(v->vl, "FAIL CLI, not good JSON: %s", errptr); vjsn_delete(&vj); free(resp); } /********************************************************************** * Ask a CLI question */ static void varnish_cli(struct varnish *v, const char *cli, unsigned exp, const char *re) { enum VCLI_status_e u; struct vsb vsb[1]; vre_t *vre = NULL; char *resp = NULL, errbuf[VRE_ERROR_LEN]; int err, erroff; VARNISH_LAUNCH(v); if (re != NULL) { vre = VRE_compile(re, 0, &err, &erroff, 1); if (vre == NULL) { AN(VSB_init(vsb, errbuf, sizeof errbuf)); AZ(VRE_error(vsb, err)); AZ(VSB_finish(vsb)); VSB_fini(vsb); vtc_fatal(v->vl, "Illegal regexp: %s (@%d)", errbuf, erroff); } } u = varnish_ask_cli(v, cli, &resp); vtc_log(v->vl, 2, "CLI %03u <%s>", u, cli); if (exp != 0 && exp != (unsigned)u) vtc_fatal(v->vl, "FAIL CLI response %u expected %u", u, exp); if (vre != NULL) { err = VRE_match(vre, resp, 0, 0, NULL); if (err < 1) { AN(VSB_init(vsb, errbuf, sizeof errbuf)); AZ(VRE_error(vsb, err)); AZ(VSB_finish(vsb)); VSB_fini(vsb); vtc_fatal(v->vl, "Expect failed (%s)", errbuf); } VRE_free(&vre); } free(resp); } /********************************************************************** * Load a VCL program */ static void varnish_vcl(struct varnish *v, const char *vcl, int fail, char **resp) { struct vsb *vsb; enum VCLI_status_e u; VARNISH_LAUNCH(v); vsb = VSB_new_auto(); AN(vsb); VSB_printf(vsb, "vcl.inline vcl%d << %s\nvcl %.1f;\n%s\n%s\n", ++v->vcl_nbr, NONSENSE, v->syntax, vcl, NONSENSE); AZ(VSB_finish(vsb)); u = varnish_ask_cli(v, VSB_data(vsb), resp); if (u == CLIS_OK) { VSB_clear(vsb); VSB_printf(vsb, "vcl.use vcl%d", v->vcl_nbr); AZ(VSB_finish(vsb)); u = varnish_ask_cli(v, VSB_data(vsb), NULL); } if (u == CLIS_OK && fail) { VSB_destroy(&vsb); vtc_fatal(v->vl, "VCL compilation succeeded expected failure"); } else if (u != CLIS_OK && !fail) { VSB_destroy(&vsb); vtc_fatal(v->vl, "VCL compilation failed expected success"); } else if (fail) vtc_log(v->vl, 2, "VCL compilation failed (as expected)"); VSB_destroy(&vsb); } /********************************************************************** * Load a VCL program prefixed by backend decls for our servers */ static void varnish_vclbackend(struct varnish *v, const char *vcl) { struct vsb *vsb, *vsb2; enum VCLI_status_e u; VARNISH_LAUNCH(v); vsb = VSB_new_auto(); AN(vsb); 
vsb2 = VSB_new_auto(); AN(vsb2); VSB_printf(vsb2, "vcl %.1f;\n", v->syntax); cmd_server_gen_vcl(vsb2); AZ(VSB_finish(vsb2)); VSB_printf(vsb, "vcl.inline vcl%d << %s\n%s\n%s\n%s\n", ++v->vcl_nbr, NONSENSE, VSB_data(vsb2), vcl, NONSENSE); AZ(VSB_finish(vsb)); u = varnish_ask_cli(v, VSB_data(vsb), NULL); if (u != CLIS_OK) { VSB_destroy(&vsb); VSB_destroy(&vsb2); vtc_fatal(v->vl, "FAIL VCL does not compile"); } VSB_clear(vsb); VSB_printf(vsb, "vcl.use vcl%d", v->vcl_nbr); AZ(VSB_finish(vsb)); u = varnish_ask_cli(v, VSB_data(vsb), NULL); assert(u == CLIS_OK); VSB_destroy(&vsb); VSB_destroy(&vsb2); } /********************************************************************** */ struct dump_priv { const char *arg; const struct varnish *v; }; static int do_stat_dump_cb(void *priv, const struct VSC_point * const pt) { const struct varnish *v; struct dump_priv *dp; uint64_t u; if (pt == NULL) return (0); dp = priv; v = dp->v; if (strcmp(pt->ctype, "uint64_t")) return (0); u = VSC_Value(pt); if (strcmp(dp->arg, "*")) { if (fnmatch(dp->arg, pt->name, 0)) return (0); } vtc_log(v->vl, 4, "VSC %s %ju", pt->name, (uintmax_t)u); return (0); } static void varnish_vsc(struct varnish *v, const char *arg) { struct dump_priv dp; VARNISH_LAUNCH(v); memset(&dp, 0, sizeof dp); dp.v = v; dp.arg = arg; (void)VSM_Status(v->vsm_vsc); (void)VSC_Iter(v->vsc, v->vsm_vsc, do_stat_dump_cb, &dp); } /********************************************************************** * Check statistics */ struct stat_arg { const char *pattern; uintmax_t val; unsigned good; }; struct stat_priv { struct stat_arg lhs; struct stat_arg rhs; }; static int stat_match(const char *pattern, const char *name) { if (strchr(pattern, '.') == NULL) { if (fnmatch("MAIN.*", name, 0)) return (FNM_NOMATCH); name += 5; } return (fnmatch(pattern, name, 0)); } static int do_expect_cb(void *priv, const struct VSC_point * const pt) { struct stat_priv *sp = priv; if (pt == NULL) return (0); if (!sp->lhs.good && stat_match(sp->lhs.pattern, pt->name) == 0) { AZ(strcmp(pt->ctype, "uint64_t")); AN(pt->ptr); sp->lhs.val = VSC_Value(pt); sp->lhs.good = 1; } if (sp->rhs.pattern == NULL) { sp->rhs.good = 1; } else if (!sp->rhs.good && stat_match(sp->rhs.pattern, pt->name) == 0) { AZ(strcmp(pt->ctype, "uint64_t")); AN(pt->ptr); sp->rhs.val = VSC_Value(pt); sp->rhs.good = 1; } return (sp->lhs.good && sp->rhs.good); } /********************************************************************** */ static void varnish_expect(struct varnish *v, char * const *av) { struct stat_priv sp; int good, i, not; uintmax_t u; char *l, *p; VARNISH_LAUNCH(v); ZERO_OBJ(&sp, sizeof sp); l = av[0]; not = (*l == '!'); if (not) { l++; AZ(av[1]); } else { AN(av[1]); AN(av[2]); u = strtoumax(av[2], &p, 0); if (u != UINTMAX_MAX && *p == '\0') sp.rhs.val = u; else sp.rhs.pattern = av[2]; } sp.lhs.pattern = l; for (i = 0; i < 50; i++, (void)usleep(100000)) { (void)VSM_Status(v->vsm_vsc); sp.lhs.good = sp.rhs.good = 0; good = VSC_Iter(v->vsc, v->vsm_vsc, do_expect_cb, &sp); if (!good) good = -2; if (good < 0) continue; if (not) vtc_fatal(v->vl, "Found (not expected): %s", l); good = -1; if (!strcmp(av[1], "==")) good = (sp.lhs.val == sp.rhs.val); if (!strcmp(av[1], "!=")) good = (sp.lhs.val != sp.rhs.val); if (!strcmp(av[1], ">" )) good = (sp.lhs.val > sp.rhs.val); if (!strcmp(av[1], "<" )) good = (sp.lhs.val < sp.rhs.val); if (!strcmp(av[1], ">=")) good = (sp.lhs.val >= sp.rhs.val); if (!strcmp(av[1], "<=")) good = (sp.lhs.val <= sp.rhs.val); if (good == -1) vtc_fatal(v->vl, "comparison %s unknown", av[1]); 
if (good) break; } if (good == -1) { vtc_fatal(v->vl, "VSM error: %s", VSM_Error(v->vsm_vsc)); } if (good == -2) { if (not) { vtc_log(v->vl, 2, "not found (as expected): %s", l); return; } vtc_fatal(v->vl, "stats field %s unknown", sp.lhs.good ? sp.rhs.pattern : sp.lhs.pattern); } if (good == 1) { vtc_log(v->vl, 2, "as expected: %s (%ju) %s %s (%ju)", av[0], sp.lhs.val, av[1], av[2], sp.rhs.val); } else { vtc_fatal(v->vl, "Not true: %s (%ju) %s %s (%ju)", av[0], sp.lhs.val, av[1], av[2], sp.rhs.val); } } static void vsl_catchup(struct varnish *v) { int vsl_idle; VARNISH_LAUNCH(v); vsl_idle = v->vsl_idle; while (!vtc_error && vsl_idle == v->vsl_idle) VTIM_sleep(0.1); } /* SECTION: varnish varnish * * Define and interact with varnish instances. * * To define a Varnish server, you'll use this syntax:: * * varnish vNAME [-arg STRING] [-vcl STRING] [-vcl+backend STRING] * [-errvcl STRING STRING] [-jail STRING] [-proto PROXY] * * The first ``varnish vNAME`` invocation will start the varnishd master * process in the background, waiting for the ``-start`` switch to actually * start the child. * * Types used in the description below: * * PATTERN * is a 'glob' style pattern (ie: fnmatch(3)) as used in shell filename * expansion. * * Arguments: * * vNAME * Identify the Varnish server with a string, it must start with 'v'. * * \-arg STRING * Pass an argument to varnishd, for example "-h simple_list". * * If the ${varnishd_args_prepend} or ${varnishd_args_append} macros are * defined, they are expanded and inserted before / appended to the * varnishd command line as constructed by varnishtest, before the * command line itself is expanded. This enables tweaks to the varnishd * command line without editing test cases. These macros can be defined * using the ``-D`` option for varnishtest. * * \-vcl STRING * Specify the VCL to load on this Varnish instance. You'll probably * want to use multi-line strings for this ({...}). * * \-vcl+backend STRING * Do the exact same thing as -vcl, but adds the definition block of * known backends (ie. already defined). * * \-errvcl STRING1 STRING2 * Load STRING2 as VCL, expecting it to fail, and Varnish to send an * error string matching STRING1 * * \-jail STRING * Look at ``man varnishd`` (-j) for more information. * * \-proto PROXY * Have Varnish use the proxy protocol. Note that PROXY here is the * actual string. * * You can decide to start the Varnish instance and/or wait for several events:: * * varnish vNAME [-start] [-wait] [-wait-running] [-wait-stopped] * * \-start * Start the child process. * * Once successfully started, the following macros are available for * the default listen address: ``${vNAME_addr}``, ``${vNAME_port}`` * and ``${vNAME_sock}``. Additional macros are available, including * the listen address name for each address vNAME listens to, like for * example: ``${vNAME_a0_addr}``. * * \-stop * Stop the child process. * * \-syntax * Set the VCL syntax level for this command (default: 4.1) * * \-wait * Wait for that instance to terminate. * * \-wait-running * Wait for the Varnish child process to be started. * * \-wait-stopped * Wait for the Varnish child process to stop. * * \-cleanup * Once Varnish is stopped, clean everything after it. This is only used * in very few tests and you should never need it. 
* * \-expectexit NUMBER * Expect varnishd to exit(3) with this value * * Once Varnish is started, you can talk to it (as you would through * ``varnishadm``) with these additional switches:: * * varnish vNAME [-cli STRING] [-cliok STRING] [-clierr STRING] * [-clijson STRING] * * \-cli STRING|-cliok STRING|-clierr STATUS STRING|-cliexpect REGEXP STRING * All four of these will send STRING to the CLI, the only difference * is what they expect the result to be. -cli doesn't expect * anything, -cliok expects 200, -clierr expects STATUS, and * -cliexpect expects the REGEXP to match the returned response. * * \-clijson STRING * Send STRING to the CLI, expect success (CLIS_OK/200) and check * that the response is parsable JSON. * * It is also possible to interact with its shared memory (as you would * through tools like ``varnishstat``) with additional switches: * * \-expect \!PATTERN|PATTERN OP NUMBER|PATTERN OP PATTERN * Look into the VSM and make sure the first VSC counter identified by * PATTERN has a correct value. OP can be ==, >, >=, <, <=. For * example:: * * varnish v1 -expect SM?.s1.g_space > 1000000 * varnish v1 -expect cache_hit >= cache_hit_grace * * In the \! form the test fails if a counter matches PATTERN. * * The ``MAIN.`` namespace can be omitted from PATTERN. * * The test takes up to 5 seconds before timing out. * * \-vsc PATTERN * Dump VSC counters matching PATTERN. * * \-vsl_catchup * Wait until the logging thread has idled to make sure that all * the generated log is flushed */ void cmd_varnish(CMD_ARGS) { struct varnish *v, *v2; (void)priv; if (av == NULL) { /* Reset and free */ VTAILQ_FOREACH_SAFE(v, &varnishes, list, v2) { if (v->cli_fd >= 0) varnish_wait(v); VTAILQ_REMOVE(&varnishes, v, list); varnish_delete(v); } return; } AZ(strcmp(av[0], "varnish")); av++; VTC_CHECK_NAME(vl, av[0], "Varnish", 'v'); VTAILQ_FOREACH(v, &varnishes, list) if (!strcmp(v->name, av[0])) break; if (v == NULL) v = varnish_new(av[0]); av++; v->syntax = 4.1; for (; *av != NULL; av++) { if (vtc_error) break; if (!strcmp(*av, "-arg")) { AN(av[1]); AZ(v->pid); VSB_cat(v->args, " "); VSB_cat(v->args, av[1]); if (av[1][0] == '-' && av[1][1] == 'a') v->has_a_arg = 1; av++; continue; } if (!strcmp(*av, "-cleanup")) { AZ(av[1]); varnish_cleanup(v); continue; } if (!strcmp(*av, "-cli")) { AN(av[1]); varnish_cli(v, av[1], 0, NULL); av++; continue; } if (!strcmp(*av, "-clierr")) { AN(av[1]); AN(av[2]); varnish_cli(v, av[2], atoi(av[1]), NULL); av += 2; continue; } if (!strcmp(*av, "-cliexpect")) { AN(av[1]); AN(av[2]); varnish_cli(v, av[2], 0, av[1]); av += 2; continue; } if (!strcmp(*av, "-clijson")) { AN(av[1]); varnish_cli_json(v, av[1]); av++; continue; } if (!strcmp(*av, "-cliok")) { AN(av[1]); varnish_cli(v, av[1], (unsigned)CLIS_OK, NULL); av++; continue; } if (!strcmp(*av, "-errvcl")) { char *r = NULL; AN(av[1]); AN(av[2]); varnish_vcl(v, av[2], 1, &r); if (strstr(r, av[1]) == NULL) vtc_fatal(v->vl, "Did not find expected string: (\"%s\")", av[1]); else vtc_log(v->vl, 3, "Found expected string: (\"%s\")", av[1]); free(r); av += 2; continue; } if (!strcmp(*av, "-expect")) { av++; varnish_expect(v, av); av += 2; continue; } if (!strcmp(*av, "-expectexit")) { v->expect_exit = strtoul(av[1], NULL, 0); av++; continue; } if (!strcmp(*av, "-jail")) { AN(av[1]); AZ(v->pid); REPLACE(v->jail, av[1]); av++; continue; } if (!strcmp(*av, "-proto")) { AN(av[1]); AZ(v->pid); REPLACE(v->proto, av[1]); av++; continue; } if (!strcmp(*av, "-start")) { varnish_start(v); continue; } if (!strcmp(*av, "-stop")) { 
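			/*
			 * Stop the running child process; the varnishd
			 * master process and its CLI connection stay up, so
			 * the instance can be started again with -start.
			 */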
varnish_stop(v); continue; } if (!strcmp(*av, "-syntax")) { AN(av[1]); v->syntax = strtod(av[1], NULL); av++; continue; } if (!strcmp(*av, "-vcl")) { AN(av[1]); varnish_vcl(v, av[1], 0, NULL); av++; continue; } if (!strcmp(*av, "-vcl+backend")) { AN(av[1]); varnish_vclbackend(v, av[1]); av++; continue; } if (!strcmp(*av, "-vsc")) { AN(av[1]); varnish_vsc(v, av[1]); av++; continue; } if (!strcmp(*av, "-wait-stopped")) { wait_stopped(v); continue; } if (!strcmp(*av, "-wait-running")) { wait_running(v); continue; } if (!strcmp(*av, "-wait")) { varnish_wait(v); continue; } if (!strcmp(*av, "-vsl_catchup")) { vsl_catchup(v); continue; } vtc_fatal(v->vl, "Unknown varnish argument: %s", *av); } } #endif /* VTEST_WITH_VTC_VARNISH */ varnish-7.5.0/bin/varnishtop/000077500000000000000000000000001457605730600161575ustar00rootroot00000000000000varnish-7.5.0/bin/varnishtop/Makefile.am000066400000000000000000000004631457605730600202160ustar00rootroot00000000000000# AM_CPPFLAGS = \ -I$(top_srcdir)/include \ -I$(top_builddir)/include \ @CURSES_CFLAGS@ bin_PROGRAMS = varnishtop varnishtop_SOURCES = \ varnishtop.c \ varnishtop_options.h varnishtop_LDADD = \ $(top_builddir)/lib/libvarnishapi/libvarnishapi.la \ @CURSES_LIBS@ ${RT_LIBS} ${LIBM} ${PTHREAD_LIBS} varnish-7.5.0/bin/varnishtop/flint.lnt000066400000000000000000000014721457605730600200160ustar00rootroot00000000000000// Copyright (c) 2010-2018 Varnish Software AS // SPDX-License-Identifier: BSD-2-Clause // See LICENSE file for full text of license -efile(451, "varnishtop_options.h") -e712 // 14 Info 712 Loss of precision (___) (___ to ___) -e747 // 16 Info 747 Significant prototype coercion (___) ___ to ___ -e763 // Redundant declaration for symbol '...' previously declared -e732 // Loss of sign (arg. no. 2) (int to unsigned -e713 // Loss of precision (assignment) (unsigned long long to long long) -sem(t_order_VRBT_INSERT, custodial(2)) -sem(t_key_VRBT_INSERT, custodial(2)) /////////////////////////////////////////////////////////////////////// // Varnishstat specific /// -e850 // for loop index variable '___' whose type category is '___' is modified in body of the for loop that began at '___' varnish-7.5.0/bin/varnishtop/flint.sh000066400000000000000000000003701457605730600176270ustar00rootroot00000000000000#!/bin/sh # # Copyright (c) 2010-2021 Varnish Software AS # SPDX-License-Identifier: BSD-2-Clause # See LICENSE file for full text of license FLOPS=' *.c ../../lib/libvarnishapi/flint.lnt ../../lib/libvarnishapi/*.c ' ../../tools/flint_skel.sh varnish-7.5.0/bin/varnishtop/varnishtop.c000066400000000000000000000210761457605730600205260ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * Author: Dag-Erling Smørgrav * Author: Guillaume Quintard * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Log tailer for Varnish */ #include "config.h" #include #include #include #include #include #include #include #include #define VOPT_DEFINITION #define VOPT_INC "varnishtop_options.h" #include "vdef.h" #include "vcurses.h" #include "vapi/vsl.h" #include "vapi/vsm.h" #include "vapi/voptget.h" #include "vas.h" #include "vtree.h" #include "vut.h" #include "vapi/vsig.h" #if 0 #define AC(x) assert((x) != ERR) #else #define AC(x) x #endif static struct VUT *vut; struct top { uint8_t tag; const char *rec_data; char *rec_buf; int clen; unsigned hash; VRBT_ENTRY(top) e_order; VRBT_ENTRY(top) e_key; double count; }; static unsigned period = 60; /* seconds */ static int end_of_file = 0; static unsigned ntop; static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER; static int f_flag = 0; static unsigned maxfieldlen = 0; static const char *ident; static VRBT_HEAD(t_order, top) h_order = VRBT_INITIALIZER(&h_order); static VRBT_HEAD(t_key, top) h_key = VRBT_INITIALIZER(&h_key); static inline int cmp_key(const struct top *a, const struct top *b) { if (a->hash != b->hash) return (a->hash - b->hash); if (a->tag != b->tag) return (a->tag - b->tag); if (a->clen != b->clen) return (a->clen - b->clen); return (memcmp(a->rec_data, b->rec_data, a->clen)); } static inline int cmp_order(const struct top *a, const struct top *b) { if (a->count > b->count) return (-1); else if (a->count < b->count) return (1); return (cmp_key(a, b)); } VRBT_GENERATE_INSERT_COLOR(t_order, top, e_order, static) VRBT_GENERATE_INSERT_FINISH(t_order, top, e_order, static) VRBT_GENERATE_INSERT(t_order, top, e_order, cmp_order, static) VRBT_GENERATE_REMOVE_COLOR(t_order, top, e_order, static) VRBT_GENERATE_MINMAX(t_order, top, e_order, static) VRBT_GENERATE_NEXT(t_order, top, e_order, static) VRBT_GENERATE_REMOVE(t_order, top, e_order, static) VRBT_GENERATE_INSERT_COLOR(t_key, top, e_key, static) VRBT_GENERATE_REMOVE_COLOR(t_key, top, e_key, static) VRBT_GENERATE_INSERT_FINISH(t_key, top, e_key, static) VRBT_GENERATE_INSERT(t_key, top, e_key, cmp_key, static) VRBT_GENERATE_REMOVE(t_key, top, e_key, static) VRBT_GENERATE_FIND(t_key, top, e_key, cmp_key, static) static int v_matchproto_(VSLQ_dispatch_f) accumulate(struct VSL_data *vsl, struct VSL_transaction * const pt[], void *priv) { struct top *tp, t; unsigned int u; unsigned tag; const char *b, *e, *p; unsigned len; struct VSL_transaction *tr; (void)priv; for (tr = pt[0]; tr != NULL; tr = *++pt) { while ((1 == VSL_Next(tr->c))) { tag = VSL_TAG(tr->c->rec.ptr); if (VSL_tagflags[tag]) continue; if (!VSL_Match(vsl, tr->c)) continue; b = VSL_CDATA(tr->c->rec.ptr); e = b + VSL_LEN(tr->c->rec.ptr); u = 0; for (p = b; p <= e; p++) { if (*p == '\0') break; if (f_flag && (*p == ':' || isspace(*p))) break; u += *p; } len = p - b; if (len == 0) continue; 
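				/*
				 * The loop above computed "u" as a simple
				 * additive hash over the first "len" bytes of
				 * the record payload, stopping at a NUL byte
				 * or, with -f, at the first ':' or whitespace
				 * character.
				 */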
t.hash = u; t.tag = tag; t.clen = len; t.rec_data = VSL_CDATA(tr->c->rec.ptr); PTOK(pthread_mutex_lock(&mtx)); tp = VRBT_FIND(t_key, &h_key, &t); if (tp) { VRBT_REMOVE(t_order, &h_order, tp); tp->count += 1.0; /* Reinsert to rebalance */ VRBT_INSERT(t_order, &h_order, tp); } else { ntop++; tp = calloc(1, sizeof *tp); assert(tp != NULL); tp->hash = u; tp->count = 1.0; tp->clen = len; tp->tag = tag; tp->rec_buf = strdup(t.rec_data); tp->rec_data = tp->rec_buf; AN(tp->rec_data); VRBT_INSERT(t_key, &h_key, tp); VRBT_INSERT(t_order, &h_order, tp); } PTOK(pthread_mutex_unlock(&mtx)); } } return (0); } static void update(unsigned p) { struct top *tp, *tp2; int l, len; double t = 0; static time_t last = 0; static unsigned n = 0; const char *q; time_t now; now = time(NULL); if (now == last) return; last = now; l = 1; if (n < p) n++; AC(erase()); q = ident; len = COLS - strlen(q); if (end_of_file) AC(mvprintw(0, len - (1 + 6), "%s (EOF)", q)); else AC(mvprintw(0, len - 1, "%s", q)); AC(mvprintw(0, 0, "list length %u", ntop)); for (tp = VRBT_MIN(t_order, &h_order); tp != NULL; tp = tp2) { tp2 = VRBT_NEXT(t_order, &h_order, tp); if (++l < LINES) { len = vmin(tp->clen, COLS - 20); AC(mvprintw(l, 0, "%9.2f %-*.*s %*.*s\n", tp->count, maxfieldlen, maxfieldlen, VSL_tags[tp->tag], len, len, tp->rec_data)); t = tp->count; } if (end_of_file) continue; tp->count += (1.0/3.0 - tp->count) / (double)n; if (tp->count * 10 < t || l > LINES * 10) { VRBT_REMOVE(t_key, &h_key, tp); VRBT_REMOVE(t_order, &h_order, tp); free(tp->rec_buf); free(tp); ntop--; } } AC(refresh()); } static void * do_curses(void *arg) { int i; (void)arg; for (i = 0; i < 256; i++) { if (VSL_tags[i] == NULL) continue; if (maxfieldlen < strlen(VSL_tags[i])) maxfieldlen = strlen(VSL_tags[i]); } (void)initscr(); AC(raw()); AC(noecho()); AC(nonl()); AC(intrflush(stdscr, FALSE)); (void)curs_set(0); AC(erase()); timeout(1000); while (!VSIG_int && !VSIG_term && !VSIG_hup) { PTOK(pthread_mutex_lock(&mtx)); update(period); PTOK(pthread_mutex_unlock(&mtx)); switch (getch()) { case ERR: break; #ifdef KEY_RESIZE case KEY_RESIZE: AC(erase()); break; #endif case '\014': /* Ctrl-L */ case '\024': /* Ctrl-T */ AC(redrawwin(stdscr)); AC(refresh()); break; case '\032': /* Ctrl-Z */ AC(endwin()); AZ(raise(SIGTSTP)); break; case '\003': /* Ctrl-C */ case '\021': /* Ctrl-Q */ case 'Q': case 'q': AZ(raise(SIGINT)); break; default: AC(beep()); break; } } AC(endwin()); return (NULL); } static void dump(void) { struct top *tp, *tp2; for (tp = VRBT_MIN(t_order, &h_order); tp != NULL; tp = tp2) { tp2 = VRBT_NEXT(t_order, &h_order, tp); printf("%9.2f %s %*.*s\n", tp->count, VSL_tags[tp->tag], tp->clen, tp->clen, tp->rec_data); } } int main(int argc, char **argv) { int o, once = 0; pthread_t thr; char *e = NULL; vut = VUT_InitProg(argc, argv, &vopt_spec); AN(vut); while ((o = getopt(argc, argv, vopt_spec.vopt_optstring)) != -1) { switch (o) { case '1': AN(VUT_Arg(vut, 'd', NULL)); once = 1; break; case 'f': f_flag = 1; break; case 'h': /* Usage help */ VUT_Usage(vut, &vopt_spec, 0); case 'p': errno = 0; e = NULL; period = strtoul(optarg, &e, 0); if (errno != 0 || e == NULL || *e != '\0') { fprintf(stderr, "Syntax error, %s is not a number", optarg); exit(1); } break; default: if (!VUT_Arg(vut, o, optarg)) VUT_Usage(vut, &vopt_spec, 1); } } if (optind != argc) VUT_Usage(vut, &vopt_spec, 1); VUT_Setup(vut); if (vut->vsm) ident = VSM_Dup(vut->vsm, "Arg", "-i"); else ident = strdup(""); AN(ident); vut->dispatch_f = accumulate; vut->dispatch_priv = NULL; if (once) { 
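		/*
		 * One-shot mode (-1): run the log dispatch loop without the
		 * curses UI and dump the accumulated table to stdout once it
		 * returns.
		 */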
(void)VUT_Main(vut); dump(); } else { PTOK(pthread_create(&thr, NULL, do_curses, NULL)); (void)VUT_Main(vut); end_of_file = 1; PTOK(pthread_join(thr, NULL)); } VUT_Fini(&vut); return (0); } varnish-7.5.0/bin/varnishtop/varnishtop_options.h000066400000000000000000000054571457605730600223130ustar00rootroot00000000000000/*- * Copyright (c) 2014-2015 Varnish Software AS * All rights reserved. * * Author: Martin Blix Grydeland * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Option definitions for varnishtop */ #include "vapi/vapi_options.h" #include "vut_options.h" #define TOP_OPT_1 \ VOPT("1", "[-1]", "Run once", \ "Instead of a continuously updated display, print the" \ " statistics once and exit. Implies ``-d``." \ ) #define TOP_OPT_d \ VOPT("d", "[-d]", "Process old log entries", \ "Process log records at the head of the log." \ ) #define TOP_OPT_f \ VOPT("f", "[-f]", "First field only", \ "Sort and group only on the first field of each log entry." \ " For log entries in the form ``prefix: value`` it is the" \ " prefix without the colon that is sorted and grouped." \ " This is useful when displaying e.g. ReqStart entries," \ " where the first field is the client IP address." \ ) #define TOP_OPT_p \ VOPT("p:", "[-p ]", "Sampling period", \ "Specified the number of seconds to measure over, the" \ " default is 60 seconds. The first number in the list is" \ " the average number of requests seen over this time" \ " period." \ " This option has no effect if -1 option is also used." 
\ ) TOP_OPT_1 VSL_OPT_b VSL_OPT_c VSL_OPT_C TOP_OPT_d VSL_OPT_E TOP_OPT_f VUT_OPT_g VUT_OPT_h VSL_OPT_i VSL_OPT_I VSL_OPT_L VUT_OPT_n TOP_OPT_p VUT_OPT_Q VUT_OPT_q VUT_OPT_r VUT_OPT_t VSL_OPT_T VSL_OPT_x VSL_OPT_X VUT_GLOBAL_OPT_V varnish-7.5.0/configure.ac000066400000000000000000000621651457605730600155120ustar00rootroot00000000000000AC_PREREQ([2.69]) AC_COPYRIGHT([Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2024 Varnish Software Copyright 2010-2024 UPLEX - Nils Goroll Systemoptimierung]) AC_REVISION([$Id$]) AC_INIT([Varnish],[7.5.0],[varnish-dev@varnish-cache.org]) AC_CONFIG_SRCDIR(include/miniobj.h) AC_CONFIG_HEADERS([config.h]) AC_CONFIG_MACRO_DIR([m4]) AC_CONFIG_AUX_DIR([build-aux]) AC_USE_SYSTEM_EXTENSIONS PACKAGE_BRANCH=${PACKAGE_VERSION%.*} AC_SUBST([PACKAGE_BRANCH]) AC_DEFINE_UNQUOTED([PACKAGE_BRANCH], ["$PACKAGE_BRANCH"], [Define the branch of this package.]) CFLAGS="$CFLAGS -DZ_PREFIX" # save command line CFLAGS for use in VCC_CC (to pass through things like -m64) # and make distcheck configure OCFLAGS="$CFLAGS" EXTCFLAGS="$CFLAGS" AC_SUBST(EXTCFLAGS) AC_CANONICAL_TARGET AC_LANG(C) AM_MAINTAINER_MODE([disable]) AM_INIT_AUTOMAKE([1.13 foreign color-tests parallel-tests subdir-objects]) AM_EXTRA_RECURSIVE_TARGETS([recheck]) AM_SILENT_RULES([yes]) AC_DISABLE_STATIC LT_INIT # Checks for programs. AC_PROG_CC # XXX remove the next line when AC_PREREQ >= 2.70 and # replace with check of $ac_prog_cc_stdc AC_PROG_CC_C99 AX_PTHREAD(,[AC_MSG_ERROR([Could not configure pthreads support])]) LIBS="$PTHREAD_LIBS $LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" CC="$PTHREAD_CC" AC_PROG_INSTALL AC_ARG_WITH([rst2man], AS_HELP_STRING([--with-rst2man=PATH], [Location of rst2man (auto)]), [RST2MAN="$withval"], [AC_CHECK_PROGS(RST2MAN, [rst2man-3.6 rst2man-3 rst2man rst2man.py], [no])]) if test "x$RST2MAN" = "xno"; then AC_MSG_ERROR( [rst2man is needed to build Varnish, please install python3-docutils.]) fi AC_ARG_WITH([sphinx-build], AS_HELP_STRING([--with-sphinx-build=PATH], [Location of sphinx-build (auto)]), [SPHINX="$withval"], [AC_CHECK_PROGS(SPHINX, [sphinx-build-3.6 sphinx-build-3 sphinx-build], [no])]) if test "x$SPHINX" = "xno"; then AC_MSG_ERROR( [sphinx-build is needed to build Varnish, please install python3-sphinx.]) fi AC_ARG_WITH([rst2html], AS_HELP_STRING([--with-rst2html=PATH], [Location of rst2html (auto)]), [RST2HTML="$withval"], [AC_CHECK_PROGS(RST2HTML, [rst2html-3.6 rst2html-3 rst2html rst2html.py], "no")]) if test "x$RST2HTML" = "xno"; then AC_MSG_ERROR( [rst2html not found - (Weird, we found rst2man?!)]) fi AC_ARG_VAR([DOT], [The dot program from graphviz to build SVG graphics]) AM_MISSING_PROG([DOT], [dot]) AC_CHECK_PROGS([DOT], [dot]) # Define VMOD flags _VARNISH_VMOD_LDFLAGS # Check for python. _VARNISH_CHECK_PYTHON # Check for libraries. 
_VARNISH_SEARCH_LIBS(pthread, pthread_create, [thr pthread c_r]) _VARNISH_CHECK_LIB(rt, clock_gettime) _VARNISH_CHECK_LIB(dl, dlopen) _VARNISH_CHECK_LIB(socket, socket) _VARNISH_CHECK_LIB(nsl, getaddrinfo) AC_SUBST(NET_LIBS, "${SOCKET_LIBS} ${NSL_LIBS}") # Userland slab allocator from Solaris, ported to other systems AC_CHECK_HEADERS([umem.h]) # More portable vmb.h AC_CHECK_HEADERS([stdatomic.h]) # XXX: This _may_ be for OS/X LT_LIB_M AC_SUBST(LIBM) m4_ifndef([PKG_PROG_PKG_CONFIG], [m4_fatal([pkg.m4 missing, please install pkg-config])]) PKG_PROG_PKG_CONFIG if test -n $PKG_CONFIG; then PKG_CHECK_MODULES([PCRE2], [libpcre2-8]) else AC_CHECK_PROG(PCRE2_CONFIG, pcre2-config, pcre2-config) AC_ARG_WITH(pcre2-config, AS_HELP_STRING([--with-pcre2-config=PATH], [Location of PCRE2 pcre2-config (auto)]), [pcre2_config="$withval"], [pcre2_config=""]) if test "x$pcre2_config" != "x" ; then AC_MSG_CHECKING(for $pcre2_config) if test -f $pcre2_config ; then PCRE2_CONFIG=$pcre2_config AC_MSG_RESULT(yes) else AC_MSG_RESULT(no - searching PATH) fi fi if test "x$PCRE2_CONFIG" = "x"; then AC_CHECK_PROGS(PCRE2_CONFIG, pcre2-config) fi PCRE2_CFLAGS=`$PCRE2_CONFIG --cflags` PCRE2_LIBS=`$PCRE2_CONFIG --libs8` fi AC_SUBST(PCRE2_CFLAGS) AC_SUBST(PCRE2_LIBS) save_LIBS="${LIBS}" LIBS="${LIBS} ${PCRE2_LIBS}" AC_CHECK_FUNCS([pcre2_set_depth_limit_8], [ AC_DEFINE([HAVE_PCRE2_SET_DEPTH_LIMIT], [1], [Use pcre2_set_depth_limit()]) ]) LIBS="${save_LIBS}" # --enable-pcre2-jit AC_ARG_ENABLE(pcre2-jit, AS_HELP_STRING([--enable-pcre2-jit], [use the PCRE2 JIT compiler (default is YES)]), [], [enable_pcre2_jit=yes]) if test "$enable_pcre2_jit" = yes; then AC_DEFINE([USE_PCRE2_JIT], [1], [Use the PCRE2 JIT compiler]) fi AC_CHECK_HEADERS([edit/readline/readline.h], [AC_DEFINE([HAVE_LIBEDIT], [1], [Define if we have libedit]) LIBEDIT_LIBS="-ledit"], [PKG_CHECK_MODULES([LIBEDIT], [libedit], [ # having the module does not imply having the header AC_SUBST(LIBEDIT_CFLAGS) AC_SUBST(LIBEDIT_LIBS) save_CFLAGS="${CFLAGS}" CFLAGS="${CFLAGS} ${LIBEDIT_CFLAGS}" AC_CHECK_HEADERS([editline/readline.h], [AC_DEFINE([HAVE_LIBEDIT], [1], [Define if we have libedit])], [AC_MSG_ERROR([Found libedit, but header file is missing. Hint: Install dev package?])]) CFLAGS="${save_CFLAGS}" ], [ # AX_LIB_READLINE overwrites LIBS which leads to every binary getting # linked against libreadline uselessly. So we re-use LIBEDIT_LIBS which # we have for libedit to add the lib specifically where needed save_LIBS="${LIBS}" AX_LIB_READLINE LIBS="${save_LIBS}" if test "$ax_cv_lib_readline" = "no"; then AC_MSG_ERROR([neither libedit nor another readline compatible library found]) fi if test "x$ax_cv_lib_readline_history" != "xyes"; then AC_MSG_ERROR([need readline history support]) fi LIBEDIT_LIBS="$ax_cv_lib_readline" ]) ]) PKG_CHECK_MODULES([CURSES], [ncursesw], [], [ PKG_CHECK_MODULES([CURSES], [ncurses], [], [ PKG_CHECK_MODULES([CURSES], [curses], [], [ AX_WITH_CURSES if test "x$ax_cv_curses" != xyes; then AC_MSG_ERROR([requires an X/Open-compatible Curses library]) fi CURSES_LIBS="$CURSES_LIB" ]) ]) ]) AC_SUBST([CURSES_CFLAGS]) AC_SUBST([CURSES_LIBS]) save_CFLAGS="${CFLAGS}" CFLAGS="${CFLAGS} ${CURSES_CFLAGS}" AC_CHECK_HEADERS([ncursesw/curses.h ncursesw.h ncurses/curses.h ncurses.h curses.h]) CFLAGS="${save_CFLAGS}" # Checks for header files. 
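# Each header found by the AC_CHECK_HEADERS calls below results in a
# HAVE_<HEADER>_H define in config.h (for example HAVE_SYS_ENDIAN_H for
# sys/endian.h), which the C sources can test with #ifdef.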
AC_CHECK_HEADERS([sys/endian.h]) AC_CHECK_HEADERS([sys/filio.h]) AC_CHECK_HEADERS([sys/mount.h], [], [], [#include ]) AC_CHECK_HEADERS([sys/personality.h]) AC_CHECK_HEADERS([sys/statvfs.h]) AC_CHECK_HEADERS([sys/vfs.h]) AC_CHECK_HEADERS([endian.h]) AC_CHECK_HEADERS([pthread_np.h], [], [], [#include ]) AC_CHECK_HEADERS([priv.h]) AC_CHECK_HEADERS([fnmatch.h], [], [AC_MSG_ERROR([fnmatch.h is required])]) # Checks for library functions. AC_CHECK_FUNCS([nanosleep]) AC_CHECK_FUNCS([setppriv]) AC_CHECK_FUNCS([fallocate]) AC_CHECK_FUNCS([closefrom]) AC_CHECK_FUNCS([sigaltstack]) AC_CHECK_FUNCS([getpeereid]) AC_CHECK_FUNCS([getpeerucred]) AC_CHECK_FUNCS([fnmatch], [], [AC_MSG_ERROR([fnmatch(3) is required])]) save_LIBS="${LIBS}" LIBS="${PTHREAD_LIBS}" AC_CHECK_FUNCS([pthread_set_name_np]) AC_CHECK_FUNCS([pthread_setname_np]) AC_CHECK_FUNCS([pthread_mutex_isowned_np]) AC_CHECK_FUNCS([pthread_getattr_np]) LIBS="${save_LIBS}" AC_CHECK_DECL([__SUNPRO_C], [SUNCC="yes"], [SUNCC="no"]) AC_ARG_ENABLE(ubsan, AS_HELP_STRING([--enable-ubsan], [enable undefined behavior sanitizer (default is NO)]), [ AC_DEFINE([ENABLE_UBSAN], [1], [Define to 1 if UBSAN is enabled.]) UBSAN_FLAGS=-fsanitize=undefined ]) AC_ARG_ENABLE(tsan, AS_HELP_STRING([--enable-tsan], [enable thread sanitizer (default is NO)]), [ AC_DEFINE([ENABLE_TSAN], [1], [Define to 1 if TSAN is enabled.]) TSAN_FLAGS=-fsanitize=thread ]) AC_ARG_ENABLE(asan, AS_HELP_STRING([--enable-asan], [enable address sanitizer (default is NO)]), [ AC_DEFINE([ENABLE_ASAN], [1], [Define to 1 if ASAN sanitizer is enabled.]) ASAN_FLAGS=-fsanitize=address ]) if test -n "$ASAN_FLAGS"; then AX_CHECK_COMPILE_FLAG( [$ASAN_FLAGS -fsanitize-address-use-after-scope], [ASAN_FLAGS="$ASAN_FLAGS -fsanitize-address-use-after-scope"]) fi AC_ARG_ENABLE(msan, AS_HELP_STRING([--enable-msan], [enable memory sanitizer (default is NO)]), [ AC_DEFINE([ENABLE_MSAN], [1], [Define to 1 if MSAN is enabled.]) MSAN_FLAGS=-fsanitize=memory ]) if test "x$UBSAN_FLAGS$TSAN_FLAGS$ASAN_FLAGS$MSAN_FLAGS" != "x"; then AC_DEFINE([ENABLE_SANITIZER], [1], [Define to 1 if any sanitizer is enabled.]) SAN_FLAGS="$ASAN_FLAGS $UBSAN_FLAGS $TSAN_FLAGS $MSAN_FLAGS" SAN_CFLAGS="$SAN_FLAGS -fPIC -fPIE -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer" SAN_LDFLAGS= save_CFLAGS=$CFLAGS CFLAGS="${CFLAGS} -Werror=unused-command-line-argument" AX_CHECK_LINK_FLAG([-pie], [SAN_LDFLAGS=-pie]) CFLAGS=$save_CFLAGS case $CC in gcc*) SAN_CFLAGS="$SAN_CFLAGS -fuse-ld=gold" ;; esac CFLAGS="$CFLAGS $SAN_CFLAGS" LDFLAGS="$LDFLAGS $SAN_LDFLAGS" fi AC_ARG_ENABLE(workspace-emulator, AS_HELP_STRING([--enable-workspace-emulator], [emulate workspace allocations (default is NO)]), [], [enable_workspace_emulator=no]) AM_CONDITIONAL([ENABLE_WORKSPACE_EMULATOR], [test "$enable_workspace_emulator" = yes]) AM_COND_IF([ENABLE_WORKSPACE_EMULATOR], [ AC_CHECK_HEADERS([sanitizer/asan_interface.h]) AC_DEFINE([ENABLE_WORKSPACE_EMULATOR], [1], [Define to 1 if the workspace emulator is enabled]) ]) # Use jemalloc on Linux JEMALLOC_LDADD= AC_ARG_WITH([jemalloc], [AS_HELP_STRING([--with-jemalloc], [use jemalloc memory allocator. 
Default is yes on Linux, no elsewhere])], [], [with_jemalloc=check]) case $target in *-*-linux*) if test "x$with_jemalloc" != xno; then AC_CHECK_LIB([jemalloc], [malloc_conf], [JEMALLOC_LDADD="-ljemalloc"], [AC_MSG_WARN([No system jemalloc found, using system malloc])]) fi ;; esac AC_SUBST(JEMALLOC_LDADD) AC_CHECK_FUNCS([setproctitle]) # if the default libexecinfo on alpine causes issues, you can use libunwind AC_ARG_WITH([unwind], [AS_HELP_STRING([--with-unwind], [use libunwind to print stacktraces (use libexecinfo otherwise). Recommended on alpine linux. Defaults to auto.])]) if test "$with_unwind" != no; then case $target in *-*-darwin*) # Always present but .pc is not installed have_unwind=yes ;; *) PKG_CHECK_MODULES([LIBUNWIND], [libunwind], [have_unwind=yes], [have_unwind=no]) ;; esac fi if test "$with_unwind" = yes && test "$have_unwind" != yes; then AC_MSG_ERROR([Could not find libunwind]) fi if test "$have_unwind" = yes; then AC_DEFINE([WITH_UNWIND], [1], [Define to 1 to use libunwind instead of libexecinfo]) else AC_SEARCH_LIBS(backtrace, [execinfo], [], [ AC_MSG_ERROR([Could not find backtrace() support]) ]) fi AM_CONDITIONAL([WITH_UNWIND], [test "$have_unwind" = yes]) case $target in *-*-darwin*) # white lie - we don't actually test it # present but not functional AC_MSG_CHECKING([whether daemon() works]) AC_MSG_RESULT([no]) ac_cv_func_daemon=no ;; *) AC_CHECK_FUNCS([daemon]) ;; esac AM_CONDITIONAL(HAVE_DAEMON, [test "x$ac_cv_func_daemon" != "xno"]) AC_SYS_LARGEFILE save_LIBS="${LIBS}" LIBS="${LIBS} ${RT_LIBS}" AC_CHECK_FUNCS([clock_gettime]) AC_CHECK_FUNCS([gethrtime]) LIBS="${save_LIBS}" if test "x$ac_cv_func_gethrtime" = xyes && \ test "x$ac_cv_func_clock_gettime" = xyes ; then AC_MSG_CHECKING(if clock_gettime is faster than gethrtime) AC_RUN_IFELSE( [AC_LANG_PROGRAM([[ #include #include #include static hrtime_t cl() { struct timespec ts; (void) clock_gettime(CLOCK_MONOTONIC, &ts); return (ts.tv_sec * 1e9 + ts.tv_nsec); } ]],[[ hrtime_t s, c, e, t_hr, t_cl; int i, r, wins; wins = 0; for (r = 0; r < 10; r++) { c = 0; s = gethrtime(); for (i=0; i<100000; i++) c += gethrtime(); e = gethrtime(); t_hr = e - s; fprintf(stderr, "hrtime\t\t%12lu check %lu\n", (unsigned long)t_hr, (unsigned long)c); c = 0; s = gethrtime(); for (i=0; i<100000; i++) c += cl(); e = gethrtime(); t_cl = e - s; fprintf(stderr, "clock_gettime\t%12lu check %lu\n", (unsigned long)t_cl, (unsigned long)c); if (t_cl * 2 < t_hr) wins++; } fprintf(stderr, "clock_gettime wins %d/%d\n", wins, r); if (2 * wins >= r) return (0); return (1); ]])], [AC_MSG_RESULT(yes) ], [AC_MSG_RESULT(no) AC_DEFINE([USE_GETHRTIME], [1], [Define if gethrtime is preferred]) ] ) fi # --enable-kqueue AC_ARG_ENABLE(kqueue, AS_HELP_STRING([--enable-kqueue], [use kqueue if available (default is YES)]), , [enable_kqueue=yes]) if test "$enable_kqueue" = yes; then AC_CHECK_FUNCS([kqueue]) else ac_cv_func_kqueue=no fi # --enable-epoll AC_ARG_ENABLE(epoll, AS_HELP_STRING([--enable-epoll], [use epoll if available (default is YES)]), , [enable_epoll=yes]) if test "$enable_epoll" = yes; then AC_CHECK_FUNCS([epoll_ctl]) else ac_cv_func_epoll_ctl=no fi # --enable-ports AC_ARG_ENABLE(ports, AS_HELP_STRING([--enable-ports], [use ports if available (default is YES)]), , [enable_ports=yes]) if test "$enable_ports" = yes; then AC_CHECK_FUNCS([port_create]) else ac_cv_func_port_create=no fi # --with-persistent-storage AC_ARG_WITH(persistent-storage, AS_HELP_STRING([--with-persistent-storage], [use deprecated persistent storage (default is NO)]), [], 
[with_persistent_storage=no]) if test "$with_persistent_storage" = yes; then AC_DEFINE([WITH_PERSISTENT_STORAGE], [1], [Define to 1 to build the deprecated peristent storage.]) fi AM_CONDITIONAL([WITH_PERSISTENT_STORAGE], [test "$with_persistent_storage" = yes]) AM_MISSING_HAS_RUN AC_CHECK_MEMBER([struct sockaddr.sa_len], [AC_DEFINE([HAVE_STRUCT_SOCKADDR_SA_LEN], [1], [Define if sa_len is present in struct sockaddr])], [], [#include ]) AC_CHECK_DECL([SO_ACCEPTFILTER], AC_DEFINE(HAVE_ACCEPT_FILTERS,1,[Define to 1 if you have accept filters]), [], [ #include #include ]) AC_CHECK_DECL([SO_RCVTIMEO], [], AC_MSG_ERROR([SO_RCVTIMEO is needed to build Varnish.]), [ #include #include ]) AC_CHECK_DECL([SO_SNDTIMEO], [], AC_MSG_ERROR([SO_SNDTIMEO is needed to build Varnish.]), [ #include #include ]) # Check if the OS supports TCP_KEEP(CNT|IDLE|INTVL) socket options save_LIBS="${LIBS}" LIBS="${LIBS} ${NET_LIBS}" AC_CACHE_CHECK([for TCP_KEEP(CNT|IDLE|INTVL) socket options], [ac_cv_have_tcp_keep], [AC_RUN_IFELSE( [AC_LANG_PROGRAM([[ #include #include #include #include #include ]],[[ int s = socket(AF_INET, SOCK_STREAM, 0); int i = 5; if (s < 0 && errno == EPROTONOSUPPORT) s = socket(AF_INET6, SOCK_STREAM, 0); if (setsockopt(s, IPPROTO_TCP, TCP_KEEPCNT, &i, sizeof i)) return (1); if (setsockopt(s, IPPROTO_TCP, TCP_KEEPIDLE, &i, sizeof i)) return (1); if (setsockopt(s, IPPROTO_TCP, TCP_KEEPINTVL, &i, sizeof i)) return (1); return (0); ]])], [ac_cv_have_tcp_keep=yes], [ac_cv_have_tcp_keep=no]) ]) if test "$ac_cv_have_tcp_keep" = yes; then AC_DEFINE([HAVE_TCP_KEEP], [1], [Define if OS supports TCP_KEEP* socket options]) else # Check TCP_KEEPALIVE on macOs which uses seconds as idle time unit like TCP_KEEPIDLE AC_CACHE_CHECK([for TCP_KEEPALIVE socket option], [ac_cv_have_tcp_keepalive], [AC_RUN_IFELSE( [AC_LANG_PROGRAM([[ #include #include #include #include #include ]],[[ int s = socket(AF_INET, SOCK_STREAM, 0); int i = 5; if (s < 0) return (1); if (setsockopt(s, IPPROTO_TCP, TCP_KEEPALIVE, &i, sizeof i)) return (1); return 0; ]])], [ac_cv_have_tcp_keepalive=yes], [ac_cv_have_tcp_keepalive=no]) ]) if test "$ac_cv_have_tcp_keepalive" = yes; then AC_DEFINE([HAVE_TCP_KEEPALIVE], [1], [Define if OS supports TCP_KEEPALIVE socket option]) fi fi LIBS="${save_LIBS}" # Check if the OS supports TCP_FASTOPEN socket option save_LIBS="${LIBS}" LIBS="${LIBS} ${NET_LIBS}" AC_CACHE_CHECK([for TCP_FASTOPEN socket option], [ac_cv_have_tcp_fastopen], [AC_RUN_IFELSE( [AC_LANG_PROGRAM([[ #include #include #include #include #include #ifndef SOL_TCP # define SOL_TCP IPPROTO_TCP #endif ]],[[ int s = socket(AF_INET, SOCK_STREAM, 0); int i = 5; if (s < 0 && errno == EPROTONOSUPPORT) s = socket(AF_INET6, SOCK_STREAM, 0); if (setsockopt(s, SOL_TCP, TCP_FASTOPEN, &i, sizeof i)) return (1); return (0); ]])], [ac_cv_have_tcp_fastopen=yes], [ac_cv_have_tcp_fastopen=no]) ]) if test "$ac_cv_have_tcp_fastopen" = yes; then AC_DEFINE([HAVE_TCP_FASTOPEN], [1], [Define if OS supports TCP_FASTOPEN socket option]) fi LIBS="${save_LIBS}" AC_CHECK_FUNCS([close_range]) # Check for working close_range() if test "$ac_cv_func_close_range" = yes; then AC_CACHE_CHECK([if close_range is working], [ac_cv_have_working_close_range], [AC_RUN_IFELSE( [AC_LANG_PROGRAM([[ #include ]],[[ return (close_range(0, 2, 0)); ]])], [ac_cv_have_working_close_range=yes], [ac_cv_have_working_close_range=no]) ]) fi if test "x$ac_cv_have_working_close_range" = xyes; then AC_DEFINE([HAVE_WORKING_CLOSE_RANGE], [1], [Define if OS has working close_range()]) fi # Run-time 
directory if test "${localstatedir}" = '${prefix}/var' ; then VARNISH_STATE_DIR='/var/run' else VARNISH_STATE_DIR='${localstatedir}/varnish' fi AC_SUBST(VARNISH_STATE_DIR) # Default configuration directory. pkgsysconfdir='${sysconfdir}/varnish' AC_SUBST(pkgsysconfdir) # VMOD variables AC_SUBST(vmoddir, [$\(pkglibdir\)/vmods]) AC_SUBST(VMODTOOL, [$\(top_srcdir\)/lib/libvcc/vmodtool.py]) AC_SUBST(VSCTOOL, [$\(top_srcdir\)/lib/libvsc/vsctool.py]) # Check for linker script support gl_LD_VERSION_SCRIPT ####################################################################### # Now that we're done using the compiler to look for functions and # libraries, set CFLAGS to what we want them to be for our own code # This is a test to see how much havoc Jenkins exposes. # # The reason for -Wno-error=unused-result is a glibc/gcc interaction # idiocy where write is marked as warn_unused_result, causing build # failures. WFLAGS= AX_CHECK_COMPILE_FLAG([-Wall], [CFLAGS="${CFLAGS} -Wall" WFLAGS="${WFLAGS} -Wall"]) if test "$SUNCC" = "yes" ; then SUNCC_CFLAGS=" \ -errwarn=%all,no%E_EMPTY_TRANSLATION_UNIT \ -errtags=yes \ " AX_CHECK_COMPILE_FLAG([${SUNCC_CFLAGS}], [CFLAGS="${CFLAGS} ${SUNCC_CFLAGS}" WFLAGS="${WFLAGS} ${SUNCC_CFLAGS}"]) else AX_CHECK_COMPILE_FLAG([-Werror], [CFLAGS="${CFLAGS} -Werror" WFLAGS="${WFLAGS} -Werror"]) fi case $target in *-*-darwin*) AX_CHECK_COMPILE_FLAG([-Wno-expansion-to-defined], [CFLAGS="${CFLAGS} -Wno-expansion-to-defined" WFLAGS="${WFLAGS} -Wno-expansion-to-defined"]) ;; esac AX_CHECK_COMPILE_FLAG([-Werror=unused-result], [CFLAGS="${CFLAGS} -Wno-error=unused-result" WFLAGS="${WFLAGS} -Wno-error=unused-result"], [AX_CHECK_COMPILE_FLAG([-Wunused-result], [CFLAGS="${CFLAGS} -Wno-unused-result" WFLAGS="${WFLAGS} -Wno-unused-result"])]) # This corresponds to FreeBSD's WARNS level 6 DEVELOPER_CFLAGS=`$PYTHON $srcdir/wflags.py` if test $? 
-ne 0 ; then AC_MSG_ERROR([wflags.py failure]) fi # zlib-specific flags AC_SUBST(VGZ_CFLAGS) # Support for visibility attribute (zlib) AC_CACHE_CHECK([whether we have support for visibility attributes], [ac_cv_have_viz], [AC_COMPILE_IFELSE( [AC_LANG_PROGRAM([[ int __attribute__((visibility ("hidden"))) foo; ]],[])], [ac_cv_have_viz=yes], [ac_cv_have_viz=no]) ]) AS_IF([test $ac_cv_have_viz = yes], [ AC_DEFINE([HAVE_HIDDEN], [1], [Define to 1 if visibility attribute hidden is available.])]) # --enable-stack-protector AC_ARG_ENABLE(stack-protector, AS_HELP_STRING([--enable-stack-protector],[enable stack protector (default is YES)]), [], [enable_stack_protector=yes]) if test "x$enable_stack_protector" != "xno"; then AX_CHECK_COMPILE_FLAG([-fstack-protector], AX_CHECK_LINK_FLAG([-fstack-protector], [DEVELOPER_CFLAGS="${DEVELOPER_CFLAGS} -fstack-protector"], [], []), [], []) fi # --enable-developer-warnings AC_ARG_ENABLE(developer-warnings, AS_HELP_STRING([--enable-developer-warnings],[enable strict warnings (default is NO)]), [], [enable_developer_warnings=no]) if test "x$SUNCC" != "xyes" && test "x$enable_developer_warnings" != "xno"; then # no known way to specifically disabling missing-field-initializers # warnings keeping the rest of Wextra AX_CHECK_COMPILE_FLAG([-Wno-missing-field-initializers], [DEVELOPER_CFLAGS="${DEVELOPER_CFLAGS} -Wno-missing-field-initializers"], [DEVELOPER_CFLAGS="${DEVELOPER_CFLAGS} -Wno-extra"], []) CFLAGS="${CFLAGS} ${DEVELOPER_CFLAGS}" WFLAGS="${WFLAGS} ${DEVELOPER_CFLAGS}" fi # gcc on solaris needs -fstack-protector when calling gcc in linker # mode but libtool does not pass it on, so we need to trick it # specifically case $CFLAGS in *-fstack-protector*) case $target in *-*-solaris*) case $CC in gcc*) AM_LT_LDFLAGS="${AM_LT_LDFLAGS} -Wc,-fstack-protector" ;; esac ;; esac ;; esac AC_SUBST(AM_LT_LDFLAGS) # --enable-coverage AC_ARG_ENABLE(coverage, AS_HELP_STRING([--enable-coverage], [enable coverage (implies debugging symbols, default is NO)]), [], [enable_coverage=no]) # --enable-debugging-symbols AC_ARG_ENABLE(debugging-symbols, AS_HELP_STRING([--enable-debugging-symbols], [enable debugging symbols (default is NO)]), [], [enable_debugging_symbols=no]) if test "$enable_coverage" != no; then AC_DEFINE([ENABLE_COVERAGE], [1], [Define to 1 if code coverage is enabled.]) save_CFLAGS=$CFLAGS CFLAGS= AX_CHECK_COMPILE_FLAG([--coverage], [COV_FLAGS=--coverage], [AX_CHECK_COMPILE_FLAG([-fprofile-arcs -ftest-coverage], [COV_FLAGS="-fprofile-arcs -ftest-coverage"])]) AX_CHECK_COMPILE_FLAG([-fprofile-abs-path], [COV_FLAGS="$COV_FLAGS -fprofile-abs-path"]) AX_CHECK_COMPILE_FLAG([-fPIC], [COV_FLAGS="$COV_FLAGS -fPIC"]) CFLAGS=$COV_FLAGS AC_CHECK_FUNCS([__gcov_flush]) AC_CHECK_FUNCS([__gcov_dump]) AC_CHECK_FUNCS([__llvm_gcov_flush]) CFLAGS="$save_CFLAGS $COV_FLAGS" enable_debugging_symbols=yes fi if test "$enable_debugging_symbols" != no; then if test "x$SUNCC" = "xyes" ; then CFLAGS="${CFLAGS} -O0 -g" else CFLAGS="${CFLAGS} -O0 -g -fno-inline" fi fi # --enable-oss-fuzz AC_ARG_ENABLE(oss-fuzz, AS_HELP_STRING([--enable-oss-fuzz], [enable build tweaks for OSS-Fuzz (default is NO)]), [], [enable_oss_fuzz=no]) AM_CONDITIONAL(ENABLE_OSS_FUZZ, [test "$enable_oss_fuzz" != no]) # Command line for compiling VCL code. I wish there were a simple way # to figure this out dynamically without introducing a run-time # dependency on libtool. 
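# VCC_CC can also be set explicitly at configure time; a hypothetical
# invocation (adjust to the local toolchain) could look like:
#
#   ./configure VCC_CC='exec cc %w -fpic -shared -o %o %s'
#
# where, going by the default templates constructed below, %s and %o stand
# for the generated C source and the shared object to produce, and %w for
# the warning flags (VCC_WARN).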
AC_ARG_VAR([VCC_CC], [C compiler command line for VCL code]) if test "$ac_cv_env_VCC_CC_set" = "set"; then VCC_CC="$ac_cv_env_VCC_CC_value" else case $target in *-*-darwin*) VCC_CC="cc $OCFLAGS" ;; *) VCC_CC="$PTHREAD_CC $OCFLAGS" ;; esac save_CFLAGS="$CFLAGS" save_CC="$CC" CFLAGS= CC="$VCC_CC" AX_CHECK_COMPILE_FLAG( [-fno-var-tracking-assignments], [VCC_CC="$VCC_CC -fno-var-tracking-assignments"]) CFLAGS="$save_CFLAGS" CC="$save_CC" case $target in *-*-solaris*) case $PTHREAD_CC in *gcc*) VCC_CC="exec $VCC_CC %w $PTHREAD_CFLAGS -fpic -shared -o %o %s" ;; *cc) VCC_CC="exec $VCC_CC %w -errwarn=%%all,no%%E_STATEMENT_NOT_REACHED $PTHREAD_CFLAGS -Kpic -G -o %o %s" ;; esac ;; *-*-darwin*) VCC_CC="exec $VCC_CC %w -dynamiclib -Wl,-undefined,dynamic_lookup -o %o %s" ;; *) VCC_CC="exec $VCC_CC %w $PTHREAD_CFLAGS $SAN_CFLAGS -fpic -shared -Wl,-x -o %o %s" ;; esac fi if test "$ac_cv_env_VCC_WARN_set" = set; then VCC_WARN=$ac_cv_env_VCC_WARN_value else VCC_WARN=$WFLAGS fi OCFLAGS="$OCFLAGS $WFLAGS" AC_DEFINE_UNQUOTED([VCC_CC],"$VCC_CC",[C compiler command line for VCL code]) AC_DEFINE_UNQUOTED([VCC_WARN],"$VCC_WARN",[C compiler warnings for VCL code]) # Stupid automake needs this VTC_TESTS="$(cd $srcdir/bin/varnishtest && echo tests/*.vtc)" AC_SUBST(VTC_TESTS) AM_SUBST_NOTMAKE(VTC_TESTS) VMOD_TESTS="$(cd $srcdir/vmod && echo tests/*.vtc)" AC_SUBST(VMOD_TESTS) AM_SUBST_NOTMAKE(VMOD_TESTS) AC_ARG_WITH([contrib], [AS_HELP_STRING([--with-contrib], [Build Varnish with external contributions.])]) AM_CONDITIONAL([WITH_CONTRIB], [test "$with_contrib" = yes]) CONTRIB_TESTS="$(cd $srcdir/contrib && echo tests/*.vtc)" AC_SUBST(CONTRIB_TESTS) AM_SUBST_NOTMAKE(CONTRIB_TESTS) AM_COND_IF([WITH_CONTRIB], [ AC_DEFINE([WITH_CONTRIB], [1], [Define to 1 when Varnish is built with contributions.]) ]) # Make sure this include dir exists AC_CONFIG_COMMANDS([mkdir], [$MKDIR_P doc/sphinx/include]) # Generate output AC_CONFIG_FILES([ Makefile bin/Makefile bin/varnishadm/Makefile bin/varnishd/Makefile bin/varnishlog/Makefile bin/varnishstat/Makefile bin/varnishtop/Makefile bin/varnishhist/Makefile bin/varnishtest/Makefile bin/varnishncsa/Makefile contrib/Makefile doc/Makefile doc/graphviz/Makefile doc/sphinx/Makefile doc/sphinx/conf.py etc/Makefile include/Makefile lib/Makefile lib/libvsc/Makefile lib/libvarnish/Makefile lib/libvarnishapi/Makefile lib/libvcc/Makefile lib/libvgz/Makefile man/Makefile varnishapi.pc varnishapi-uninstalled.pc vmod/Makefile ]) AC_OUTPUT varnish-7.5.0/contrib/000077500000000000000000000000001457605730600146525ustar00rootroot00000000000000varnish-7.5.0/contrib/Makefile.am000066400000000000000000000003041457605730600167030ustar00rootroot00000000000000# include $(top_srcdir)/vtc.am if !WITH_CONTRIB dist_noinst_SCRIPTS = \ varnishstatdiff else dist_bin_SCRIPTS = \ varnishstatdiff TESTS = @CONTRIB_TESTS@ endif EXTRA_DIST = @CONTRIB_TESTS@ varnish-7.5.0/contrib/tests/000077500000000000000000000000001457605730600160145ustar00rootroot00000000000000varnish-7.5.0/contrib/tests/statdiff_b00000.vtc000066400000000000000000000021251457605730600212170ustar00rootroot00000000000000varnishtest "varnishstatdiff coverage" feature cmd "command -v diff" server s1 { rxreq txresp } -start varnish v1 -vcl+backend "" -start shell { varnishstat -n ${v1_name} -1 \ -I MAIN.n_object -I MAIN.cache_* -I MAIN.client_req | tee stat1.txt } client c1 { txreq rxresp } -start varnish v1 -vsl_catchup shell { varnishstat -n ${v1_name} -1 \ -I MAIN.n_object -I MAIN.cache_* -I MAIN.esi_req | tee stat2.txt } shell -expect Usage: 
{varnishstatdiff -h} shell -expect "Error: not enough arguments" -err {varnishstatdiff} shell -expect "Error: not enough arguments" -err {varnishstatdiff a} shell -expect "Error: too many arguments" -err {varnishstatdiff a b c} shell { varnishstatdiff stat1.txt stat2.txt | tee diff.txt } shell { sed 's/@/ /' >expected.txt <<-EOF --- stat1.txt +++ stat2.txt @MAIN.cache_miss -0 -0.00 Cache misses @ +1 +0.00 -MAIN.client_req 0 0.00 Good client requests received +MAIN.esi_req 0 0.00 ESI subrequests @MAIN.n_object -0 . object structs made @ +1 . EOF diff -u expected.txt diff.txt } varnish-7.5.0/contrib/varnishstatdiff000077500000000000000000000111261457605730600200000ustar00rootroot00000000000000#!/bin/sh # # Copyright (c) 2022 Varnish Software AS # All rights reserved. # # Author: Dridi Boukelmoune # # SPDX-License-Identifier: BSD-2-Clause # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # 1. Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE # ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS # OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF # SUCH DAMAGE. set -e set -u readonly SCRIPT=$0 readonly TMP=$(mktemp -d) trap 'rm -rf $TMP' EXIT usage() { test $# -eq 1 && printf 'Error: %s.\n\n' "$1" sed 's:@: :' <<-EOF Usage: $SCRIPT $SCRIPT -h Show the differences between two sets of varnish metrics extracted with 'varnishstat -1'. Available options: -h : show this help and exit Considering the following metrics in : FOO.counter 123 12 Only in file 1 BAR.counter 456 45 Counter present in both files BAR.gauge 999 . Gauge present in both files And the following metrics in : BAR.counter 789 79 Counter present in both files BAR.gauge 555 . Gauge present in both files BAZ.gauge 0 . Only in file 2 The output is sorted by metric name and looks like this: --- +++ @BAR.counter -456 -45 Counter present in both files @ +789 +79 @BAR.gauge -999 . Gauge present in both files @ +555 . +BAZ.gauge 0 . Only in file 2 -FOO.counter 123 12 Only in file 1 The output looks like a unified diff except that when metrics are present in both files, the diff is rendered as such only in the metrics columns. EOF exit $# } join_prepare() { # NB: the metrics need to be sorted to later be joined, and since # the metrics descriptions contain spaces, using a delimiter other # than space solves the problem. Hopefully @ never shows up in the # varnishstat -1 output. 
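	# Illustratively (made-up numbers), a "varnishstat -1" line such as
	#
	#   MAIN.cache_hit             10         0.17 Cache hits
	#
	# becomes the @-separated record
	#
	#   MAIN.cache_hit@10@0.17@Cache hits
	#
	# i.e. name, value, rate and description, the fields consumed by
	# join_render() below.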
sort -k 1b,1 "$1" | sed 's: *:@: ; s::@: ; s::@:' } join_render() { # The resulting columns are: # 1: metric name # 2: value in file 1 # 3: rate in file 1 # 4: value in file 2 # 5: rate in file 2 # 6: description in file 1 # 7: description in file 2 join -a1 -a2 -t@ -o '0 1.2 1.3 2.2 2.3 1.4 2.4' -- "$1" "$2" } diff_preamble() { printf "%s %s\n+++ %s\n" --- "$1" "$2" } diff_measure() { awk -F@ ' BEGIN { max[1] = 0 max[2] = 0 max[3] = 0 max[4] = 0 max[5] = 0 } $2 != $4 || $3 != $5 { for (i in max) { if (max[i] < length($i)) max[i] = length($i) } } END { if (max[2] < max[4]) max[2] = max[4] if (max[3] < max[5]) max[3] = max[5] printf "%d %d %d\n", max[1] + 2, max[2] + 2, max[3] + 2 } ' } diff_render() { read l1 l2 l3 awk -F@ -v l1="$l1" -v l2="$l2" -v l3="$l3" ' $2 != "" && $4 != "" && ($2 != $4 || $3 != $5) { # present in both sgn = "-" if ($3 == ".") sgn = " " printf " %-*s-%-*s%s%-*s%s\n", l1, $1, l2, $2, sgn, l3, $3, $6 sgn = "+" if ($5 == ".") sgn = " " printf " %-*s+%-*s%s%s\n", l1, "", l2, $4, sgn, $5 } $2 != "" && $4 == "" { # only in file 1 printf "-%-*s %-*s %-*s%s\n", l1, $1, l2, $2, l3, $3, $6 } $2 == "" && $4 != "" { # only in file 2 printf "+%-*s %-*s %-*s%s\n", l1, $1, l2, $4, l3, $5, $7 } ' <"$1" } while getopts h OPT do case $OPT in h) usage ;; *) usage "wrong usage" >&2 ;; esac done shift $((OPTIND - 1)) test $# -lt 2 && usage "not enough arguments" >&2 test $# -gt 2 && usage "too many arguments" >&2 export LC_ALL=C.utf-8 join_prepare "$1" >"$TMP"/1 join_prepare "$2" >"$TMP"/2 join_render "$TMP"/1 "$TMP"/2 >"$TMP"/join diff_preamble "$1" "$2" diff_measure <"$TMP"/join | diff_render "$TMP"/join varnish-7.5.0/doc/000077500000000000000000000000001457605730600137575ustar00rootroot00000000000000varnish-7.5.0/doc/Makefile.am000066400000000000000000000003401457605730600160100ustar00rootroot00000000000000# # RST2ANY_FLAGS = --halt=2 EXTRA_DIST = changes.rst changes.html changes.html: changes.rst ${RST2HTML} ${RST2ANY_FLAGS} $? $@ # build graphviz before sphinx, so sphinx docs can use svg output SUBDIRS = graphviz sphinx varnish-7.5.0/doc/README.WRITING_RST.rst000066400000000000000000000056571457605730600173350ustar00rootroot00000000000000.. Copyright 2015,2016,2019,2024 UPLEX - Nils Goroll Systemoptimierung SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license THINGS TO CONSIDER WHEN WRITING VARNISH RST DOCUMENTATION ========================================================= Inline Markup ------------- Please try to be consistent with inline markup and fix places which do not follow the style: * VCL language and other literals as ``literal`` * placeholders and emphasis as *emphasis* * no `interpreted text` except where it actually *is* that .. _Reference: http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#character-level-inline-markup * exception: Links to manpages outside varnish as `man(section)` * use code blocks for:: Examples and other code References are tricky --------------------- To build html documentation, we want to create cross-document cross-references using:: :ref:`reference name` Trouble is that ``rst2man`` and ``rst2pdf`` working on individual files cannot parse `ref` roles to anything outside the current rst file, so we need to differenciate link targets depending on the kind of documentation: * set link targets on the top of documents ending up in man-pages following the manpage naming scheme, e.g.:: .. _varnishd(1): * set link targets for imporant paramgraphs following the scheme ref-`doc`-`section`, for instance:: .. 
_ref-varnishd-opt_T: These can be referenced from other documents making up the html documentation, but not from stand-alone documents (like man-pages). * in all documents which are used to create man-pages, add the following definition at the top:: .. role:: ref(emphasis) This will allow the use of `ref` in a compatible manner, IF references follow the man-page naming scheme * to be compatible both with ``sphinx`` and ``rst2man``, use `implicit link targets`_ in stand-alone documents, like this one creating `References are tricky`_:: `References are tricky`_ .. _implicit link targets: http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#implicit-hyperlink-targets Document Structure ------------------ While RST supports a great deal of flexibility for adornments of titles and section headers, we should adhere to a consistent style to avoid breaking the document structure unintentionally. Within the Varnish-Cache project, we should use these characters: * Over and underline ``=`` for document titles * Over and underline ``-`` for document subtitles * Underline ``=`` for sections * Underline ``-`` for subsections * Underline ``~`` for subsubsections This is in line with the example in https://docutils.sourceforge.io/docs/user/rst/quickstart.html#sections HISTORY ======= This README was initially started by Nils Goroll. COPYRIGHT ========= This document is licensed under the same licence as Varnish itself. See LICENCE for details. * Copyright 2015,2016,2019,2024 UPLEX - Nils Goroll Systemoptimierung varnish-7.5.0/doc/changes.rst000066400000000000000000007425421457605730600161370ustar00rootroot00000000000000.. Copyright (c) 2011-2023 Varnish Software AS Copyright 2016-2023 UPLEX - Nils Goroll Systemoptimierung SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) =================== About this document =================== .. keep this section at the top! This document contains notes from the Varnish developers about ongoing development and past versions: * Developers will note here changes which they consider particularly relevant or otherwise noteworthy * This document is not necessarily up-to-date with the code * It serves as a basis for release managers and others involved in release documentation * It is not rendered as part of the official documentation and thus only available in ReStructuredText (rst) format in the source repository and -distribution. Official information about changes in releases and advice on the upgrade process can be found in the ``doc/sphinx/whats-new/`` directory, also available in HTML format at http://varnish-cache.org/docs/trunk/whats-new/index.html and via individual releases. These documents are updated as part of the release process. ================================ Varnish Cache 7.5.0 (2024-03-18) ================================ .. PLEASE keep this roughly in commit order as shown by git-log / tig (new to old) * Add ``h2_window_timeout`` parameter to mitigate CVE-2023-43622 (VSV00014_). * The parameters ``idle_send_timeout`` and ``timeout_idle`` are now limited to a maximum of 1 hour. * The VCL variables ``bereq.connect_timeout``, ``bereq.first_byte_timeout``, ``bereq.between_bytes_timeout``, ``bereq.task_deadline``, ``sess.timeout_idle``, ``sess.timeout_linger``, ``sess.idle_send_timeout`` and ``sess.send_timeout`` can now be ``unset`` to use their default values from parameters. * Timeout and deadline parameters can now be set to a new special value ``never`` to apply an infinitely long timeout. 
Parameters which used to be of type ``timeout`` but do not accept ``never`` have been moved to the new type ``duration``. VCL variables cannot be set to ``never``. * The implementation of the feature flag ``esi_include_onerror`` changed in Varnish-Cache 7.3.0 has been reverted to more closely match the behavior before that release: By default, fragments are included again, even errors. When ``esi_include_onerror`` is enabled and errors are encountered while processing an ESI fragment, processing only continues if the ``onerror`` attribute of the ```` tag is present. Any response status other than ``200`` or ``204`` counts as an error as well as any fetch error. Streaming responses may continue to be partially delivered. Error behavior has been fixed to be consistent also for zero length fragments. * The new VSC ``n_superseded`` gets incremented every time an object is superseded by a new one, for example when the grace and/or keep timers kept it in cache for longer than the TTL and a fresh copy is fetched. Cache evictions of superseded objects are logged as ``ExpKill`` messages starting with ``VBF_Superseded``. .. _Varnish-Modules #222: https://github.com/varnish/varnish-modules/issues/222 * The implementation of ``PRIV_TASK`` and ``PRIV_TOP`` VMOD function/method arguments has been fixed to also work with ``std.rollback()`` (`Varnish-Modules #222`_) * Transports are now responsible for calling ``VDP_Close()`` in all cases. * The format of ``BackendClose`` VSL records has been changed to use the short reason name for consistency with ``SessClose``. * During ``varnishd`` shutdown, pooled backend connections are now closed bi-directionally. * Mode bits of files opened via the UNIX jail as ``JAIL_FIXFD_FILE`` are now correctly set as ``0600``. * The ``busy_stats_rate`` feature now also works for HTTP/2. * The ``BUILD_VMOD_$NAME`` m4 macro for VMOD Makefiles has been fixed to properly support custom ``CFLAGS``. * Storage engines are now responsible for deciding which ``fetch_chunksize`` to use. When Varnish-Cache does not know the expected object size, it calls the ``objgetspace`` stevedore function with a zero ``sz`` argument. * The ``Timestamp`` SLT with ``Process`` prefix is not emitted any more when processing continues as for restarts, or when ``vcl_deliver`` transitions to ``vcl_synth``. * The ``FetchError`` SLT with ``HTC`` prefix now contains a verbose explanation. * Varnish Test Cases (VTCs) now support an ``include`` statement. * ``varnishncsa`` now supports the ``%{Varnish:default_format}x`` format to use the default format with additions. * A deadlock in ``VRT_AddDirector()`` is now avoided with dynamic backends when the VCL goes cold. * A new variable ``bereq.task_deadline``, available in ``sub vcl_pipe {}`` only for now, allows to limit the total duration of pipe transactions. Its default comes from the ``pipe_task_deadline`` parameter, which itself defaults to ``never``. * The VSC counters ``n_expired``, ``n_purges`` and ``n_obj_purged`` have been fixed for purged objects. * The ``ExpKill`` SLT prefix ``EXP_expire`` has been renamed to ``EXP_Inspect``. * New VSL records of the ``ExpKill`` SLT with ``EXP_Removed`` are now emitted to uniformly log all "object removed from cache" events. * VSL records of the ``ExpKill`` SLT with ``EXP_Expired`` prefix now contain the number of hits on the removed object. * A bug has been fixed in ``varnishstat`` where the description of the last VSC was not shown. * VCL COLD events have been fixed for directors vs. 
VMODs: VDI COLD now comes before VMOD COLD. * The ``file`` storage engine now fails properly if the file size is too small. * The ``.happy`` stevedore type method now returns ``true`` if not implemented instead of panicking ``varnishd`` (`4036`_) * Use of ``objiterate_f`` on request bodies has been fixed to correctly post ``OBJ_ITER_END``. * Use of ``STV_NewObject()`` has been fixed to correctly request zero bytes for attributes where only a body is to be stored. * ``(struct req).filter_list`` has been renamed to ``vdp_filter_list``. * 304 object copying has been optimized to make optimal use of storage engines' allocations. * Use of the ``trimstore`` storage engine function has been fixed for 304 responses. * A missing ``:scheme`` for HTTP/2 requests is now properly handled. * The ``fold`` flag has been added to Access Control Lists (ACLs) in VCL. When it is activated with ``acl ... +fold {}``, ACL entries get optimized in that subnets contained in other entries are skipped (e.g. if 1.2.3.0/24 is part of the ACL, an entry for 1.2.3.128/25 will not be added) and adjacent entries get folded (e.g. if both 1.2.3.0/25 and 1.2.3.128/25 are added, they will be folded to 1.2.3.0/24) (3563_). Logging under the ``VCL_acl`` tag can change with this flag. Negated ACL entries are never folded. * Fixed handling of failing sub-requests: A VCL failure on the client side or the ``vcl_req_reset`` feature could trigger a panic, because it is not allowed to generate a minimal response. For sub-requests, we now masquerade the fail transition as a deliver and trade the illegal minimal response for the synthetic response (4022_). * The ``param.reset [-j]`` CLI command has been added to reset flags to their default. Consequently, the ``param.set ... default`` special value is now deprecated. * The ``param.set`` CLI command now supports the ``none`` and ``all`` values to achieve setting "absolute" values atomically as in ``param.set foo none,+bar,+baz`` or ``param.set foo all,-bar,-baz``. * A glitch in CLI command parsing has been fixed where individually quoted arguments like ``"help"`` were rejected. * The ``vcl_req_reset`` feature (controllable through the ``feature`` parameter, see `varnishd(1)`) has been added and enabled by default to terminate client side VCL processing early when the client is gone. *req_reset* events trigger a VCL failure and are reported to `vsl(7)` as ``Timestamp: Reset`` and accounted to ``main.req_reset`` in `vsc` as visible through ``varnishstat(1)``. In particular, this feature is used to reduce resource consumption of HTTP/2 "rapid reset" attacks (see below). Note that *req_reset* events may lead to client tasks for which no VCL is called ever. Presumably, this is thus the first time that valid `vcl(7)` client transactions may not contain any ``VCL_call`` records. * Added mitigation options and visibility for HTTP/2 "rapid reset" attacks (CVE-2023-44487_, 3996_, 3997_, 3998_, 3999_). Global rate limit controls have been added as parameters, which can be overridden per HTTP/2 session from VCL using the new vmod ``h2``: * The ``h2_rapid_reset`` parameter and ``h2.rapid_reset()`` function define a threshold duration for an ``RST_STREAM`` to be classified as "rapid": If an ``RST_STREAM`` frame is parsed sooner than this duration after a ``HEADERS`` frame, it is accounted against the rate limit described below. The default is one second. 
* The ``h2_rapid_reset_limit`` parameter and ``h2.rapid_reset_limit()`` function define how many "rapid" resets may be received during the time span defined by the ``h2_rapid_reset_period`` parameter / ``h2.rapid_reset_period()`` function before the HTTP/2 connection is forcibly closed with a ``GOAWAY`` and all ongoing VCL client tasks of the connection are aborted. The defaults are 100 and 60 seconds, corresponding to an allowance of 100 "rapid" resets per minute. * The ``h2.rapid_reset_budget()`` function can be used to query the number of currently allowed "rapid" resets. * Sessions closed due to rapid reset rate limiting are reported as ``SessClose RAPID_RESET`` in `vsl(7)` and accounted to ``main.sc_rapid_reset`` in `vsc` as visible through ``varnishstat(1)``. * The ``cli_limit`` parameter default has been increased from 48KB to 64KB. * ``VSUB_closefrom()`` now falls back to the base implementation not only if ``close_range()`` was determined to be unusable at compile time, but also at run time. That is to say, even if ``close_range()`` is compiled in, the fallback to the naive implementation remains. * Fixed ``varnishd -I`` error reporting when a final newline or carriage return is missing in the CLI command file (3995_). * Improved and updated the build system with respect to autoconf and automake. * Improved ``VSB_tofile()`` error reporting, added support for partial writes and support of VSBs larger than INT_MAX. * Improved HPACK header validation. * Fixed scopes of protected headers (3984_). .. _CVE-2023-44487: https://nvd.nist.gov/vuln/detail/CVE-2023-44487 .. _4036: https://github.com/varnishcache/varnish-cache/issues/4036 .. _3984: https://github.com/varnishcache/varnish-cache/issues/3984 .. _3995: https://github.com/varnishcache/varnish-cache/issues/3995 .. _3996: https://github.com/varnishcache/varnish-cache/issues/3996 .. _4022: https://github.com/varnishcache/varnish-cache/issues/4022 .. _3563: https://github.com/varnishcache/varnish-cache/pull/3563 .. _3997: https://github.com/varnishcache/varnish-cache/pull/3997 .. _3998: https://github.com/varnishcache/varnish-cache/pull/3998 .. _3999: https://github.com/varnishcache/varnish-cache/pull/3999 .. _VSV00014: https://varnish-cache.org/security/VSV00014.html ================================ Varnish Cache 7.4.0 (2023-09-15) ================================ * The ``VSB_quote_pfx()`` (and, consequently, ``VSB_quote()``) function no longer produces ``\v`` for a vertical tab. This improves compatibility with JSON. * The bundled *zlib* has been updated to match *zlib 1.3*. * The ``VSHA256_*`` functions have been added to libvarnishapi (3946_). * Tabulation of the ``vcl.list`` CLI output has been modified slightly. * VCL now supports "protected headers", which can neither be set nor unset. * The ``Content-Length`` and ``Transfer-Encoding`` headers are now protected. For the common use case of ``unset xxx.http.Content-Length`` to dismiss a body, ``unset xxx.body`` should be used. * Error handling of numeric literals in exponent notation has been improved in the VCL compiler (3971_). * Finalization of the storage private state of busy objects has been fixed. This bug could trigger a panic when ``vcl_synth {}`` was used to replace the object body and storage was changed from one of the built-in storage engines to a storage engine from an extension (3953_). * HTTP/2 header field validation is now more strict with respect to allowed characters (3952_). 
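A minimal VCL sketch of the protected-headers change above: since ``Content-Length`` and ``Transfer-Encoding`` can no longer be unset, the body itself is dismissed instead. The trigger header is purely illustrative, and whether ``unset beresp.body`` is permitted in this exact subroutine should be checked against the VCL reference::

    sub vcl_backend_response {
        # Content-Length and Transfer-Encoding are protected now;
        # drop the body itself instead of unsetting the header.
        if (beresp.http.X-Drop-Body) {
            unset beresp.body;
        }
    }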
* A bug has been fixed in the filter handling code which could trigger a panic when ``resp.filters`` was used from ``vcl_synth {}`` (3968_). * The utility macros ``ALLOC_OBJ_EXTRA()`` and ``ALLOC_FLEX_OBJ()`` have been added to ``miniobj.h`` to simplify allocation of objects larger than a struct and such with a flexible array. * The ``varnishapi`` version has been increased to 3.1 and the functions ``VENC_Encode_Base64()`` and ``VENC_Decode_Base64()`` are now exposed. * Two bugs in the ban expression parser have been fixed where one of them could lead to a panic if a ban expression with an empty header name was issued (3962_). * The ``v_cold`` macro has been added to add ``__attribute__((cold))`` on compilers supporting it. It is used for ``VRT_fail()`` to mark failure code paths as cold. * ``varnishtest`` now generates ``User-Agent`` request and ``Server`` response headers with the respective client and server name by default. The ``txreq -nouseragent`` and ``txresp -noserver`` options disable addition of these headers. * Error handling of invalid header names has been improved in the VCL Compiler (3960_). * A race condition has been fixed in the backend probe code which could trigger a panic with dynamic backends (dyn100_). * A bug has been fixed in the ESI code which would prevent use of internal status codes >1000 as their modulus 1000 value (3958_). * The ``varnishd_args_prepend`` and ``varnishd_args_append`` macros have been added to ``varnishtest`` to add arguments to ``varnishd`` invocations before and after the defaults. * A bug has been fixed where ``varnishd`` would hang indefinitely when the worker process would not come up within ``cli_timeout`` (3940_). * The ``startup_timeout`` parameter now specifically replaces ``cli_timeout`` for the initial startup only (3940_). * On Linux, ``close_range()`` is now used if available (3905_). * Error reporting has been improved if the working directory (``varnishd -n`` argument) resides on a file system mounted ``noexec`` (3943_). * The number of backtrace levels in panic reports has been increased from 10 to 20. * The ``PTOK()`` macro has been added to ``vas.h`` to simplify error checking of ``pthread_*`` POSIX functions. * In ``varnishtest``, the basename of the test directory is now available as the ``vtcid`` macro to serve as a unique string across concurrently running tests. * In ``struct vsmwseg`` and ``struct vsm_fantom``, the ``class`` member has been renamed to ``category``. * ESI ``onerror=abort`` handling has been fixed when ``max_esi_depth`` is reached (3938_). * A spurious *Could not delete 'vcl\_...'* error message has been removed (3925_). * A bug has been fixed where ``unset bereq.body`` had no effect when used with a cached body (3914_) * ``.vcc`` files of VMODs are now installed to ``/usr/share/varnish/vcc`` (or equivalent) to enable re-use by other tools like code editors. * The :ref:`vcl-step(7)` manual page has been added to document the VCL state machines. * ``HSH_Cancel()`` has been moved to ``VDP_Close()`` to enable transports to keep references to objects. * VCL tracing now needs to be explicitly activated by setting the ``req.trace`` or ``bereq.trace`` VCL variables, which are initialized from the ``feature +trace`` flag. Only if the trace variables are set will ``VCL_trace`` log records be generated. Consequently, ``VCL_trace`` has been removed from the default ``vsl_mask``, so any trace records will be emitted by default. ``vsl_mask`` can still be used to filter ``VCL_trace`` records. 
To trace ``vcl_init {}`` and ``vcl_fini {}``, set the ``feature +trace`` flag while the vcl is loaded/discarded.

* Varnish Delivery Processors (VDPs) are now also properly closed for error conditions, avoiding potential minor memory leaks.
* A regression introduced with Varnish Cache 7.3.0 was fixed: On HTTP/2 connections, URLs starting with ``//`` no longer trigger a protocol error (3911_).
* Call sites of VMOD functions and methods can now be restricted to built-in subroutines using the ``$Restrict`` stanza in the VCC file.
* The counter ``MAIN.http1_iovs_flush`` has been added to track the number of premature ``writev()`` calls due to an insufficient number of IO vectors. This number is configured through the ``http1_iovs`` parameter for client connections and implicitly defined by the amount of free workspace for backend connections.
* Object creation failures by the selected storage engine are now logged under the ``Error`` tag as ``Failed to create object object from %s %s``.
* The limit on the size of ``varnishtest`` macros has been raised to 2KB.
* The newly introduced abstract socket support was incompatible with other implementations; this has been fixed (3908_).

.. _3905: https://github.com/varnishcache/varnish-cache/issues/3905
.. _3908: https://github.com/varnishcache/varnish-cache/pull/3908
.. _3911: https://github.com/varnishcache/varnish-cache/issues/3911
.. _3914: https://github.com/varnishcache/varnish-cache/pull/3914
.. _3925: https://github.com/varnishcache/varnish-cache/issues/3925
.. _3938: https://github.com/varnishcache/varnish-cache/issues/3938
.. _3940: https://github.com/varnishcache/varnish-cache/issues/3940
.. _3943: https://github.com/varnishcache/varnish-cache/issues/3943
.. _3946: https://github.com/varnishcache/varnish-cache/issues/3946
.. _3952: https://github.com/varnishcache/varnish-cache/issues/3952
.. _3953: https://github.com/varnishcache/varnish-cache/issues/3953
.. _3958: https://github.com/varnishcache/varnish-cache/issues/3958
.. _3960: https://github.com/varnishcache/varnish-cache/issues/3960
.. _3962: https://github.com/varnishcache/varnish-cache/issues/3962
.. _3968: https://github.com/varnishcache/varnish-cache/issues/3968
.. _3971: https://github.com/varnishcache/varnish-cache/issues/3971
.. _dyn100: https://github.com/nigoroll/libvmod-dynamic/issues/100

================================
Varnish Cache 7.3.0 (2023-03-15)
================================

* The macro ``WS_TASK_ALLOC_OBJ`` has been added to handle the common case of allocating mini objects on a workspace.
* ``xid`` variables in VCL are now of type ``INT``.
* The new ``beresp.transit_buffer`` variable has been added to VCL, which defaults to the newly added parameter ``transit_buffer``. This variable limits the number of bytes varnish pre-fetches for uncacheable streaming fetches (example below).
* Varnish now supports abstract unix domain sockets. If the operating system supports them, abstract sockets can be specified using the commonplace ``@`` notation for accept sockets, e.g.::

    varnishd -a @kandinsky

  and backend paths, e.g.::

    backend miro {
        .path = "@miro";
    }

* For backend requests, the timestamp from the ``Last-Modified`` response header is now only used to create an ``If-Modified-Since`` conditional ``GET`` request if it is at least one second older than the timestamp from the ``Date`` header.
* Various interfaces of varnish's own socket address abstraction, VSA, have been changed to return or take pointers to ``const``. ``VSA_free()`` has been added.
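As a sketch of the ``beresp.transit_buffer`` variable introduced above; the condition and the value are illustrative, not recommendations::

    sub vcl_backend_response {
        # Cap how far Varnish reads ahead of the client for
        # uncacheable streaming fetches.
        if (beresp.uncacheable) {
            set beresp.transit_buffer = 1MB;
        }
    }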
* Processing of Range requests has been improved: Previously, varnish would send a 200 response with the full body when it could not reliably determine (yet) the object size during streaming.

  .. _RFC9110: https://httpwg.org/specs/rfc9110.html#field.content-range

  Now a 206 response is sent even in this case (for HTTP/1.1 as chunked encoding) with ``*`` in place of the ``complete-length`` as per `RFC9110`_.

* The ``debug.xid`` CLI command now sets the next XID to be used, rather than "one less than the next XID to be used"
* VXIDs are 64 bit now and the binary format of SHM and raw saved VSL files has changed as a consequence. The actual valid range for VXIDs is [1…999999999999999], so it fits in a VRT_INTEGER. At one million cache-missing single request sessions per second VXIDs will roll over in a little over ten years::

    (1e15-1) / (3 * 1e6 * 86400 * 365) = 10.57

  That should be enough for everybody™.

  You can test if your downstream log-chewing pipeline handles the larger VXIDs correctly using the CLI command::

    ``debug.xid 20000000000``

* Consequently, VSL clients (log processing tools) are now incompatible with logs and in-memory data written by previous versions, and vice versa.
* Do not ESI:include failed objects unless instructed to. Previously, any ESI:include object would be included, no matter what its status was: 200, 503, it didn't matter. From now on, by default, only objects with 200 and 204 status will be included and any other status code will fail the parent ESI request. If objects with other status should be delivered, they should have their status changed to 200 in VCL, for instance in ``sub vcl_backend_error{}``, ``vcl_synth{}`` or ``vcl_deliver{}``. If ``param.set feature +esi_include_onerror`` is used, and the ``<esi:include>`` tag has an ``onerror="continue"`` attribute, any and all ESI:include objects will be delivered, no matter what their status might be, and not even a partial delivery of them will fail the parent ESI request. To be used with great caution.
* Backend implementations are in charge of logging their headers.
* VCL backend ``probe``\ s gained an ``.expect_close`` boolean attribute. By setting it to ``false``, backends which fail to honor ``Connection: close`` can be probed. Notice that the probe ``.timeout`` needs to be reached for a probe with ``.expect_close = false`` to return.
* Support for backend connections through a proxy with a PROXY2 preamble has been added:

  * VCL ``backend``\ s gained attributes ``.via`` and ``.authority``
  * The ``VRT_new_backend_clustered()`` and ``VRT_new_backend()`` signatures have been changed

* Unused log tags (SLTs) have been removed.
* Directors which take and hold references to other directors via ``VRT_Assign_Backend()`` (typically any director which has other directors as backends) are now expected to implement the new ``.release`` callback of type ``void vdi_release_f(VCL_BACKEND)``. This function is called by ``VRT_DelDirector()``. The implementation is expected to drop any backend references which the director holds (again using ``VRT_Assign_Backend()`` with ``NULL`` as the second argument). Failure to implement this callback can result in deadlocks, in particular during VCL discard.
* Handling of the HTTP/2 :path pseudo header has been improved.

================================
Varnish Cache 7.2.0 (2022-09-15)
================================

* Functions ``VRT_AddVDP()``, ``VRT_AddVFP()``, ``VRT_RemoveVDP()`` and ``VRT_RemoveVFP()`` are deprecated.
* Cookie headers generated by vmod_cookie no longer have a spurious trailing semi-colon (``';'``) at the end of the string. This could break VCL relying on the previous incorrect behavior. * The ``SessClose`` and ``BackendClose`` reason ``rx_body``, which previously output ``Failure receiving req.body``, has been rewritten to ``Failure receiving body``. * Prototypical Varnish Extensions (VEXT). Similar to VMODs, a VEXT is loaded by the cache process. Unlike VMODs that have the combined lifetime of all the VCLs that reference them, a VEXT has the lifetime of the cache process itself. There are no built-in extensions so far. * The VCC (compilation) process no longer loads VMODs with ``dlopen(3)`` to collect their metadata. * Stevedore initialization via the ``.init()`` callback has been moved to the worker process. * The parameter ``tcp_keepalive_time`` is supported on MacOS. * Duration parameters can optionally take a unit, with the same syntax as duration units in VCL. Example: ``param.set default_grace 1h``. * Calls to ``VRT_CacheReqBody()`` and ``std.cache_req_body`` from outside client vcl subs now fail properly instead of triggering an assertion failure (3846_). * New ``"B"`` string for the package branch in ``VCS_String()``. For the 7.2.0 version, it would yield the 7.2 branch. * The Varnish version and branch are available in ``varnishtest`` through the ``${pkg_version}`` and ``${pkg_branch}`` macros. * New ``${topsrc}`` macro in ``varnishtest -i`` mode. * New ``process pNAME -match-text`` command in ``varnishtest`` to expect text matching a regular expression on screen. * New ``filewrite [-a]`` command in ``varnishtest`` to put or append a string into a file. * The new ``vcc_feature`` bits parameter replaces previous ``vcc_*`` boolean parameters. The latter still exist as deprecated aliases. * The ``-k`` option from ``varnishlog`` is now supported by ``varnishncsa``. * New functions ``std.now()`` and ``std.timed_call()`` in vmod_std. * New ``MAIN.shm_bytes`` counter. * A ``req.http.via`` header is set before entering ``vcl_recv``. Via headers are generated using the ``server.identity`` value. It defaults to the host name and can be turned into a pseudonym with the ``varnishd -i`` option. Via headers are appended in both directions, to work with other hops that may advertise themselves. * A ``resp.http.via`` header is no longer overwritten by varnish, but rather appended to. * The ``server.identity`` syntax is now limited to a "token" as defined in the HTTP grammar to be suitable for Via headers. * In ``varnishtest`` a Varnish instance will use its VTC instance name as its instance name (``varnishd -i``) by default for predictable Via headers in test cases. * VMOD and VEXT authors can use functions from ``vnum.h``. * Do not filter pseudo-headers as regular headers (VSV00009_ / 3830_). * The termination rules for ``WRK_BgThread()`` were relaxed to allow VMODs to use it. * ``(struct worker).handling`` has been moved to the newly introduced ``struct wrk_vpi`` and replaced by a pointer to it, as well as ``(struct vrt_ctx).handling`` has been replaced by that pointer. ``struct wrk_vpi`` is for state at the interface between VRT and VGC and, in particular, is not const as ``struct vrt_ctx`` aka ``VRT_CTX``. * Panics now contain information about VCL source files and lines. * The ``Begin`` log record has a 4th field for subtasks like ESI sub-requests. * The ``-E`` option for log utilities now works as documented, with any type of sub-task based on the ``Begin[4]`` field. 
This covers ESI like before, and sub-tasks spawned by VMODs (provided that they log the new field).

* No more ``req.http.transfer-encoding`` for ESI sub-requests.
* New ``tools/coccinelle/vcocci.sh`` refactoring script for internal use.
* The thread pool reserve is now limited to tasks that can be queued. A backend background fetch is no longer eligible for queueing. It would otherwise slow a grace hit down significantly when thread pools are saturated.
* The unused ``fetch_no_thread`` counter was renamed to ``bgfetch_no_thread`` because regular backend fetch tasks are always scheduled.
* The macros ``FEATURE()``, ``EXPERIMENT()``, ``DO_DEBUG()``, ``MGT_FEATURE()``, ``MGT_EXPERIMENT()``, ``MGT_DO_DEBUG()`` and ``MGT_VCC_FEATURE()`` now return a boolean value (``0`` or ``1``) instead of the (private) flag value.
* There is a new ``contrib/`` directory in the Varnish source tree. The first contribution is a ``varnishstatdiff`` script.
* A regression in the transport code led ``MAIN.client_req`` to be incremented for requests coming back from the waiting list; it was fixed (3841_).

.. _3830: https://github.com/varnishcache/varnish-cache/issues/3830
.. _3841: https://github.com/varnishcache/varnish-cache/pull/3841
.. _3846: https://github.com/varnishcache/varnish-cache/issues/3846
.. _VSV00009: https://varnish-cache.org/security/VSV00009.html

================================
Varnish Cache 7.1.0 (2022-03-15)
================================

* The ``cookie.format_rfc1123()`` function was renamed to ``cookie.format_date()``, and the former was retained as a deprecated alias (example below).
* The VCC file ``$Alias`` stanza has been added to support vmod alias functions/methods.
* VCC now supports alias symbols.
* There is a new ``experimental`` parameter that is identical to the ``feature`` parameter, except that it guards features that may not be considered complete or stable. An experimental feature may be promoted to a regular feature or dropped without being considered a breaking change.
* ESI includes now support the ``onerror="continue"`` attribute of ``<esi:include>`` tags. The ``+esi_include_onerror`` feature flag controls if the attribute is honored: If enabled, failure of an include stops ESI processing unless the ``onerror="continue"`` attribute was set for it. The feature flag is off by default, preserving the existing behavior to continue ESI processing despite include failures.
* The deprecated sub-argument of the ``-l`` option was removed; it is now a shorthand for the ``vsl_space`` parameter only.
* The ``-T``, ``-M`` and ``-P`` command line options can be used multiple times, instead of retaining only the last occurrence.
* The ``debug.xid`` CLI command has been extended to also set and query the VXID cache chunk size.
* The ``vtc.barrier_sync()`` VMOD function now also works in ``vcl_init``.
* The ``abort`` command in the ``logexpect`` facility of ``varnishtest`` can now be used to trigger an ``abort()`` to help debugging the vsl client library code.
* The ``vtc.vsl()`` and ``vtc.vsl_replay()`` functions have been added to the vtc vmod to generate arbitrary log lines for testing.
* The limit of the ``vsl_reclen`` parameter has been corrected.
* Varnish now closes client connections correctly when request body processing failed.
* Filter init methods of types ``vdp_init_f`` and ``vfp_init_f`` gained a ``VRT_CTX`` argument.
* The ``param.set`` CLI command accepts a ``-j`` option. In this case the JSON output is the same as ``param.show -j`` of the updated parameter.
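A small sketch of the renamed ``cookie.format_date()`` noted above; it keeps the signature of the old ``format_rfc1123()`` (a base time plus an offset), and the header being set here is only an illustration::

    import cookie;

    sub vcl_deliver {
        # Emit an HTTP date one hour from now.
        set resp.http.Expires = cookie.format_date(now, 1h);
    }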
* A new ``cc_warnings`` parameter contains a subset of the compiler flags extracted from ``cc_command``, which in turn grew new expansions:

  - ``%d``: the raw default ``cc_command``
  - ``%D``: the expanded default ``cc_command``
  - ``%w``: the ``cc_warnings`` parameter
  - ``%n``: the working directory (``-n`` option)

* For ``return(pipe)``, the backend transactions now emit a Start timestamp and both client and backend transactions emit the Process timestamp.
* ``http_IsHdr()`` is now exposed as part of the strict ABI for VMODs.
* The ``req.transport`` VCL variable has been added, which returns "HTTP/1" or "HTTP/2" as appropriate.
* The ``vtc.workspace_reserve()`` VMOD function now zeroes memory.
* Parameter aliases have been added to facilitate parameter deprecation.
* Two bugs in the catflap facility have been fixed which could trigger panics due to the state pointer not being cleared (3752_, 3755_).
* It is now possible to assign to a ``BODY`` variable either a ``STRING`` type or a ``BLOB``.
* When the ``vcl.show`` CLI command is invoked without a parameter, it now defaults to the active VCL.
* The reporting of ``logexpect`` events in ``varnishtest`` was rearranged for readability.
* Workspace debugging as enabled by the ``+workspace`` debug flag is now logged with the corresponding transaction.
* VMODs should now register and unregister fetch and delivery filters with ``VRT_AddFilter()`` and ``VRT_RemoveFilter()``.
* ``HSH_purge()`` has been rewritten to properly handle concurrent purges on the same object head.
* ``VSL_WriteOpen()``, ``varnishlog`` and ``varnishncsa`` have been changed to support writing to stdout with ``-w -`` when not in daemon mode.
* In VSL, the case that the space remaining in a buffer is close to ``vsl_reclen`` has been optimized.
* ``std.ip()`` has been changed to always return a valid (bogo ip) fallback if the fallback argument is invalid.
* New VCL variables ``{req,req_top,resp,bereq,beresp,obj}.time`` have been added to track when the respective object was born.
* ``VRT_StaticDirector()`` has been added to mark directors with VCL lifetime, to avoid the overhead of reference counting.
* Dynamic backends are now reference-counted, and VMOD authors must explicitly track assignments with ``VRT_Assign_Backend()``.
* Varnish will use libunwind by default when available at configure time; the ``--without-unwind`` configure flag can prevent this and fall back to libexecinfo to generate backtraces.
* A new ``debug.shutdown.delay`` command is available in the Varnish CLI for testing purposes.
* New utility macros ``vmin[_t]``, ``vmax[_t]`` and ``vlimit[_t]`` available in ``vdef.h``.
* The macros ``TOSTRAND(s)`` and ``TOSTRANDS(x, ...)`` have been added to create a ``struct strands *`` (intended to be used as a ``VCL_STRANDS``) from a single string ``s`` or ``x`` strings, respectively. Note that the macros create a compound literal whose scope is the enclosing block. Their value must thus only be used within the same block (it can be passed to called functions) and must not be returned or referenced for use outside the enclosing block. As before, ``VRT_AllocStrandsWS()`` or ``VRT_StrandsWS()`` must be used to create ``VCL_STRANDS`` with *task* scope for use outside the current block.
* A bug in the backend connection handling code has been fixed which could trigger an unwarranted assertion failure (3664_).
* ``std.strftime()`` has been added.
* ``Lck_CondWait()`` has lost the timeout argument and now waits forever.
``Lck_CondWaitUntil()`` and ``Lck_CondWaitTimeout()`` have been added to wait on a condition variable until some point in time or until a timeout expires, respectively.

* All mutex locks in core code have been given the ``PTHREAD_MUTEX_ERRORCHECK`` attribute.
* ``Host`` and ``Content-Length`` header checks have been moved to protocol independent code and thus implicitly extended to HTTP2.
* A potential race on busy objects has been closed.
* Use of ``ObjGetSpace()`` for synthetic objects has been fixed to support stevedores returning less space than requested (as permitted by the API).
* The ``FINI_OBJ()`` macro has been added to standardize the common pattern of zeroing a mini object and clearing a pointer to it.
* The deprecated ``vsm_space`` parameter was removed.
* The ``varnishtest`` ``err_shell`` command has been removed after having been deprecated since release 5.1.0.

.. _3755: https://github.com/varnishcache/varnish-cache/issues/3755
.. _3752: https://github.com/varnishcache/varnish-cache/issues/3752
.. _3664: https://github.com/varnishcache/varnish-cache/issues/3664

================================
Varnish Cache 7.0.1 (2021-11-23)
================================

* An assertion failure has been fixed which triggered when matching bans on non-existing headers (3706_).
* A VCL compilation issue has been fixed when calling builtin functions directly (3719_).
* It is now again possible to concatenate static strings to produce combined strings of type VCL_REGEX (3721_).
* An issue has been fixed that would cause the VCL dependency checker to incorrectly flag VCLs as dependants of other VCLs when using labels, preventing them from being discarded (3734_).
* VCLs loaded through CLI or the use of startup CLI scripts (-I option to `varnishd`) will, when no active VCL has previously been set, no longer automatically set the first VCL loaded to the active VCL. This prevents situations where it was possible to make a cold VCL the active VCL (3737_).
* There is now a `configure` build-time requirement on working SO_RCVTIMEO and SO_SNDTIMEO socket options. We no longer check whether they effectively work, so the ``SO_RCVTIMEO_WORKS`` feature check has been removed from ``varnishtest``.
* The socket option inheritance checks now correctly identify situations where UDS and TCP listening sockets behave differently, and are no longer subject to the order in which the inheritance checks happen to be executed (3732_).
* IPv6 listen endpoint address strings are now printed using brackets.

.. _3706: https://github.com/varnishcache/varnish-cache/issues/3706
.. _3719: https://github.com/varnishcache/varnish-cache/issues/3719
.. _3721: https://github.com/varnishcache/varnish-cache/issues/3726
.. _3734: https://github.com/varnishcache/varnish-cache/issues/3734
.. _3737: https://github.com/varnishcache/varnish-cache/pull/3737
.. _3732: https://github.com/varnishcache/varnish-cache/pull/3732

================================
Varnish Cache 7.0.0 (2021-09-15)
================================

* Added convenience ``vrt_null_strands`` and ``vrt_null_blob`` constants.
* New VCL flag syntax ``foo +bar -baz { ... }``, starting with ACL flags ``log``, ``pedantic`` and ``table`` (example below).
* ACLs no longer produce VSL ``VCL_acl`` records by default; this must be explicitly enabled with ``acl +log { ... }``.
* ACLs can be compiled into a table format, which runs a little bit slower, but compiles much faster for large ACLs.
* ACLs default to ``pedantic`` which is now a per-ACL feature flag.
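A small sketch of the per-ACL flags described above; the ACL name and the network numbers are illustrative::

    # Log matches against this ACL and use the table representation.
    acl trusted +log +table {
        "192.0.2.0"/24;
        ! "192.0.2.23";
    }

    sub vcl_recv {
        if (req.method == "PURGE" && client.ip ~ trusted) {
            return (purge);
        }
    }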
* New ``glob`` flag for VCL ``include`` (3193_).
* The maximum number of headers for a request or a response in ``varnishtest`` was increased to 64.
* The backend lock class from struct backend was moved to struct director and renamed accordingly.
* New ``%{sec,msec,usec,msec_frac,usec_frac}t`` formats in ``varnishncsa``.
* ``vstrerror()`` was renamed to ``VAS_errtxt()``.
* New ``varnishncsa -j`` option to format for JSON (3595_).
* To skip a test in the *presence* of a feature instead of its absence, a new ``feature !`` syntax was added to ``varnishtest``.
* Accept-Ranges headers are no longer generated for passed objects, but must either come from the backend or be created in ``vcl_deliver{}`` (3251_).
* The busyobj ``do_pass`` flag is gone in favor of ``uncacheable``.
* The objcore flag ABANDON was renamed to CANCEL.
* 'Scientific Notation' numbers like 6.62607004e-34 are no longer supported in VCL. (The preparation of RFC8941 made it clear that there is neither a reason nor any need to support scientific notation in the context of HTTP headers.)
* New ``tunnel`` command in ``varnishtest`` to gain the ability to shape traffic between two peers without having to change their implementation.
* Global VCL symbols can be defined after use (3555_).
* New ``req.hash_ignore_vary`` flag in VCL (example below).
* ``varnishtest`` can register macros backed by functions, which is the case for ``${date}`` and the brand new ``${string,[,...]}`` macro (3627_).
* Migration to pcre2 with extensive changes to the VRE API, parameters renamed to ``pcre2_match_limit`` and ``pcre2_depth_limit``, and the addition of a new ``pcre2_jit_compilation`` parameter. The ``varnishtest`` undocumented feature check ``pcre_jit`` is gone (3635_). This change is transparent at the VRT layer and only affects direct VRE consumers.
* New inverted mode in ``vtc-bisect.sh`` to find the opposite of regressions.
* The default values for ``workspace_client``, ``workspace_backend`` and ``vsl_buffer`` on 64bit systems were increased to 96kB, 96kB and 16kB, respectively (3648_).
* The deprecated ``WS_Inside()`` was replaced with ``WS_Allocated()`` and ``WS_Front()`` was removed.
* VCL header names can be quoted, for example ``req.http."valid.name"``.
* Added ``VRT_UnsetHdr()`` and removed ``vrt_magic_string_unset``.
* Removed deprecated ``STRING_LIST`` in favor of ``STRANDS``. All functions that previously took a ``STRING_LIST`` had ``const char *, ...`` arguments; they now take ``const char *, VCL_STRANDS`` arguments. The magic cookie ``vrt_magic_string_end`` is gone and ``VRT_CollectStrands()`` was renamed to ``VRT_STRANDS_string()``.
* The default value for ``thread_pool_stack`` was increased to 80kB for 64bit systems and 64kB for 32bit to accommodate the PCRE2 jit compiler.
* Removed deprecated ``VSB_new()`` and ``VSB_delete()``, which resulted in a major soname bump of libvarnishapi to 3.0.0, instead of the 2.7.0 version initially planned.
* The default workdir (the default ``-n`` argument) is now ``/var/run`` instead of ``${prefix}/var`` (3672_). Packages usually configure this to match local customs.
* The minimum ``session_workspace`` is now 384 bytes.
* Emit minimal 500 response if ``vcl_synth`` fails (3441_).
* New ``--enable-coverage`` configure flag, and renovated sanitizer setup.
* New feature checks in ``varnishtest``: ``sanitizer``, ``asan``, ``lsan``, ``msan``, ``ubsan`` and ``coverage``.
* New ``--enable-workspace-emulator`` configure flag to swap the workspace implementation with a sparse one ideal for fuzzing (3644_).
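A minimal sketch of the ``req.hash_ignore_vary`` flag mentioned above; the triggering header is illustrative::

    sub vcl_recv {
        # Deliver whatever variant is already in cache, ignoring Vary,
        # for internal prefetch requests.
        if (req.http.X-Prefetch) {
            set req.hash_ignore_vary = true;
        }
    }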
* Strict comparison of items from the HTTP grammar (3650_). * New request body h2 window handling using a buffer to avoid stalling an entire h2 session until the relevant stream starts consuming DATA frames. As a result the minimum value for ``h2_initial_window_size`` is now 65535B to avoid running out of buffer with a negative window that was simpler to not tolerate, and a new ``h2_rxbuf_storage`` parameter was added (3661_). * ``SLT_Hit`` now includes streaming progress when relevant. * The ``http_range_support`` adds consistency checks for pass transactions (3673_). * New ``VNUM_uint()`` and ``VNUM_hex()`` functions geared at token parsing. .. _3193: https://github.com/varnishcache/varnish-cache/issues/3193 .. _3251: https://github.com/varnishcache/varnish-cache/issues/3251 .. _3441: https://github.com/varnishcache/varnish-cache/issues/3441 .. _3555: https://github.com/varnishcache/varnish-cache/issues/3555 .. _3595: https://github.com/varnishcache/varnish-cache/issues/3595 .. _3627: https://github.com/varnishcache/varnish-cache/issues/3627 .. _3635: https://github.com/varnishcache/varnish-cache/issues/3635 .. _3644: https://github.com/varnishcache/varnish-cache/issues/3644 .. _3648: https://github.com/varnishcache/varnish-cache/issues/3648 .. _3650: https://github.com/varnishcache/varnish-cache/issues/3650 .. _3661: https://github.com/varnishcache/varnish-cache/issues/3661 .. _3672: https://github.com/varnishcache/varnish-cache/issues/3672 .. _3673: https://github.com/varnishcache/varnish-cache/issues/3673 ================================ Varnish Cache 6.6.0 (2021-03-15) ================================ * Body bytes accounting has been fixed to always represent the number of bodybytes moved on the wire, exclusive of protocol-specific overhead like HTTP/1 chunked encoding or HTTP/2 framing. This change affects counters like - ``MAIN.s_req_bodybytes``, - ``MAIN.s_resp_bodybytes``, - ``VBE.*.*.bereq_bodybytes`` and - ``VBE.*.*.beresp_bodybytes`` as well as the VSL records - ``ReqAcct``, - ``PipeAcct`` and - ``BereqAcct``. * ``VdpAcct`` log records have been added to output delivery filter (VDP) accounting details analogous to the existing ``VfpAcct``. Both tags are masked by default. * Many filter (VDP/VFP) related signatures have been changed: - ``vdp_init_f`` - ``vdp_fini_f`` - ``vdp_bytes_f`` - ``VDP_bytes()`` as well as ``struct vdp_entry`` and ``struct vdp_ctx`` ``VFP_Push()`` and ``VDP_Push()`` are no longer intended for VMOD use and have been removed from the API. * The VDP code is now more strict about ``VDP_END``, which must be sent down the filter chain at most once. * Core code has been changed to ensure for most cases that ``VDP_END`` gets signaled with the object's last bytes, rather than with an extra zero-data call. * Reason phrases for more HTTP Status codes have been added to core code. * Connection pooling behavior has been improved with respect to ``Connection: close`` (3400_, 3405_). * Handling of the ``Keep-Alive`` HTTP header as hop-by-hop has been fixed (3417_). * Handling of hop-by-hop headers has been fixed for HTTP/2 (3416_). * The stevedore API has been changed: - ``OBJ_ITER_FINAL`` has been renamed to ``OBJ_ITER_END`` - ``ObjExtend()`` signature has been changed to also cover the ``ObjTrimStore()`` use case and - ``ObjTrimStore()`` has been removed. 
* The ``verrno.h`` header file has been removed and merged into ``vas.h``.
* The connection close reason has been fixed to properly report ``SC_RESP_CLOSE`` / ``resp_close`` where previously only ``SC_REQ_CLOSE`` / ``req_close`` was reported.
* Unless the new ``validate_headers`` feature is disabled, all newly set headers are now validated to contain only characters allowed by RFC7230. A (runtime) VCL failure is triggered if not (3407_).
* ``VRT_ValidHdr()`` has been added for vmods to conduct the same check as the ``validate_headers`` feature, for example when headers are set by vmods using the ``cache_http.c`` functions like ``http_ForceHeader()`` from untrusted input.
* The shard director now supports reconfiguration (adding/removing backends) of several instances without any special ordering requirement.
* Calling the shard director ``.reconfigure()`` method is now optional. If not called explicitly, any shard director backend changes are applied at the end of the current task.
* Shard director ``Error`` log messages with ``(notice)`` have been turned into ``Notice`` log messages.
* All shard ``Error`` and ``Notice`` messages now use the unified prefix ``vmod_directors: shard %s``.
* In the shard director, use of parameter sets with ``resolve=NOW`` has been fixed.
* Performance of log-processing tools like ``varnishlog`` has been improved by using ``mmap()`` if possible when reading from log files.
* An assertion failure has been fixed which could be triggered when a request body was used with restarts (3433_, 3434_).
* A signal handling bug in the Varnish Utility API (VUT) has been fixed which caused log-processing utilities to perform poorly after a signal had been received (3436_).
* The ``client.identity`` variable is now accessible on the backend side.
* Client and backend finite state machine internals (``enum req_step`` and ``enum fetch_step``) have been removed from ``cache.h``.
* Three new ``Timestamp`` VSL records have been added to backend request processing:

  - The ``Process`` timestamp after ``return(deliver)`` or ``return(pass(x))`` from ``vcl_backend_response``,
  - the ``Fetch`` timestamp before a backend connection is requested and
  - the ``Connected`` timestamp when a connection to a regular backend (VBE) is established, or when a recycled connection was selected for reuse.

* The VRT backend interface has been changed:

  - ``struct vrt_endpoint`` has been added describing a UDS or TCP endpoint for a backend to connect to. Endpoints also support a preamble to be sent with every new connection.
  - This structure needs to be passed via the ``endpoint`` member of ``struct vrt_backend`` when creating backends with ``VRT_new_backend()`` or ``VRT_new_backend_clustered()``.

* ``VRT_Endpoint_Clone()`` has been added to facilitate working with endpoints.
* The variables ``bereq.is_hitpass`` and ``bereq.is_hitmiss`` have been added to the backend side matching ``req.is_hitpass`` and ``req.is_hitmiss`` on the client side (example below).
* The ``set_ip_tos()`` function from the bundled ``std`` vmod now sets the IPv6 Traffic Class (TCLASS) when used on an IPv6 connection.
* A bug has been fixed which could lead to varnish failing to start after updates due to outdated content of the ``vmod_cache`` directory (3243_).
* An issue has been addressed where using VCL with a high number of literal strings could lead to prolonged c-compiler runtimes since Varnish-Cache 6.3 (3392_).
* The ``MAIN.esi_req`` counter has been added as a statistic of the number of ESI sub requests created.
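A sketch of the new backend-side ``bereq.is_hitmiss`` / ``bereq.is_hitpass`` variables mentioned above; the header name sent to the origin is illustrative::

    sub vcl_backend_fetch {
        # Tell the origin that this fetch was triggered by a
        # hit-for-miss object.
        if (bereq.is_hitmiss) {
            set bereq.http.X-Varnish-Hitmiss = "true";
        }
    }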
* The ``vcl.discard`` CLI command can now be used to discard more than one VCL with a single command, which succeeds only if all given VCLs could be discarded (atomic behavior).
* The ``vcl.discard`` CLI command now supports glob patterns for vcl names.
* The ``vcl.deps`` CLI command has been added to output dependencies between VCLs (because of labels and ``return(vcl)`` statements).
* The ``FetchError`` log message ``Timed out reusing backend connection`` has been renamed to ``first byte timeout (reused connection)`` to clarify that it is emitted for effectively the same reason as ``first byte timeout``.
* Long strings in VCL can now also be denoted using ``""" ... """`` in addition to the existing ``{" ... "}`` (example below).
* The ``pdiff()`` function declaration has been moved from ``cache.h`` to ``vas.h``.
* The interface for private pointers in VMODs has been changed:

  - The ``free`` pointer in ``struct vmod_priv`` has been replaced with a pointer to ``struct vmod_priv_methods``, to where the pointer to the former free callback has been moved as the ``fini`` member.
  - The former free callback type has been renamed from ``vmod_priv_free_f`` to ``vmod_priv_fini_f`` and has gained a ``VRT_CTX`` argument.

* The ``MAIN.s_bgfetch`` counter has been added as a statistic on the number of background fetches issued.
* Various improvements have been made to the ``varnishtest`` facility:

  - the ``loop`` keyword now works everywhere
  - HTTP/2 logging has been improved
  - Default HTTP/2 parameters have been tweaked (3442_)
  - Varnish listen address information is now available by default in the macros ``${vNAME_addr}``, ``${vNAME_port}`` and ``${vNAME_sock}``. Macros by the names ``${vNAME_SOCKET_*}`` contain the address information for each listen socket as created with the ``-a`` argument to ``varnishd``.
  - Synchronization points for counters (VSCs) have been added as ``varnish vNAME -expect PATTERN OP PATTERN``
  - varnishtest now also works with IPv6 setups
  - ``feature ipv4`` and ``feature ipv6`` can be used to control execution of test cases which require one or the other protocol.
  - haproxy arguments can now be externally provided through the ``HAPROXY_ARGS`` variable.
  - logexpect now supports alternatives with the ``expect ? ...`` syntax and negative matches with the ``fail add ...`` and ``fail clear`` syntax.
  - The overall logexpect match expectation can now be inverted using the ``-err`` argument.
  - Numeric comparisons for HTTP headers have been added: ``-lt``, ``-le``, ``-eq``, ``-ne``, ``-ge``, ``-gt``
  - ``rxdata -some`` has been fixed.

* The ``ban_cutoff`` parameter now refers to the overall length of the ban list, including completed bans, where before only non-completed ("active") bans were counted towards ``ban_cutoff``.
* A race in the round-robin director has been fixed which could lead to backend requests failing when backends in the director were sick (3473_).
* A race in the probe management has been fixed which could lead to a panic when VCLs changed temperature in general and when ``vcl.discard`` was used in particular (3362_).
* A bug has been fixed which led to counters (VSCs) of backends from cold VCLs being presented (3358_).
* A bug in ``varnishncsa`` has been fixed which could lead to it crashing when header fields were referenced which did not exist in the processed logs (3485_).
* For failing PROXY connections, ``SessClose`` now provides more detailed information on the cause of the failure.
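To illustrate the ``""" ... """`` long-string syntax noted above; the synthetic body content is illustrative::

    sub vcl_synth {
        set resp.http.Content-Type = "text/html; charset=utf-8";
        set resp.body = """
            <!DOCTYPE html>
            <html><body>Service temporarily unavailable.</body></html>
        """;
        return (deliver);
    }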
* The ``std.ban()`` and ``std.ban_error()`` functions have been added to the ``std`` vmod, allowing VCL to check for ban errors. * Use of the ``ban()`` built-in VCL command is now deprecated. * The source tree has been reorganized with all vmods now moved to a single ``vmod`` directory. * ``vmodtool.py`` has been improved to simplify Makefiles when many VMODs are built in a single directory. * The ``VSA_getsockname()`` and ``VSA_getpeername()`` functions have been added to get address information of file descriptors. * ``varnishd`` now supports the ``-b none`` argument to start with only the builtin VCL and no backend at all (3067_). * Some corner cases of IPv6 support in ``varnishd`` have been fixed. * ``vcl_pipe {}``: ``return(synth)`` and vmod private state support have been fixed. Trying to use ``std.rollback()`` from ``vcl_pipe`` now results in VCL failure (3329_, 3330_, 3385_). * The ``bereq.xid`` variable is now also available in ``vcl_pipe {}`` * The ``VRT_priv_task_get()`` and ``VRT_priv_top_get()`` functions have been added to VRT to allow vmods to retrieve existing ``PRIV_TASK`` / ``PRIV_TOP`` private pointers without creating any. * ``varnishstat`` now avoids display errors of gauges which previously could underflow to negative values, being displayed as extremely high positive values. The ``-r`` option and the ``r`` key binding have been added to return to the previous behavior. When raw mode is active in ``varnishstat`` interactive (curses) mode, the word ``RAW`` is displayed at the right hand side in the lower status line. * The ``VSC_IsRaw()`` function has been added to ``libvarnishapi`` to query if a gauge is being returned raw or adjusted. * The ``busy_stats_rate`` feature flag has been added to ensure statistics updates (as configured using the ``thread_stats_rate`` parameter) even in scenarios where worker threads never run out of tasks and may remain forever busy. * ``ExpKill`` log (VSL) records are now masked by default. See the ``vsl_mask`` parameter. * A bug has been fixed which could lead to panics when ESI was used with ESI-aware VMODs were used because ``PRIV_TOP`` vmod private state was created on a wrong workspace (3496_). * The ``VCL_REGEX`` data type is now supported for VMODs, allowing them to use regular expression literals checked and compiled by the VCL compiler infrastructure. Consequently, the ``VRT_re_init()`` and ``VRT_re_fini()`` functions have been removed, because they are not required and their use was probably wrong anyway. * The ``filter_re``, ``keep_re`` and ``get_re`` functions from the bundled ``cookie`` vmod have been changed to take the ``VCL_REGEX`` type. This implies that their regular expression arguments now need to be literal, whereas before they could be taken from some other variable or function returning ``VCL_STRING``. Note that these functions never actually handled _dynamic_ regexen, the string passed with the first call was compiled to a regex, which was then used for the lifetime of the respective VCL. * The ``%{X}T`` format has been added to ``varnishncsa``, which generalizes ``%D`` and ``%T``, but also support milliseconds (``ms``) output. * Error handling has been fixed when vmod functions/methods with ``PRIV_TASK`` arguments were wrongly called from the backend side (3498_). * The ``varnishncsa`` ``-E`` argument to show ESI requests has been changed to imply ``-c`` (client mode). * Error handling and performance of the VSL (shared log) client code in ``libvarnishapi`` have been improved (3501_). 
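A minimal sketch of ``std.ban()`` and ``std.ban_error()`` replacing the deprecated ``ban()`` command noted above; the request method and the header carrying the pattern are illustrative, and access control is omitted::

    import std;

    sub vcl_recv {
        if (req.method == "BAN") {
            # std.ban() returns false on malformed expressions;
            # std.ban_error() then carries the reason.
            if (std.ban("obj.http.X-Tag ~ " + req.http.X-Ban-Tag)) {
                return (synth(200, "Ban added"));
            }
            return (synth(400, std.ban_error()));
        }
    }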
* ``varnishlog`` now supports the ``-u`` option to write to a file specified with ``-w`` unbuffered. * Comparisons of numbers in VSL queries have been improved to match better the behavior which is likely expected by users who have not read the documentation in all detail (3463_). * A bug in the ESI code has been fixed which could trigger a panic when no storage space was available (3502_). * The ``resp.proto`` variable is now read-only as it should have been for long. * ``VTCP_open()`` has been fixed to try all possible addresses from the resolver before giving up (3509_). This bug could cause confusing error messages (3510_). * ``VRT_synth_blob()`` and ``VRT_synth_strands()`` have been added. The latter should now be used instead of ``VRT_synth_page()``. * The ``VCL_SUB`` data type is now supported for VMODs to save references to subroutines to be called later using ``VRT_call()``. Calls from a wrong context (e.g. calling a subroutine accessing ``req`` from the backend side) and recursive calls fail the VCL. See `VMOD - Varnish Modules`_ in the Reference Manual. .. _VMOD - Varnish Modules: https://varnish-cache.org/docs/trunk/reference/vmod.html VMOD functions can also return the ``VCL_SUB`` data type for calls from VCL as in ``call vmod.returning_sub();``. * ``VRT_check_call()`` can be used to check if a ``VRT_call()`` would succeed in order to avoid the potential VCL failure in case it would not. It returns ``NULL`` if ``VRT_call()`` would make the call or an error string why not. * ``VRT_handled()`` has been added, which is now to be used instead of access to the ``handling`` member of ``VRT_CTX``. * The session close reason logging/statistics for HTTP/2 connections have been improved (3393_) * ``varnishadm`` now has the ``-p`` option to disable readline support for use in scripts and as a generic CLI connector. * A log (VSL) ``Notice`` record is now emitted whenever more than ``vary_notice`` variants are encountered in the cache for a specific hash. The new ``vary_notice`` parameter defaults to 10. * The modulus operator ``%`` has been added to VCL. * ``return(retry)`` from ``vcl_backend_error {}`` now correctly resets ``beresp.status`` and ``beresp.reason`` (3525_). * Handling of the ``gunzip`` filter with ESI has been fixed (3529_). * A bug where the ``threads_limited`` counter could be increased without reason has been fixed (3531_). * All varnish tools using the VUT library utilities for argument processing now support the ``--optstring`` argument to return a string suitable for use with ``getopts`` from shell scripts. * An issue with high CPU consumption when the maximum number of threads was reached has been fixed (2942_, 3531_) * HTTP/2 streams are now reset for filter chain (VDP) errors. * The task priority of incoming connections has been fixed. * An issue has been addressed where the watchdog facility could misfire when tasks are queued. * The builtin VCL has been reworked: VCL code has been split into small subroutines, which custom VCL can prepend custom code to. This allows for better integration of custom VCL and the built-in VCL and better reuse. .. _2942: https://github.com/varnishcache/varnish-cache/issues/2942 .. _3067: https://github.com/varnishcache/varnish-cache/issues/3067 .. _3243: https://github.com/varnishcache/varnish-cache/issues/3243 .. _3329: https://github.com/varnishcache/varnish-cache/issues/3329 .. _3330: https://github.com/varnishcache/varnish-cache/issues/3330 .. _3358: https://github.com/varnishcache/varnish-cache/issues/3358 .. 
_3362: https://github.com/varnishcache/varnish-cache/issues/3362 .. _3385: https://github.com/varnishcache/varnish-cache/issues/3385 .. _3392: https://github.com/varnishcache/varnish-cache/issues/3392 .. _3393: https://github.com/varnishcache/varnish-cache/issues/3393 .. _3400: https://github.com/varnishcache/varnish-cache/issues/3400 .. _3405: https://github.com/varnishcache/varnish-cache/issues/3405 .. _3407: https://github.com/varnishcache/varnish-cache/issues/3407 .. _3416: https://github.com/varnishcache/varnish-cache/issues/3416 .. _3417: https://github.com/varnishcache/varnish-cache/issues/3417 .. _3433: https://github.com/varnishcache/varnish-cache/issues/3433 .. _3434: https://github.com/varnishcache/varnish-cache/issues/3434 .. _3436: https://github.com/varnishcache/varnish-cache/issues/3436 .. _3442: https://github.com/varnishcache/varnish-cache/issues/3442 .. _3463: https://github.com/varnishcache/varnish-cache/issues/3463 .. _3473: https://github.com/varnishcache/varnish-cache/issues/3473 .. _3485: https://github.com/varnishcache/varnish-cache/issues/3485 .. _3496: https://github.com/varnishcache/varnish-cache/issues/3496 .. _3498: https://github.com/varnishcache/varnish-cache/issues/3498 .. _3501: https://github.com/varnishcache/varnish-cache/issues/3501 .. _3502: https://github.com/varnishcache/varnish-cache/issues/3502 .. _3509: https://github.com/varnishcache/varnish-cache/issues/3509 .. _3510: https://github.com/varnishcache/varnish-cache/issues/3510 .. _3525: https://github.com/varnishcache/varnish-cache/issues/3525 .. _3529: https://github.com/varnishcache/varnish-cache/issues/3529 .. _3531: https://github.com/varnishcache/varnish-cache/issues/3531 ================================ Varnish Cache 6.5.1 (2020-09-25) ================================ * Bump the VRT_MAJOR_VERSION from 11 to 12, to reflect the API changes that went into the 6.5.0 release. This step was forgotten for that release. ================================ Varnish Cache 6.5.0 (2020-09-15) ================================ [ABI] marks potentially breaking changes to binary compatibility. [API] marks potentially breaking changes to source compatibility (implies [ABI]). * ``varnishstat`` now has a help screen, available via the ``h`` key in curses mode * The initial ``varnishstat`` verbosity has been changed to ensure any fields specified by the ``-f`` argument are visible (2990_) * Fixed handling of out-of-workspace conditions after ``vcl_backend_response`` and ``vcl_deliver`` during filter initialization (3253_, 3241_) * ``PRIV_TOP`` is now thread-safe to support parallel ESI implementations * ``varnishstat`` JSON format (``-j`` option) has been changed: * on the top level, a ``version`` identifier has been introduced, which will be used to mark future breaking changes to the JSON formatting. It will not be used to mark changes to the counters themselves. The new ``version`` is ``1``. * All counters have been moved down one level to the ``counters`` object. * ``VSA_BuildFAP()`` has been added as a convenience function to build a ``struct suckaddr`` * Depending on the setting of the new ``vcc_acl_pedantic`` parameter, VCC now either emits a warning or fails if network numbers used in ACLs do not have an all-zero host part. For ``vcc_acl_pedantic`` off, the host part is fixed to all-zero and that fact logged with the ``ACL`` VSL tag. 
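A sketch of what the ``vcc_acl_pedantic`` parameter described above flags: the host part of the network number is not all zero, so the entry is either rejected (parameter on) or fixed up to the network address and logged under the ``ACL`` VSL tag (parameter off)::

    acl example {
        # 192.0.2.7/24 has a non-zero host part; write 192.0.2.0/24 instead.
        "192.0.2.7"/24;
    }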
* Fixed error handling during object creation after ``vcl_backend_response`` (3273_) * ``obj.can_esi`` has been added to identify if the response can be ESI processed (3002_) * ``resp.filters`` now contains a correct value when the auto-determined filter list is read (3002_) * It is now a VCL (runtime) error to write to ``resp.do_*`` and ``beresp.do_*`` fields which determine the filter list after setting ``resp.filters`` and ``beresp.filters``, respectively * Behavior for 304 responses was changed not to update the ``Content-Encoding`` response header of the stored object. * [ABI] ``struct vfp_entry`` and ``struct vdp_ctx`` changed * [API] VSB_QUOTE_GLOB, which was prematurely added to 6.4, has been removed again. * [API] Add ``VDP_END`` action for delivery processors, which has to be sent with or after the last buffer. * Respect the administrative health for "real" (VBE) backends (3299_) * Fixed handling of illegal (internal) four-digit response codes and with HTTP/2 (3301_) * Fixed backend connection pooling of closed connections (3266_) * Added the ``.resolve`` method for the ``BACKEND`` type to resolve (determine the "real" backend) a director. * Improved ``vmodtool`` support for out-of-tree builds * Added ``VJ_unlink()`` and ``VJ_rmdir()`` jail functions * Fixed workdir cleanup (3307_) * Added ``JAIL_MASTER_SYSTEM`` jail level * The Varnish Jail (least privileges) code for Solaris has been largely rewritten. It now reduces privileges even further and thus should improve the security of Varnish on Solaris even more. * The Varnish Jail for Solaris now accepts an optional ``worker=`` argument which allows to extend the effective privilege set of the worker process. * The shard director and shard director parameter objects should now work in ``vcl_pipe {}`` like in ``vcl_backend_* {}`` subs. * For a failure in ``vcl_recv {}``, the VCL state engine now returns right after return from that subroutine. (3303_) * The shard director now supports weights by scaling the number of replicas of each backend on the consistent hashing ring * Fixed a race in the cache expiry code which could lead to a panic (2999_) * Added ``VRE_quote()`` to facilitate building literal string matches with regular expressions. * The ``BackendReuse`` VSL (log) tag has been retired and replaced with ``BackendClose``, which has been changed to contain either ``close`` or ``recycle`` to signify whether the connection was closed or returned to a pool for later reuse. * ``BackendOpen`` VSL entries have been changed to contain ``reuse`` or ``connect`` in the last column to signify whether the connection was reused from a pool or newly opened. * ``std.rollback()`` of backend requests with ``return(retry)`` has been fixed (3353_) * ``FetchError`` logs now differentiate between ``No backend`` and "none resolved" as ``Director %s returned no backend`` * Added ``VRT_DirectorResolve()`` to resolve a director * Improved VCC handling of symbols and, in particular, type methods * Fixed use of the shard director from ``vcl_pipe {}`` (3361_) * Handle recursive use of vcl ``include`` (3360_) * VCL: Added native support for BLOBs in structured fields notation (``::``) * Fixed handling of the ``Connection:`` header when multiple instances of the named headers existed. * Added support for naming ``PRIV_`` arguments to vmod methods/functions * The varnish binary heap implementation has been renamed to use the ``VBH_`` prefix, complemented with a destructor and added to header files for use with vmods (via include of ``vbh.h``). 
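A minimal sketch of the ``.resolve`` method for the ``BACKEND`` type mentioned above, pinning a backend request to whatever concrete backend a director resolves to at that moment; treat the exact call site as an assumption to verify against the VCL reference::

    sub vcl_backend_fetch {
        # Resolve the director once and stick to the returned backend.
        set bereq.backend = bereq.backend.resolve();
    }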
* A bug in ``vmod_blob`` for base64 decoding with a ``length`` argument and non-padding decoding has been fixed (3378_) * Added ``VRT_BLOB_string()`` to ``vrt.h`` * VSB support for dynamic vs. static allocations has been changed: For dynamic allocations use:: VSB_new_auto() + VSB_destroy() For preexisting buffers use:: VSB_init() + VSB_fini() ``VSB_new()`` + ``VSB_delete()`` are now deprecated. * ``std.blobread()`` has been added * New ``MAIN.beresp_uncacheable`` and ``MAIN.beresp_shortlived`` counters have been added. * The ``I``, ``X`` and ``R`` arguments have been added to the VSC API and ``varnishstat`` for inclusion, exclusion and required glob patterns on the statistic field names. (3394_) * Added the missing ``VSC_OPT_f`` macro and the new ``VSC_OPT_I`` and ``VSC_OPT_X`` to libvarnishapi headers. * Added ``-I`` and ``-X`` options to ``varnishstat``. * Overhaul of the workspace API * The previously deprecated ``WS_Reserve()`` has been removed * The signature of ``WS_Printf()`` has been changed to return ``const char *`` instead of ``void *`` (we do not consider this a breaking change). * Add ``WS_ReservationSize()`` * ``WS_Front()`` is now deprecated and replaced by ``WS_Reservation()`` * Handle a workspace overflow in ``VRY_Validate()`` (3319_) * Fixed the backend probe ``.timeout`` handling for "dripping" responses (3402_) * New ``VARNISH_VMODS_GENERATED()`` macro in ``varnish.m4``. * Prevent pooling of a ``Connection: close`` backend response. When this header is present, be it sent by the backend or added in ``vcl_backend_response {}``, varnish closes the connection after the current request. (3400_) .. _2990: https://github.com/varnishcache/varnish-cache/issues/2990 .. _2999: https://github.com/varnishcache/varnish-cache/issues/2999 .. _3002: https://github.com/varnishcache/varnish-cache/issues/3002 .. _3241: https://github.com/varnishcache/varnish-cache/issues/3241 .. _3253: https://github.com/varnishcache/varnish-cache/issues/3253 .. _3266: https://github.com/varnishcache/varnish-cache/issues/3266 .. _3273: https://github.com/varnishcache/varnish-cache/issues/3273 .. _3299: https://github.com/varnishcache/varnish-cache/issues/3299 .. _3301: https://github.com/varnishcache/varnish-cache/issues/3301 .. _3303: https://github.com/varnishcache/varnish-cache/issues/3303 .. _3307: https://github.com/varnishcache/varnish-cache/issues/3307 .. _3319: https://github.com/varnishcache/varnish-cache/issues/3319 .. _3353: https://github.com/varnishcache/varnish-cache/issues/3353 .. _3360: https://github.com/varnishcache/varnish-cache/issues/3360 .. _3361: https://github.com/varnishcache/varnish-cache/issues/3361 .. _3378: https://github.com/varnishcache/varnish-cache/issues/3378 .. _3394: https://github.com/varnishcache/varnish-cache/issues/3394 .. _3400: https://github.com/varnishcache/varnish-cache/issues/3400 .. _3402: https://github.com/varnishcache/varnish-cache/issues/3402 ================================ Varnish Cache 6.4.0 (2020-03-16) ================================ * The ``MAIN.sess_drop`` counter is gone. * New configure switch: --with-unwind. Alpine linux appears to offer a ``libexecinfo`` implementation that crashes when called by Varnish, this offers the alternative of using ``libunwind`` instead. * backend ``none`` was added for "no backend". 
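A minimal sketch of the new ``none`` backend from the 6.4.0 list above, for configurations that start without any usable backend (for example when real backends are only supplied later through labels or separate VCLs)::

    vcl 4.1;

    # No address attributes at all; any fetch attempted against it fails.
    backend default none;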
* ``std.rollback(bereq)`` is now safe to use, fixed bug 3009_ * Fixed ``varnishstat``, ``varnishtop``, ``varnishhist`` and ``varnishadm`` handling INT, TERM and HUP signals (bugs 3088_ and 3229_) * The hash algorithm of the ``hash`` director was changed, so backend selection will change once only when upgrading. Users of the ``hash`` director are advised to consider using the ``shard`` director, which, amongst other advantages, offers more stable backend selection through consistent hashing. * Log records can safely have empty fields or fields containing blanks if they are delimited by "double quotes". This was applied to ``SessError`` and ``Backend_health``. * It is now possible for VMOD authors to customize the connection pooling of a dynamic backend. A hash is now computed to determine uniqueness and a backend declaration can contribute arbitrary data to influence the pool. * The option ``varnishtest -W`` is gone, the same can be achieved with ``varnishtest -p debug=+witness``. A ``witness.sh`` script is available in the source tree to generate a graphviz dot file and detect potential lock cycles from the test logs. * The ``Process`` timestamp for ``vcl_synth {}`` was wrongly issued before the VCL subroutine, now it gets emitted after VCL returns for consistency with ``vcl_deliver {}``. * Latencies for newly created worker threads to start work on congested systems have been improved. * ``VRB_Iterate()`` signature has changed * ``VRT_fail()`` now also works from director code * Deliberately closing backend requests through ``return(abandon)``, ``return(fail)`` or ``return(error)`` is no longer accounted as a fetch failure * Fixed a bug which could cause probes not to run * The ``if-range`` header is now handled, allowing clients to conditionally request a range based on a date or an ETag. * Introduced ``struct reqtop`` to hold information on the ESI top request and ``PRIV_TOP``, fixed regression 3019_ * Allow numerical expressions in VCL to be negative / negated * Add vi-style CTRL-f / CTRL-b for page down/up to interactive varnishstat * Fixed wrong handling of an out-of-workspace condition in the proxy vmod and in the workspace allocator, bug 3131_ * Raised the minimum for the ``vcl_cooldown`` parameter to 1s to fix bug 3135_ * Improved creation of additional threads when none are available * Fixed a race between director creation and the ``backend.list`` CLI command - see bug 3094_ * Added error handling to avoid panics for workspace overflows during session attribute allocation - bug 3145_ * Overloaded the ``+=`` operator to also append to headers * Fixed set ``*.body`` commands. * Fixed status for truncated CLI responses, bug 3038_ * New or improved Coccinelle semantic patches that may be useful for VMOD or utilities authors. * Output VCC warnings also for VCLs loaded via the ``varnishd -f`` option, see bug 3160_ * Improved fetch error handling when stale objects are present in cache, see bug 3089_ * Added a ``Notice`` VSL tag (used for ``varnishlog`` logging) * Always refer to ``sub`` as subroutine in the documentation and error messages to avoid confusion with other terms. * New ``pid`` command in the Varnish CLI, to get the master and optionally cache process PIDs, for example from ``varnishadm``. * Fixed a race that could result in a partial response being served in its entirety when it is also compressed with gzip. * Fixed session close reason reporting and accounting, added ``rx_close_idle`` counter for separate accounting when ``timeout_idle`` is reached.
Also, ``send_timeout`` is no longer reported as "remote closed". * Fixed handling of request bodies for backend retries * Fix deadlocks when the maximum number of threads has been reached, in particular with http/2, see 2418_ * Add more vcl control over timeouts with ``sess.timeout_linger``, ``sess.send_timeout`` and ``sess.idle_send_timeout`` * Fix panics due to missing EINVAL handling on MacOS, see 1853_ * Added ``VSLs()`` and ``VSLbs()`` functions for logging ``STRANDS`` to VSL * Fixed cases where a workspace overflow would not result in a VCL failure, see 3194_ * Added ``WS_VSB_new()`` / ``WS_VSB_finish()`` for VSBs on workspaces * Imported ``vmod_cookie`` from `varnish_modules`_ The previously deprecated function ``cookie.filter_except()`` has been removed during import. It was replaced by ``cookie.keep()`` * ``body_status`` and ``req_body_status`` have been collapsed into one type. In particular, the ``REQ_BODY_*`` enums now have been replaced with ``BS_*``. .. mention VSB_QUOTE_GLOB ? * Fixed an old regression of the ``Age:`` header for passes, see bug 3221_ * Added ``VRT_AllocStrandsWS()`` as a utility function to allocate STRANDS on a workspace. * Reduced compile time of ``vcl_init{}`` / ``vcl_fini{}`` with gcc, added ``v_dont_optimize`` attribute macro * Fixed a case where ``send_timeout`` would have no effect when streaming from a backend fetch, see bug 3189_ *NOTE* Users upgrading varnish should re-check ``send_timeout`` with respect to long pass and streaming fetches and watch out for increased session close rates. * Added ``VSB_tofile()`` to ``libvarnishapi``, see 3238_ .. _1853: https://github.com/varnishcache/varnish-cache/issues/1853 .. _2418: https://github.com/varnishcache/varnish-cache/issues/2418 .. _3009: https://github.com/varnishcache/varnish-cache/issues/3009 .. _3019: https://github.com/varnishcache/varnish-cache/issues/3019 .. _3038: https://github.com/varnishcache/varnish-cache/issues/3038 .. _3088: https://github.com/varnishcache/varnish-cache/issues/3088 .. _3089: https://github.com/varnishcache/varnish-cache/issues/3089 .. _3094: https://github.com/varnishcache/varnish-cache/issues/3094 .. _3131: https://github.com/varnishcache/varnish-cache/issues/3131 .. _3135: https://github.com/varnishcache/varnish-cache/issues/3135 .. _3145: https://github.com/varnishcache/varnish-cache/issues/3145 .. _3160: https://github.com/varnishcache/varnish-cache/issues/3160 .. _3189: https://github.com/varnishcache/varnish-cache/issues/3189 .. _3194: https://github.com/varnishcache/varnish-cache/issues/3194 .. _3221: https://github.com/varnishcache/varnish-cache/issues/3221 .. _3229: https://github.com/varnishcache/varnish-cache/issues/3229 .. _3238: https://github.com/varnishcache/varnish-cache/issues/3238 .. _varnish_modules: https://github.com/varnish/varnish-modules ================================ Varnish Cache 6.3.0 (2019-09-15) ================================ In addition to a significant number of bug fixes, these are the most important changes in 6.3: * The Host: header is folded to lower-case in the builtin_vcl. * Improved performance of shared memory statistics counters. 
* Synthetic objects created from ``vcl_backend_error {}`` now replace existing stale objects as ordinary backend fetches would, unless: - abandoning the bereq or - leaving ``vcl_backend_error {}`` with ``return (deliver)`` and ``beresp.ttl == 0s`` or - there is a waitinglist on the object, in which case, by default, the synthetic object is created with ``ttl = 1s`` / ``grace = 5s`` / ``keep = 5s`` to avoid hammering on failing backends (note this is existing behavior). * Retired the ``BackendStart`` log tag - ``BackendOpen`` contains all the information from it APIs / VMODs ------------ * ``WS_Reserve()`` is now deprecated and any use should trigger a compiler warning. It is to be replaced by - ``WS_ReserveAll()`` to reserve all of the remaining workspace It will always leave the workspace reserved even if 0 bytes are available, so it must always be followed by a call to ``WS_Release()`` - ``WS_ReserveSize()`` to reserve a fixed amount. It will only leave the workspace reserved if the reservation request could be fulfilled. We provide a script to help automate this change in the ``tools/coccinelle`` subdirectory of the source tree. * The RST references generated by ``vmodtool.py`` have been changed to better match the VCL syntax and avoid overhead where references are used. The new scheme for a vmod called *name* is: * ``$Function``: *name*\ .\ *function*\ () * ``$Object`` constructor: *name*\ .\ *object*\ () * ``$Method``: x\ *object*\ .\ *method*\ () To illustrate, the old references:: :ref:`vmod_name.function` :ref:`vmod_name.obj` :ref:`vmod_name.obj.method` now are renamed to:: :ref:`name.function()` :ref:`name.obj()` :ref:`xobj.method()` ``tools/vmod_ref_rename.sh`` is provided to automate this task ================================ Varnish Cache 6.2.0 (2019-03-15) ================================ * Extend JSON support in the CLI (2783_) * Improve accuracy of statistics (VSC) * In ``Error: out of workspace`` log entries, the workspace name is now reported in lowercase * Adjust code generator python tools to python 3 and prefer python 3 over python 2 where available * Added a thread pool watchdog which will restart the worker process if scheduling tasks onto worker threads appears stuck. The new parameter ``thread_pool_watchdog`` configures it. (2418_) * Changed ``ExpKill`` log tags to emit microsecond-precision timestamps instead of nanoseconds (2792_) * Changed the default of the ``thread_pool_watchdog`` parameter to 60 seconds to match the ``cli_timeout`` default * VSB quoted output has been unified to three-digit octal, VSB_QUOTE_ESCHEX has been added to prefer hex over octal quoting * Retired long deprecated parameters (VIP16_). Replacement mapping is: ``shm_reclen`` -> ``vsl_reclen`` ``vcl_dir`` -> ``vcl_path`` ``vmod_dir`` -> ``vmod_path`` * The width of the columns of the ``backend.list`` cli command output is now dynamic. For best forward compatibility, we recommend that scripts parse JSON output as obtained using the ``-j`` option. See release notes for details. * The format of the ``backend.list -j`` (JSON) cli command output has changed. See release notes for details. * The undocumented ``-v`` option to the ``backend.list`` cli command has been removed * Changed the formatting of the ``vcl.list`` command from:: status state/temperature busy name [labelinfo] to:: status state temperature busy name [<-|->] [info] Column width is now dynamic. Field values remain unchanged except for the label information, see varnish-cli(7) for details.
* The ban facility has been extended by bans access to obj.ttl, obj.age, obj.grace and obj.keep and additional inequality operators. * Many cache lookup optimizations. * Display the VCL syntax during a panic. * Update to the VCL diagrams to include hit-for-miss. VCL --- * Added ``req.is_hitmiss`` and ``req.is_hitpass`` (2743_) bundled vmods ------------- * Added ``directors.lookup()`` bundled tools ------------- * Improved varnish log client performance (2788_) * For ``varnishtest -L``, also keep VCL C source files * Add ``param.reset`` command to ``varnishadm`` * Add VSL rate limiting (2837_) This adds rate limiting to varnishncsa and varnishlog. * Make it possible to change ``varnishstat`` update rate. (2741_) C APIs (for vmod and utility authors) ------------------------------------- * ``libvarnish``: ``VRT_VSA_GetPtr`` renamed to ``VSA_GetPtr`` * Included ``vtree.h`` in the distribution for vmods and renamed the red/black tree macros from ``VRB_*`` to ``VRBT_*`` to disambiguate from the acronym for Varnish Request Body. Changed the internal organisation of dynamic PRIVs (``PRIV_TASK``, ``PRIV_TOP``) from a list to a red/black tree for performance. (2813_) * Vmod developers are advised that anything returned by a vmod function/method is assumed to be immutable. In other words, a vmod `must not` modify any data which was previously returned. * Tolerate null IP addresses for ACL matches. * Added ``vstrerror()`` as a safe wrapper for ``strerror()`` to avoid a NULL pointer dereference under rare conditions where the latter could return NULL. (2815_) * Varnish-based tools using the VUT interface should now consider using the ``VUT_Usage()`` function for consistency * The name of the `event_function` callback for VCL events in vmods is now prefixed by `$Prefix`\ ``_`` if `$Prefix` is defined in the ``.vcc`` file, or ``vmod_`` by default. So, for example, with ``$Event foo`` and no `$Prefix`, the event function will be called ``vmod_foo`` and with ``$Prefix bar`` it will be called ``bar_foo``. * In the `vmodtool`\ -generated ReStructuredText documentation, anchors have been renamed * from ``obj_``\ `class` to `vmodname`\ ``.``\ `class` for constructors and * from ``func_``\ `class` to `vmodname`\ ``.``\ `function` for functions and * from ``func_``\ `class` to `vmodname`\ ``.``\ `class`\ ``.``\ `method` for methods, respectively. In short, the anchor is now named equal to VCL syntax for constructors and functions and similarly to VCL syntax for methods. * VRT API has been updated to 9.0 * ``HTTP_Copy()`` was removed, ``HTTP_Dup()`` and ``HTTP_Clone()`` were added * Previously, ``VCL_BLOB`` was implemented as ``struct vmod_priv``, which had the following shortcomings: * blobs are immutable, but that was not reflected by the ``priv`` pointer * the existence of a free pointer suggested automatic memory management, which never existed and will not exist for blobs. The ``VCL_BLOB`` type is now implemented as ``struct vrt_blob``, with the ``blob`` member replacing the former ``priv`` pointer and the ``free`` pointer removed. A ``type`` member was added for lightweight type checking similar to the miniobject ``magic`` member, but in contrast to it, ``type`` should never be asserted upon. ``VRT_blob()`` was updated accordingly. * ``req->req_bodybytes`` was removed. Replacement code snippet:: AZ(ObjGetU64(req->wrk, req->body_oc, OA_LEN, &u)); * ``VRT_SetHealth()`` has been removed and ``VRT_SetChanged()`` added. ``VRT_LookupDirector()`` (only to be called from CLI contexts) has been added.
See release notes for details * vmodtool has been changed significantly to avoid various name clashes. Rather than using literal prefixes/suffixes, vmod authors should now (and might have to for making existing code continue to compile) use the following macros * ``VPFX(name)`` to prepend the vmod prefix (``vmod_`` by default) * ``VARGS(name)`` as the name of a function/method's argument struct, e.g.:: VCL_VOID vmod_test(VRT_CTX, struct VARGS(test) *args) { ... * ``VENUM(name)`` to access the enum by the name `name` Fixed bugs ---------- * Fixed ``varnishhist`` display error (2780_) * Fix ``varnishstat -f`` in curses mode (interactively, without ``-1``, 2787_) * Handle an out-of-workspace condition in HTTP/2 delivery more gracefully (2589_) * Fixed regression introduced just before 6.1.0 release which caused an unnecessary incompatibility with VSL files written by previous versions. (2790_) * Fix warmup/rampup of the shard director (2823_) * Fix VRT_priv_task for calls from vcl_pipe {} (2820_) * Fix assigning == (2809_) * Fix vmod object constructor documentation in the ``vmodtool.py``-generated RST files * Fix some stats metrics (vsc) which were wrongly marked as _gauge_ * Fix ``varnishd -I`` (2782_) * Add error handling for STV_NewObject() (2831_) * Fix VRT_fail for 'if'/'elseif' conditional expressions (2840_) .. _2418: https://github.com/varnishcache/varnish-cache/issues/2418 .. _2589: https://github.com/varnishcache/varnish-cache/issues/2589 .. _2741: https://github.com/varnishcache/varnish-cache/pull/2741 .. _2743: https://github.com/varnishcache/varnish-cache/issues/2743 .. _2780: https://github.com/varnishcache/varnish-cache/issues/2780 .. _2782: https://github.com/varnishcache/varnish-cache/issues/2782 .. _2783: https://github.com/varnishcache/varnish-cache/pull/2783 .. _2787: https://github.com/varnishcache/varnish-cache/issues/2787 .. _2788: https://github.com/varnishcache/varnish-cache/issues/2788 .. _2790: https://github.com/varnishcache/varnish-cache/issues/2790 .. _2792: https://github.com/varnishcache/varnish-cache/pull/2792 .. _2809: https://github.com/varnishcache/varnish-cache/issues/2809 .. _2813: https://github.com/varnishcache/varnish-cache/pull/2813 .. _2815: https://github.com/varnishcache/varnish-cache/issues/2815 .. _2820: https://github.com/varnishcache/varnish-cache/issues/2820 .. _2823: https://github.com/varnishcache/varnish-cache/issues/2823 .. _2831: https://github.com/varnishcache/varnish-cache/issues/2831 .. _2837: https://github.com/varnishcache/varnish-cache/pull/2837 .. _2840: https://github.com/varnishcache/varnish-cache/issues/2840 .. _VIP16: https://github.com/varnishcache/varnish-cache/wiki/VIP16%3A-Retire-parameters-aliases ================================ Varnish Cache 6.1.0 (2018-09-17) ================================ * Added -p max_vcl and -p max_vcl_handling for warnings/errors when there are too many undiscarded VCL instances. (2713_) * ``Content-Length`` header is not rewritten in response to a HEAD request, allowing responses to HEAD requests to be cached independently from GET responses. .. _2713: https://github.com/varnishcache/varnish-cache/issues/2713 VCL --- * ``return(fail("mumble"))`` can have a string argument that is emitted by VCC as an error message if the VCL load fails due to the return. (2694_) * Improved VCC error messages (2696_) * Fixed ``obj.hits`` in ``vcl_hit`` (had always been 0) (2746_) * req.ttl is fully supported again .. _2746: https://github.com/varnishcache/varnish-cache/issues/2746 ..
_2696: https://github.com/varnishcache/varnish-cache/issues/2696 .. _2694: https://github.com/varnishcache/varnish-cache/issues/2694 bundled tools ------------- * ``varnishhist``: Improved test coverage * ``varnishtest``: Added haproxy CLI send/expect facility C APIs (for vmod and utility authors) ------------------------------------- * libvarnishapi so version bumped to 2.0.0 (2718_) * For VMOD methods/functions with PRIV_TASK or PRIV_TOP arguments, the struct vrt_priv is allocated on the appropriate workspace. In the out-of-workspace condition, VCL failure is invoked, and the VMOD method/function is not called. (2708_) * Improved support for the VCL STRANDS type, VMOD blob refactored to use STRANDS (2745_) .. _2718: https://github.com/varnishcache/varnish-cache/pull/2718 .. _2745: https://github.com/varnishcache/varnish-cache/issues/2745 .. _2708: https://github.com/varnishcache/varnish-cache/issues/2708 Fixed bugs ---------- * A series of bug fixes related to excessive object accumulation and Transient storage use in the hit-for-miss case (2760_, 2754_, 2654_, 2763_) * A series of fixes related to Python and the vmodtool (2761_, 2759_, 2742_) * UB in varnishhist (2773_) * Allow to not have randomness in file_id (2436_) * b64.vtc unstable (2753_) * VCL_Poll ctx scope (2749_) .. _2436: https://github.com/varnishcache/varnish-cache/issues/2436 .. _2654: https://github.com/varnishcache/varnish-cache/issues/2654 .. _2742: https://github.com/varnishcache/varnish-cache/issues/2742 .. _2749: https://github.com/varnishcache/varnish-cache/issues/2749 .. _2753: https://github.com/varnishcache/varnish-cache/issues/2753 .. _2754: https://github.com/varnishcache/varnish-cache/issues/2754 .. _2759: https://github.com/varnishcache/varnish-cache/pull/2759 .. _2760: https://github.com/varnishcache/varnish-cache/pull/2760 .. _2761: https://github.com/varnishcache/varnish-cache/issues/2761 .. _2763: https://github.com/varnishcache/varnish-cache/issues/2763 .. _2773: https://github.com/varnishcache/varnish-cache/issues/2773 ================================ Varnish Cache 6.0.1 (2018-08-29) ================================ * Added std.fnmatch() (2737_) * The variable req.grace is back. (2705_) * Importing the same VMOD multiple times is now allowed, if the file_id is identical. .. _2705: https://github.com/varnishcache/varnish-cache/pull/2705 .. _2737: https://github.com/varnishcache/varnish-cache/pull/2737 varnishstat ----------- * The counters * ``sess_fail_econnaborted`` * ``sess_fail_eintr`` * ``sess_fail_emfile`` * ``sess_fail_ebadf`` * ``sess_fail_enomem`` * ``sess_fail_other`` now break down the detailed reason for session accept failures, the sum of which continues to be counted in ``sess_fail``. VCL and bundled VMODs --------------------- * VMOD unix now supports the ``getpeerucred(3)`` case. bundled tools ------------- * ``varnishhist``: The format of the ``-P`` argument has been changed for custom profile definitions to also contain a prefix to match the tag against. * ``varnishtest``: syslog instances now have to start with a capital S. Fixed bugs which may influence VCL behavior -------------------------------------------- * When an object is out of grace but in keep, the client context goes straight to vcl_miss instead of vcl_hit. The documentation has been updated accordingly. (2705_) Fixed bugs ---------- * Several H2 bugs (2285_, 2572_, 2623_, 2624_, 2679_, 2690_, 2693_) * Make large integers work in VCL. 
(2603_) * Print usage on unknown or missing arguments (2608_) * Assert error in VPX_Send_Proxy() with proxy backends in pipe mode (2613_) * Holddown times for certain backend connection errors (2622_) * Enforce Host requirement for HTTP/1.1 requests (2631_) * Introduction of '-' CLI prefix allowed empty commands to sneak through. (2647_) * VUT apps can be stopped cleanly via vtc process -stop (2649_, 2650_) * VUT apps fail gracefully when removing a PID file fails * varnishd startup log should mention version (2661_) * In curses mode, always filter in the counters necessary for the header lines. (2678_) * Assert error in ban_lurker_getfirst() (2681_) * Missing command entries in varnishadm help menu (2682_) * Handle string literal concatenation correctly (2685_) * varnishtop -1 does not work as documented (2686_) * Handle sigbus like sigsegv (2693_) * Panic on return (retry) of a conditional fetch (2700_) * Wrong turn at cache/cache_backend_probe.c:255: Unknown family (2702_, 2726_) * VCL failure causes TASK_PRIV reference on reset workspace (2706_) * Accurate ban statistics except for a few remaining corner cases (2716_) * Assert error in vca_make_session() (2719_) * Assert error in vca_tcp_opt_set() (2722_) * VCL compiling error on parenthesis (2727_) * Assert error in HTC_RxPipeline() (2731_) .. _2285: https://github.com/varnishcache/varnish-cache/issues/2285 .. _2572: https://github.com/varnishcache/varnish-cache/issues/2572 .. _2603: https://github.com/varnishcache/varnish-cache/issues/2603 .. _2608: https://github.com/varnishcache/varnish-cache/issues/2608 .. _2613: https://github.com/varnishcache/varnish-cache/issues/2613 .. _2622: https://github.com/varnishcache/varnish-cache/issues/2622 .. _2623: https://github.com/varnishcache/varnish-cache/issues/2623 .. _2624: https://github.com/varnishcache/varnish-cache/issues/2624 .. _2631: https://github.com/varnishcache/varnish-cache/issues/2631 .. _2647: https://github.com/varnishcache/varnish-cache/issues/2647 .. _2649: https://github.com/varnishcache/varnish-cache/issues/2649 .. _2650: https://github.com/varnishcache/varnish-cache/pull/2650 .. _2651: https://github.com/varnishcache/varnish-cache/pull/2651 .. _2661: https://github.com/varnishcache/varnish-cache/issues/2661 .. _2678: https://github.com/varnishcache/varnish-cache/issues/2678 .. _2679: https://github.com/varnishcache/varnish-cache/issues/2679 .. _2681: https://github.com/varnishcache/varnish-cache/issues/2681 .. _2682: https://github.com/varnishcache/varnish-cache/issues/2682 .. _2685: https://github.com/varnishcache/varnish-cache/issues/2685 .. _2686: https://github.com/varnishcache/varnish-cache/issues/2686 .. _2690: https://github.com/varnishcache/varnish-cache/issues/2690 .. _2693: https://github.com/varnishcache/varnish-cache/issues/2693 .. _2695: https://github.com/varnishcache/varnish-cache/issues/2695 .. _2700: https://github.com/varnishcache/varnish-cache/issues/2700 .. _2702: https://github.com/varnishcache/varnish-cache/issues/2702 .. _2706: https://github.com/varnishcache/varnish-cache/issues/2706 .. _2716: https://github.com/varnishcache/varnish-cache/issues/2716 .. _2719: https://github.com/varnishcache/varnish-cache/issues/2719 .. _2722: https://github.com/varnishcache/varnish-cache/issues/2722 .. _2726: https://github.com/varnishcache/varnish-cache/pull/2726 .. _2727: https://github.com/varnishcache/varnish-cache/issues/2727 .. 
_2731: https://github.com/varnishcache/varnish-cache/issues/2731 ================================ Varnish Cache 6.0.0 (2018-03-15) ================================ Usage ----- * Fixed implementation of the ``max_restarts`` limit: It used to be one less than the number of allowed restarts, it now is the number of ``return(restart)`` calls per request. * The ``cli_buffer`` parameter has been removed * Added back ``umem`` storage for Solaris descendants * The new storage backend type (stevedore) ``default`` now resolves to either ``umem`` (where available) or ``malloc``. * Since varnish 4.1, the thread workspace as configured by ``workspace_thread`` was not used as documented, delivery also used the client workspace. We are now taking delivery IO vectors from the thread workspace, so the parameter documentation is in sync with reality again. Users who need to minimize memory footprint might consider decreasing ``workspace_client`` by ``workspace_thread``. * The new parameter ``esi_iovs`` configures the amount of IO vectors used during ESI delivery. It should not be tuned unless advised by a developer. * Support Unix domain sockets for the ``-a`` and ``-b`` command-line arguments, and for backend declarations. This requires VCL >= 4.1. VCL and bundled VMODs --------------------- * ``return (fetch)`` is no longer allowed in ``vcl_hit {}``, use ``return (miss)`` instead. Note that ``return (fetch)`` has been deprecated since 4.0. * Fix behaviour of restarts to how it was originally intended: Restarts now leave all the request properties in place except for ``req.restarts`` and ``req.xid``, which need to change by design. * ``req.storage``, ``req.hash_ignore_busy`` and ``req.hash_always_miss`` are now accessible from all of the client side subs, not just ``vcl_recv{}`` * ``obj.storage`` is now available in ``vcl_hit{}`` and ``vcl_deliver{}``. * Removed ``beresp.storage_hint`` for VCL 4.1 (was deprecated since Varnish 5.1) For VCL 4.0, compatibility is preserved, but the implementation is changed slightly: ``beresp.storage_hint`` is now referring to the same internal data structure as ``beresp.storage``. In particular, it was previously possible to set ``beresp.storage_hint`` to an invalid storage name and later retrieve it back. Doing so will now yield the last successfully set stevedore or the undefined (``NULL``) string. * IP-valued elements of VCL are equivalent to ``0.0.0.0:0`` when the connection in question was addressed as a UDS. This is implemented with the ``bogo_ip`` in ``vsa.c``. * ``beresp.backend.ip`` is retired as of VCL 4.1. * workspace overflows in ``std.log()`` now trigger a VCL failure. * workspace overflows in ``std.syslog()`` are ignored. * added ``return(restart)`` from ``vcl_recv{}``. * The ``alg`` argument of the ``shard`` director ``.reconfigure()`` method has been removed - the consistent hashing ring is now always generated using the last 32 bits of a SHA256 hash of ``"ident%d"`` as with ``alg=SHA256`` or the default. We believe that the other algorithms did not yield sufficiently dispersed placement of backends on the consistent hashing ring and thus retire this option without replacement. Users of ``.reconfigure(alg=CRC32)`` or ``.reconfigure(alg=RS)`` be advised that when upgrading and removing the ``alg`` argument, consistent hashing values for all backends will change once and only once. 
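As a small illustration of the Unix domain socket support mentioned in the Usage list above, a minimal sketch of a backend declared over a UDS path (the path is purely illustrative); note that this requires VCL >= 4.1::

    vcl 4.1;

    backend app {
        .path = "/var/run/app.sock";   # Unix domain socket instead of .host/.port
    }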
* The ``alg`` argument of the ``shard`` director ``.key()`` method has been removed - it now always hashes its arguments using SHA256 and returns the last 32 bits for use as a shard key. Backwards compatibility is provided through `vmod blobdigest`_ with the ``key_blob`` argument of the ``shard`` director ``.backend()`` method: * for ``alg=CRC32``, replace:: .backend(by=KEY, key=.key(, CRC32)) with:: .backend(by=BLOB, key_blob=blobdigest.hash(ICRC32, blob.decode(encoded=))) `Note:` The `vmod blobdigest`_ hash method corresponding to the shard director CRC32 method is called **I**\ CRC32 .. _vmod blobdigest: https://code.uplex.de/uplex-varnish/libvmod-blobdigest/blob/master/README.rst * for ``alg=RS``, replace:: .backend(by=KEY, key=.key(, RS)) with:: .backend(by=BLOB, key_blob=blobdigest.hash(RS, blob.decode(encoded=))) * The ``shard`` director now offers resolution at the time the actual backend connection is made, which is how all other bundled directors work as well: With the ``resolve=LAZY`` argument, other shard parameters are saved for later reference and a director object is returned. This enables layering the shard director below other directors. * The ``shard`` director now also supports getting other parameters from a parameter set object: Rather than passing the required parameters with each ``.backend()`` call, an object can be associated with a shard director defining the parameters. The association can be changed in ``vcl_backend_fetch()`` and individual parameters can be overridden in each ``.backend()`` call. The main use case is to segregate shard parameters from director selection: By associating a parameter object with many directors, the same load balancing decision can easily be applied independent of which set of backends is to be used. * To support parameter overriding, support for positional arguments of the shard director ``.backend()`` method had to be removed. In other words, all parameters to the shard director ``.backend()`` method now need to be named. * Integers in VCL are now 64 bits wide across all platforms (implemented as ``int64_t`` C type), but due to implementation specifics of the VCL compiler (VCC), integer literals' precision is limited to that of a VCL real (``double`` C type, roughly 53 bits). In effect, larger integers are not represented accurately (they get rounded) and may even have their sign changed or trigger a C compiler warning / error. * Add VMOD unix. * Add VMOD proxy. Logging / statistics -------------------- * Turned off PROXY protocol debugging by default, can be enabled with the ``protocol`` debug flag. * added ``cache_hit_grace`` statistics counter. * added ``n_lru_limited`` counter. * The byte counters in ReqAcct now show the numbers reported from the operating system rather than what we anticipated to send. This will give more accurate numbers when e.g. the client hung up early without receiving the entire response. Also these counters now show how many bytes was attributed to the body, including any protocol overhead (ie chunked encoding). bundled tools ------------- * ``varnishncsa`` refuses output formats (as defined with the ``-F`` command line argument) for tags which could contain control or binary characters. At the time of writing, these are: ``%{H2RxHdr}x``, ``%{H2RxBody}x``, ``%{H2TxHdr}x``, ``%{H2TxBody}x``, ``%{Debug}x``, ``%{HttpGarbage}x`` and ``%{Hash}x`` * The vtc ``server -listen`` command supports UDS addresses, as does the ``client -connect`` command. 
vtc ``remote.path`` and ``remote.port`` have the values ``0.0.0.0`` and ``0`` when the peer address is UDS. Added ``remote.path`` to vtc, whose value is the path when the address is UDS, and NULL (matching ) for IP addresses. C APIs (for vmod and utility authors) ------------------------------------- * We have now defined three API Stability levels: ``VRT``, ``PACKAGE``, ``SOURCE``. * New API namespace rules, see `phk_api_spaces_` * Rules for including API headers have been changed: * many headers can now only be included once * some headers require specific include ordering * only ``cache.h`` _or_ ``vrt.h`` can be included * Signatures of functions in the VLU API for bytestream into text serialization have been changed * vcl.h now contains convenience macros ``VCL_MET_TASK_B``, ``VCL_MET_TASK_C`` and ``VCL_MET_TASK_H`` for checking ``ctx->method`` for backend, client and housekeeping (vcl_init/vcl_fini) task context * vcc files can now contain a ``$Prefix`` stanza to define the prefix for vmod function names (which was fixed to ``vmod`` before) * vcc files can contain a ``$Synopsis`` stanza with one of the values ``auto`` or ``manual``, default ``auto``. With ``auto``, a more comprehensive SYNOPSIS is generated in the doc output with an overview of objects, methods, functions and their signatures. With ``manual``, the auto-SYNOPSIS is left out, for VMOD authors who prefer to write their own. * All Varnish internal ``SHA256*`` symbols have been renamed to ``VSHA256*`` * libvarnish now has ``VNUM_duration()`` to convert from a VCL duration like 4h or 5s * director health state queries have been merged to ``VRT_Healthy()`` * Renamed macros: * ``__match_proto__()`` -> ``v_matchproto_()`` * ``__v_printflike()`` -> ``v_printflike_()`` * ``__state_variable__()`` -> ``v_statevariable_()`` * ``__unused`` -> ``v_unused_`` * ``__attribute__((__noreturn__)`` -> ``v_noreturn_`` * ENUMs are now fixed pointers per vcl. * Added ``VRT_blob()`` utility function to create a blob as a copy of some chunk of data on the workspace. * Directors now have their own admin health information and always need to have the ``(struct director).admin_health`` initialized to ``VDI_AH_*`` (usually ``VDI_AH_HEALTHY``). Other changes relevant for VMODs -------------------------------- * ``PRIV_*`` function/method arguments are not excluded from auto-generated vmod documentation. Fixed bugs which may influence VCL behaviour -------------------------------------------- * After reusing a backend connection fails once, a fresh connection will be opened (2135_). .. _2135: https://github.com/varnishcache/varnish-cache/pull/2135 Fixed bugs ---------- * Honor first_byte_timeout for recycled backend connections. (1772_) * Limit backend connection retries to a single retry (2135_) * H2: Move the req-specific PRIV pointers to struct req. (2268_) * H2: Don't panic if we reembark with a request body (2305_) * Clear the objcore attributes flags when (re)initializing an stv object. (2319_) * H2: Fail streams with missing :method or :path. (2351_) * H2: Enforce sequence requirement of header block frames. (2387_) * H2: Hold the sess mutex when evaluating r2->cond. (2434_) * Use the idle read timeout only on empty requests. (2492_) * OH leak in http1_reembark. (2495_) * Fix objcore reference count leak. (2502_) * Close a race between backend probe and vcl.state=Cold by removing the be->vsc under backend mtx. (2505_) * Fail gracefully if shard.backend() is called in housekeeping subs (2506_) * Fix issue #1799 for keep. 
(2519_) * oc->last_lru as float gives too little precision. (2527_) * H2: Don't HTC_RxStuff with a non-reserved workspace. (2539_) * Various optimizations of VSM. (2430_, 2470_, 2518_, 2535_, 2541_, 2545_, 2546_) * Problems during late socket initialization performed by the Varnish child process can now be reported back to the management process with an error message. (2551_) * Fail if ESI is attempted on partial (206) objects. * Assert error in ban_mark_completed() - ban lurker edge case. (2556_) * Accurate byte counters (2558_). See Logging / statistics above. * H2: Fix reembark failure handling. (2563_ and 2592_) * Working directory permissions insufficient when starting with umask 027. (2570_) * Always use HTTP/1.1 on backend connections for pass & fetch. (2574_) * EPIPE is a documented errno in tcp(7) on linux. (2582_) * H2: Handle failed write(2) in h2_ou_session. (2607_) .. _1772: https://github.com/varnishcache/varnish-cache/issues/1772 .. _2135: https://github.com/varnishcache/varnish-cache/pull/2135 .. _2268: https://github.com/varnishcache/varnish-cache/issues/2268 .. _2305: https://github.com/varnishcache/varnish-cache/issues/2305 .. _2319: https://github.com/varnishcache/varnish-cache/issues/2319 .. _2351: https://github.com/varnishcache/varnish-cache/issues/2351 .. _2387: https://github.com/varnishcache/varnish-cache/issues/2387 .. _2430: https://github.com/varnishcache/varnish-cache/issues/2430 .. _2434: https://github.com/varnishcache/varnish-cache/issues/2434 .. _2470: https://github.com/varnishcache/varnish-cache/issues/2470 .. _2492: https://github.com/varnishcache/varnish-cache/issues/2492 .. _2495: https://github.com/varnishcache/varnish-cache/issues/2495 .. _2502: https://github.com/varnishcache/varnish-cache/issues/2502 .. _2505: https://github.com/varnishcache/varnish-cache/issues/2505 .. _2506: https://github.com/varnishcache/varnish-cache/issues/2506 .. _2518: https://github.com/varnishcache/varnish-cache/issues/2518 .. _2519: https://github.com/varnishcache/varnish-cache/pull/2519 .. _2527: https://github.com/varnishcache/varnish-cache/issues/2527 .. _2535: https://github.com/varnishcache/varnish-cache/issues/2535 .. _2539: https://github.com/varnishcache/varnish-cache/issues/2539 .. _2541: https://github.com/varnishcache/varnish-cache/issues/2541 .. _2545: https://github.com/varnishcache/varnish-cache/pull/2545 .. _2546: https://github.com/varnishcache/varnish-cache/issues/2546 .. _2551: https://github.com/varnishcache/varnish-cache/issues/2551 .. _2554: https://github.com/varnishcache/varnish-cache/pull/2554 .. _2556: https://github.com/varnishcache/varnish-cache/issues/2556 .. _2558: https://github.com/varnishcache/varnish-cache/pull/2558 .. _2563: https://github.com/varnishcache/varnish-cache/issues/2563 .. _2570: https://github.com/varnishcache/varnish-cache/issues/2570 .. _2574: https://github.com/varnishcache/varnish-cache/issues/2574 .. _2582: https://github.com/varnishcache/varnish-cache/issues/2582 .. _2592: https://github.com/varnishcache/varnish-cache/issues/2592 .. _2607: https://github.com/varnishcache/varnish-cache/issues/2607 ================================ Varnish Cache 5.2.1 (2017-11-14) ================================ Bugs fixed ---------- * 2429_ - Avoid buffer read overflow on vcl_backend_error and -sfile * 2492_ - Use the idle read timeout only on empty requests. .. _2429: https://github.com/varnishcache/varnish-cache/pull/2429 .. 
_2492: https://github.com/varnishcache/varnish-cache/issues/2492 ================================ Varnish Cache 5.2.0 (2017-09-15) ================================ * The ``cli_buffer`` parameter has been deprecated (2382_) .. _2382: https://github.com/varnishcache/varnish-cache/pull/2382 ================================== Varnish Cache 5.2-RC1 (2017-09-04) ================================== Usage ----- * The default for the -i argument is now the hostname as returned by gethostname(3) * Where possible (on platforms with setproctitle(3)), the -i argument rather than the -n argument is used for process names * varnishd -f honors ``vcl_path`` (#2342) * The ``MAIN.s_req`` statistic has been removed, as it was identical to ``MAIN.client_req``. VSM consumers should be changed to use the latter if necessary. * A listen address can take a name in the -a argument. This name is used in the logs and later will possibly be available in VCL. VCL --- * VRT_purge fails a transaction if used outside of ``vcl_hit`` and ``vcl_miss`` (#2339) * Added ``bereq.is_bgfetch`` which is true for background fetches. * Added VMOD purge (#2404) * Added VMOD blob (#2407) C APIs (for vmod and utility authors) ------------------------------------- * The VSM API for accessing the shared memory segment has been totally rewritten. Things should be simpler and more general. * VSC shared memory layout has changed and the VSC API updated to match it. This paves the way for user defined VSC counters in VMODS and later possibly also in VCL. * New vmod vtc for advanced varnishtest usage (#2276) ================================ Varnish Cache 5.1.3 (2017-08-02) ================================ Bugs fixed ---------- * 2379_ - Correctly handle bogusly large chunk sizes (VSV00001) .. _2379: https://github.com/varnishcache/varnish-cache/issues/2379 ================================ Varnish Cache 5.1.2 (2017-04-07) ================================ * Fix an endless loop in Backend Polling (#2295) * Fix a Chunked bug in tight workspaces (#2207, #2275) * Fix a bug relating to req.body when on waitinglist (#2266) * Handle EPIPE on broken TCP connections (#2267) * Work around the x86 arch's turbo-double FP format in parameter setup code. (#1875) * Fix race related to backend probe with proxy header (#2278) * Keep VCL temperature consistent between mgt/worker also when worker protests. * A lot of HTTP/2 fixes. ================================ Varnish Cache 5.1.1 (2017-03-16) ================================ * Fix bug introduced by stubborn old bugger right before release 5.1.0 was cut. ================================ Varnish Cache 5.1.0 (2017-03-15) ================================ * Added varnishd command-line options -I, -x and -?, and tightened restrictions on permitted combinations of options. * More progress on support for HTTP/2. * Add ``return(fail)`` to almost all VCL subroutines. * Restored the old hit-for-pass, invoked with ``return(pass(DURATION))`` from ``vcl_backend_response``. hit-for-miss remains the default. Added the cache_hitmiss stat, and cache_hitpass only counts the new/old hit-for-pass cases. Restored HitPass to the Varnish log, and added HitMiss. Added the HFP prefix to TTL log entries to log a hit-for-pass duration. * Rolled back the fix for #1206. Client delivery decides solely whether to send a 304 client response, based on client request and response headers. * Added vtest.sh. * Added vxid as a lefthand side for VSL queries. * Added the setenv and write_body commands for Varnish test cases (VTCs).
err_shell is deprecated. Also added the operators -cliexpect, -match and -hdrlen, and -reason replaces -msg. Added the ${bad_backend} macro. * varnishtest can be stopped with the TERM, INT and KILL signals, but not with HUP. * The fallback director has now an extra, optional parameter to keep using the current backend until it falls sick. * VMOD shared libraries are now copied to the workdir, to avoid problems when VMODs are updated via packaging systems. * Bump the VRT version to 6.0. * Export more symbols from libvarnishapi.so. * The size of the VSL log is limited to 4G-1b, placing upper bounds on the -l option and the vsl_space and vsm_space parameters. * Added parameters clock_step, thread_pool_reserve and ban_cutoff. * Parameters vcl_dir and vmod_dir are deprecated, use vcl_path and vmod_path instead. * All parameters are defined, even on platforms that don't support them. An unsupported parameter is documented as such in param.show. Setting such a parameter is not an error, but has no effect. * Clarified the interpretations of the + and - operators in VCL with operands of the various data types. * DURATION types may be used in boolean contexts. * INT, DURATION and REAL values can now be negative. * Response codes 1000 or greater may now be set in VCL internally. resp.status is delivered modulo 1000 in client responses. * IP addresses can be compared for equality in VCL. * Introduce the STEVEDORE data type, and the objects storage.SNAME in VCL. Added req.storage and beresp.storage; beresp.storage_hint is deprecated. * Retired the umem stevedore. * req.ttl is deprecated. * Added std.getenv() and std.late_100_continue(). * The fetch_failed stat is incremented for any kind of fetch failure. * Added the stats n_test_gunzip and bans_lurker_obj_killed_cutoff. * Clarified the meanings of the %r, %{X}i and %{X}o formatters in varnishncsa. Bugs fixed ---------- * 2251_ - varnishapi.pc and varnishconfdir * 2250_ - vrt.h now depends on vdef.h making current vmod fail. * 2249_ - "logexpect -wait" doesn't fail * 2245_ - Varnish doesn't start, if use vmod (vmod_cache dir was permission denied) * 2241_ - VSL fails to get hold of SHM * 2233_ - Crash on "Assert error in WS_Assert(), cache/cache_ws.c line 59" * 2227_ - -C flag broken in HEAD * 2217_ - fix argument processing -C regression * 2207_ - Assert error in V1L_Write() * 2205_ - Strange bug when I set client.ip with another string * 2203_ - unhandled SIGPIPE * 2200_ - Assert error in vev_compact_pfd(), vev.c line 394 * 2197_ - ESI parser panic on malformed src URL * 2190_ - varnishncsa: The %r formatter is NOT equivalent to "%m http://%{Host}i%U%q %H" * 2186_ - Assert error in sml_iterator(), storage/storage_simple.c line 263 * 2184_ - Cannot subtract a negative number * 2177_ - Clarify interactions between restarts and labels * 2175_ - Backend leak between a top VCL and a label * 2174_ - Cflags overhaul * 2167_ - VCC will not parse a literal negative number where INT is expected * 2155_ - vmodtool removes text following $Event from RST docs * 2151_ - Health probes do not honor a backend's PROXY protocol setting * 2142_ - ip comparison fails * 2148_ - varnishncsa cannot decode Authorization header if the format is incorrect. 
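As a small aside to the 5.1.0 VCL list above, a minimal sketch of the internal response codes of 1000 or greater (the trigger header and the value 1403 are purely illustrative); the extra thousands digit is only visible inside VCL, the client receives ``resp.status`` modulo 1000, i.e. 403::

    sub vcl_recv {
        if (req.http.X-Demo-Block) {
            return (synth(1403));
        }
    }

    sub vcl_synth {
        if (resp.status == 1403) {
            set resp.http.X-Demo = "blocked";
        }
    }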
* 2143_ - Assert error in exp_inbox(), cache/cache_expire.c line 195 * 2134_ - Disable Nagle's * 2129_ - stack overflow with >4 level esi * 2128_ - SIGSEGV NULL Pointer in STV__iter() * 2118_ - "varnishstat -f MAIN.sess_conn -1" produces empty output * 2117_ - SES_Close() EBADF / Wait_Enter() wp->fd <= 0 * 2115_ - VSM temporary files are not always deleted * 2110_ - [CLI] vcl.inline failures * 2104_ - Assert error in VFP_Open(), cache/cache_fetch_proc.c line 139: Condition((vc->wrk->vsl) != 0) not true * 2099_ - VCC BACKEND/HDR comparison produces duplicate gethdr_s definition * 2096_ - H2 t2002 fail on arm64/arm32 * 2094_ - H2 t2000 fail on arm64/arm32 * 2078_ - VCL comparison doesn't fold STRING_LIST * 2052_ - d12.vtc flaky when compiling with suncc * 2042_ - Send a 304 response for a just-gone-stale hitpass object when appropriate * 2041_ - Parent process should exit if it fails to start child * 2035_ - varnishd stalls with two consecutive Range requests using HTTP persistent connections * 2026_ - Add restart of poll in read_tmo * 2021_ - vcc "used before defined" check * 2017_ - "%r" field is wrong * 2016_ - confusing vcc error when acl referenced before definition * 2014_ - req.ttl: retire or document+vtc * 2010_ - varnishadm CLI behaving weirdly * 1991_ - Starting varnish on Linux with boot param ipv6.disable=1 fails * 1988_ - Lost req.url gives misleading error * 1914_ - set a custom storage for cache_req_body * 1899_ - varnishadm vcl.inline is overly obscure * 1874_ - clock-step related crash * 1865_ - Panic accessing beresp.backend.ip in vcl_backend_error{} * 1856_ - LostHeader setting req.url to an empty string * 1834_ - WS_Assert(), cache/cache_ws.c line 59 * 1830_ - VSL API: "duplicate link" errors in request grouping when vsl_buffer is increased * 1764_ - nuke_limit is not honored * 1750_ - Fail more gracefully on -l >= 4GB * 1704_ - fetch_failed not incremented .. _2251: https://github.com/varnishcache/varnish-cache/issues/2251 .. _2250: https://github.com/varnishcache/varnish-cache/issues/2250 .. _2249: https://github.com/varnishcache/varnish-cache/issues/2249 .. _2245: https://github.com/varnishcache/varnish-cache/issues/2245 .. _2241: https://github.com/varnishcache/varnish-cache/issues/2241 .. _2233: https://github.com/varnishcache/varnish-cache/issues/2233 .. _2227: https://github.com/varnishcache/varnish-cache/issues/2227 .. _2217: https://github.com/varnishcache/varnish-cache/issues/2217 .. _2207: https://github.com/varnishcache/varnish-cache/issues/2207 .. _2205: https://github.com/varnishcache/varnish-cache/issues/2205 .. _2203: https://github.com/varnishcache/varnish-cache/issues/2203 .. _2200: https://github.com/varnishcache/varnish-cache/issues/2200 .. _2197: https://github.com/varnishcache/varnish-cache/issues/2197 .. _2190: https://github.com/varnishcache/varnish-cache/issues/2190 .. _2186: https://github.com/varnishcache/varnish-cache/issues/2186 .. _2184: https://github.com/varnishcache/varnish-cache/issues/2184 .. _2177: https://github.com/varnishcache/varnish-cache/issues/2177 .. _2175: https://github.com/varnishcache/varnish-cache/issues/2175 .. _2174: https://github.com/varnishcache/varnish-cache/issues/2174 .. _2167: https://github.com/varnishcache/varnish-cache/issues/2167 .. _2155: https://github.com/varnishcache/varnish-cache/issues/2155 .. _2151: https://github.com/varnishcache/varnish-cache/issues/2151 .. _2142: https://github.com/varnishcache/varnish-cache/issues/2142 .. _2148: https://github.com/varnishcache/varnish-cache/issues/2148 .. 
_2143: https://github.com/varnishcache/varnish-cache/issues/2143 .. _2134: https://github.com/varnishcache/varnish-cache/issues/2134 .. _2129: https://github.com/varnishcache/varnish-cache/issues/2129 .. _2128: https://github.com/varnishcache/varnish-cache/issues/2128 .. _2118: https://github.com/varnishcache/varnish-cache/issues/2118 .. _2117: https://github.com/varnishcache/varnish-cache/issues/2117 .. _2115: https://github.com/varnishcache/varnish-cache/issues/2115 .. _2110: https://github.com/varnishcache/varnish-cache/issues/2110 .. _2104: https://github.com/varnishcache/varnish-cache/issues/2104 .. _2099: https://github.com/varnishcache/varnish-cache/issues/2099 .. _2096: https://github.com/varnishcache/varnish-cache/issues/2096 .. _2094: https://github.com/varnishcache/varnish-cache/issues/2094 .. _2078: https://github.com/varnishcache/varnish-cache/issues/2078 .. _2052: https://github.com/varnishcache/varnish-cache/issues/2052 .. _2042: https://github.com/varnishcache/varnish-cache/issues/2042 .. _2041: https://github.com/varnishcache/varnish-cache/issues/2041 .. _2035: https://github.com/varnishcache/varnish-cache/issues/2035 .. _2026: https://github.com/varnishcache/varnish-cache/issues/2026 .. _2021: https://github.com/varnishcache/varnish-cache/issues/2021 .. _2017: https://github.com/varnishcache/varnish-cache/issues/2017 .. _2016: https://github.com/varnishcache/varnish-cache/issues/2016 .. _2014: https://github.com/varnishcache/varnish-cache/issues/2014 .. _2010: https://github.com/varnishcache/varnish-cache/issues/2010 .. _1991: https://github.com/varnishcache/varnish-cache/issues/1991 .. _1988: https://github.com/varnishcache/varnish-cache/issues/1988 .. _1914: https://github.com/varnishcache/varnish-cache/issues/1914 .. _1899: https://github.com/varnishcache/varnish-cache/issues/1899 .. _1874: https://github.com/varnishcache/varnish-cache/issues/1874 .. _1865: https://github.com/varnishcache/varnish-cache/issues/1865 .. _1856: https://github.com/varnishcache/varnish-cache/issues/1856 .. _1834: https://github.com/varnishcache/varnish-cache/issues/1834 .. _1830: https://github.com/varnishcache/varnish-cache/issues/1830 .. _1764: https://github.com/varnishcache/varnish-cache/issues/1764 .. _1750: https://github.com/varnishcache/varnish-cache/issues/1750 .. _1704: https://github.com/varnishcache/varnish-cache/issues/1704 ================================ Varnish Cache 5.0.0 (2016-09-15) ================================ * Documentation updates, especially the what's new and upgrade sections. * Via: header made by Varnish now says 5.0. * VMOD VRT ABI level increased. * [vcl] obj.(ttl|age|grace|keep) is now readable in vcl_deliver. * Latest devicedetect.vcl imported from upstream. * New system wide VCL directory: ``/usr/share/varnish/vcl/`` * std.integer() can now convert from REAL. Bugs fixed ---------- * 2086_ - Ignore H2 upgrades if the feature is not enabled. * 2054_ - Introduce new macros for out-of-tree VMODs * 2022_ - varnishstat -1 -f field inclusion glob doesn't allow VBE backend fields * 2008_ - Panic: Assert error in VBE_Delete() * 1800_ - PRIV_TASK in vcl_init/fini .. _2086: https://github.com/varnishcache/varnish-cache/issues/2086 .. _2054: https://github.com/varnishcache/varnish-cache/issues/2054 .. _2022: https://github.com/varnishcache/varnish-cache/issues/2022 .. _2008: https://github.com/varnishcache/varnish-cache/issues/2008 .. 
_1800: https://github.com/varnishcache/varnish-cache/issues/1800 ====================================== Varnish Cache 5.0.0-beta1 (2016-09-09) ====================================== This is the first beta release of the upcoming 5.0 release. The list of changes are numerous and will not be expanded on in detail. The release notes contain more background information and are highly recommended reading before using any of the new features. Major items: * VCL labels, allowing for per-vhost (or per-anything) separate VCL files. * (Very!) experimental support for HTTP/2. * Always send the request body to the backend, making possible to cache responses of POST, PATCH requests etc with appropriate custom VCL and/or VMODs. * hit-for-pass is now actually hit-for-miss. * new shard director for loadbalancing by consistent hashing * ban lurker performance improvements * access to obj.ttl, obj.age, obj.grace and obj.keep in vcl_deliver News for Vmod Authors --------------------- * workspace and PRIV_TASK for vcl cli events (init/fini methods) * PRIV_* now also work for object methods with unchanged scope. ================================ Varnish Cache 4.1.9 (2017-11-14) ================================ Changes since 4.1.8: * Added ``bereq.is_bgfetch`` which is true for background fetches. * Add the vtc feature ignore_unknown_macro. * Expose to VCL whether or not a fetch is a background fetch (bgfetch) * Ignore req.ttl when keeping track of expired objects (see 2422_) * Move a cli buffer to VSB (from stack). * Use a separate stack for signals. .. _2422: https://github.com/varnishcache/varnish-cache/pull/2422 Bugs fixed ---------- * 2337_ and 2366_ - Both Upgrade and Connection headers are needed for WebSocket now * 2372_ - Fix problem with purging and the n_obj_purged counter * 2373_ - VSC n_vcl, n_vcl_avail, n_vcl_discard are gauge * 2380_ - Correct regexp in examples. * 2390_ - Straighten locking wrt vcl_active * 2429_ - Avoid buffer read overflow on vcl_backend_error and -sfile * 2492_ - Use the idle read timeout only on empty requests .. _2337: https://github.com/varnishcache/varnish-cache/issues/2337 .. _2366: https://github.com/varnishcache/varnish-cache/issues/2366 .. _2372: https://github.com/varnishcache/varnish-cache/pull/2372 .. _2373: https://github.com/varnishcache/varnish-cache/issues/2373 .. _2380: https://github.com/varnishcache/varnish-cache/issues/2380 .. _2390: https://github.com/varnishcache/varnish-cache/issues/2390 .. _2429: https://github.com/varnishcache/varnish-cache/pull/2429 .. _2492: https://github.com/varnishcache/varnish-cache/issues/2492 ================================ Varnish Cache 4.1.8 (2017-08-02) ================================ Changes since 4.1.7: * Update in the documentation of timestamps Bugs fixed ---------- * 2379_ - Correctly handle bogusly large chunk sizes (VSV00001) .. _2379: https://github.com/varnishcache/varnish-cache/issues/2379 ================================ Varnish Cache 4.1.7 (2017-06-28) ================================ Changes since 4.1.7-beta1: * Add extra locking to protect the pools list and refcounts * Don't panic on a null ban Bugs fixed ---------- * 2321_ - Prevent storage backends name collisions .. 
_2321: https://github.com/varnishcache/varnish-cache/issues/2321 ====================================== Varnish Cache 4.1.7-beta1 (2017-06-15) ====================================== Changes since 4.1.6: * Add -vsl_catchup to varnishtest * Add record-prefix support to varnishncsa Bugs fixed ---------- * 1764_ - Correctly honor nuke_limit parameter * 2022_ - varnishstat -1 -f field inclusion glob doesn't allow VBE backend fields * 2069_ - Health probes fail when HTTP response does not contain reason phrase * 2118_ - "varnishstat -f MAIN.sess_conn -1" produces empty output * 2219_ - Remember to reset workspace * 2320_ - Rework and fix varnishstat counter filtering * 2329_ - Docfix: Only root can jail .. _1764: https://github.com/varnishcache/varnish-cache/issues/1764 .. _2022: https://github.com/varnishcache/varnish-cache/issues/2022 .. _2069: https://github.com/varnishcache/varnish-cache/issues/2069 .. _2118: https://github.com/varnishcache/varnish-cache/issues/2118 .. _2219: https://github.com/varnishcache/varnish-cache/issues/2219 .. _2320: https://github.com/varnishcache/varnish-cache/issues/2320 .. _2329: https://github.com/varnishcache/varnish-cache/issues/2329 ================================ Varnish Cache 4.1.6 (2017-04-26) ================================ * Introduce a vxid left hand side for VSL queries. This allows matching on records belonging to a known vxid. * Environment variables are now available in the standard VMOD; std.getenv() * Add setenv command to varnishtest Bugs fixed ---------- * 2200_ - Dramatically simplify VEV, fix assert in vev.c * 2216_ - Make sure Age is always less than max-age * 2233_ - Correct check when parsing the query string * 2241_ - VSL fails to get hold of SHM * 2270_ - Newly loaded auto VCLs don't get their go_cold timer set * 2273_ - Master cooling problem * 2275_ - If the client workspace is almost, but not quite, exhausted, we may not be able to get enough iovecs to do Chunked transmission. * 2295_ - Spinning loop in VBE_Poll causes master to kill child on CLI timeout * 2301_ - Don't attempt to check if varnishd is still running if we have already failed. * 2313_ - Cannot link to varnishapi, symbols missing .. _2200: https://github.com/varnishcache/varnish-cache/issues/2200 .. _2216: https://github.com/varnishcache/varnish-cache/pull/2216 .. _2233: https://github.com/varnishcache/varnish-cache/issues/2233 .. _2241: https://github.com/varnishcache/varnish-cache/issues/2241 .. _2270: https://github.com/varnishcache/varnish-cache/issues/2270 .. _2273: https://github.com/varnishcache/varnish-cache/pull/2273 .. _2275: https://github.com/varnishcache/varnish-cache/issues/2275 .. _2295: https://github.com/varnishcache/varnish-cache/issues/2295 .. _2301: https://github.com/varnishcache/varnish-cache/issues/2301 .. _2313: https://github.com/varnishcache/varnish-cache/issues/2313 ================================ Varnish Cache 4.1.5 (2017-02-09) ================================ * No code changes since 4.1.5-beta2. ====================================== Varnish Cache 4.1.5-beta2 (2017-02-08) ====================================== * Update devicedetect.vcl Bugs fixed ---------- * 1704_ - Reverted the docfix and made the fetch_failed counter do what the documentation says it should do * 1865_ - Panic accessing beresp.backend.ip in vcl_backend_error * 2167_ - VCC will not parse a literal negative number where INT is expected * 2184_ - Cannot subtract a negative number .. _1704: https://github.com/varnishcache/varnish-cache/issues/1704 ..
_1865: https://github.com/varnishcache/varnish-cache/issues/1865 .. _2167: https://github.com/varnishcache/varnish-cache/issues/2167 .. _2184: https://github.com/varnishcache/varnish-cache/issues/2184 ====================================== Varnish Cache 4.1.5-beta1 (2017-02-02) ====================================== Bugs fixed ---------- * 1704_ - (docfix) Clarify description of fetch_failed counter * 1834_ - Panic in workspace exhaustion conditions * 2106_ - 4.1.3: Varnish crashes with "Assert error in CNT_Request(), cache/cache_req_fsm.c line 820" * 2134_ - Disable Nagle's * 2148_ - varnishncsa cannot decode Authorization header if the format is incorrect. * 2168_ - Compare 'bereq.backend' / 'req.backend_hint' myDirector.backend() does not work * 2178_ - 4.1 branch does not compile on FreeBSD * 2188_ - Fix vsm_free (never incremented) * 2190_ - (docfix) varnishncsa: The %r formatter is NOT equivalent to... * 2197_ - ESI parser panic on malformed src URL .. _1704: https://github.com/varnishcache/varnish-cache/issues/1704 .. _1834: https://github.com/varnishcache/varnish-cache/issues/1834 .. _2106: https://github.com/varnishcache/varnish-cache/issues/2106 .. _2134: https://github.com/varnishcache/varnish-cache/issues/2134 .. _2148: https://github.com/varnishcache/varnish-cache/issues/2148 .. _2168: https://github.com/varnishcache/varnish-cache/issues/2168 .. _2178: https://github.com/varnishcache/varnish-cache/issues/2178 .. _2188: https://github.com/varnishcache/varnish-cache/pull/2188 .. _2190: https://github.com/varnishcache/varnish-cache/issues/2190 .. _2197: https://github.com/varnishcache/varnish-cache/issues/2197 ================================ Varnish Cache 4.1.4 (2016-12-01) ================================ Bugs fixed ---------- * 2035_ - varnishd stalls with two consecutive Range requests using HTTP persistent connections .. _2035: https://github.com/varnishcache/varnish-cache/issues/2035 ====================================== Varnish Cache 4.1.4-beta3 (2016-11-24) ====================================== * Include the current time of the panic in the panic output * Keep a reserve of idle threads for vital tasks Bugs fixed ---------- * 1874_ - clock-step related crash * 1889_ - (docfix) What does the -p flag for the backend.list command mean * 2115_ - VSM temporary files are not always deleted * 2129_ - (docfix) stack overflow with >4 level esi .. _1874: https://github.com/varnishcache/varnish-cache/issues/1874 .. _1889: https://github.com/varnishcache/varnish-cache/issues/1889 .. _2115: https://github.com/varnishcache/varnish-cache/issues/2115 .. _2129: https://github.com/varnishcache/varnish-cache/issues/2129 ====================================== Varnish Cache 4.1.4-beta2 (2016-10-13) ====================================== Bugs fixed ---------- * 1830_ - VSL API: "duplicate link" errors in request grouping when vsl_buffer is increased * 2010_ - varnishadm CLI behaving weirdly * 2017_ - varnishncsa docfix: "%r" field is wrong * 2107_ - (docfix) HEAD requests changed to GET .. _1830: https://github.com/varnishcache/varnish-cache/issues/1830 .. _2010: https://github.com/varnishcache/varnish-cache/issues/2010 .. _2017: https://github.com/varnishcache/varnish-cache/issues/2017 ..
_2107: https://github.com/varnishcache/varnish-cache/issues/2107 ====================================== Varnish Cache 4.1.4-beta1 (2016-09-14) ====================================== * [varnishhist] Various improvements * [varnishtest] A `cmd` feature for custom shell-based checks * Documentation improvements (do_stream, sess_herd, timeout_linger, thread_pools) * [varnishtop] Documented behavior when both -p and -1 are specified Bugs fixed ---------- * 2027_ - Racy backend selection * 2024_ - panic vmod_rr_resolve() round_robin.c line 75 (be) != NULL * 2011_ - VBE.*.conn (concurrent connections to backend) not working as expected * 2008_ - Assert error in VBE_Delete() * 2007_ - Update documentation part about CLI/management port authentication parameter * 1881_ - std.cache_req_body() w/ return(pipe) is broken .. _2027: https://github.com/varnishcache/varnish-cache/issues/2027 .. _2024: https://github.com/varnishcache/varnish-cache/issues/2024 .. _2011: https://github.com/varnishcache/varnish-cache/issues/2011 .. _2008: https://github.com/varnishcache/varnish-cache/issues/2008 .. _2007: https://github.com/varnishcache/varnish-cache/issues/2007 .. _1881: https://github.com/varnishcache/varnish-cache/issues/1881 ================================ Varnish Cache 4.1.3 (2016-07-06) ================================ * Be stricter when parsing request headers to harden against smuggling attacks. ====================================== Varnish Cache 4.1.3-beta2 (2016-06-28) ====================================== * New parameter `vsm_free_cooldown`. Specifies how long freed VSM memory (shared log) will be kept around before actually being freed. * varnishncsa now accepts a `-L` argument to configure the limit on incomplete transactions kept. (Issue 1994_) Bugs fixed ---------- * 1984_ - Make the counter vsm_cooling act according to spec * 1963_ - Avoid abort when changing to a VCL name which is a path * 1933_ - Don't trust dlopen refcounting .. _1994: https://github.com/varnishcache/varnish-cache/issues/1994 .. _1984: https://github.com/varnishcache/varnish-cache/issues/1984 .. _1963: https://github.com/varnishcache/varnish-cache/issues/1963 .. _1933: https://github.com/varnishcache/varnish-cache/issues/1933 ====================================== Varnish Cache 4.1.3-beta1 (2016-06-15) ====================================== * varnishncsa can now access and log backend requests. (PR #1905) * [varnishncsa] New output formatters %{Varnish:vxid}x and %{VSL:Tag}x. * [varnishlog] Added log tag BackendStart on backend transactions. * On SmartOS, use ports instead of epoll by default. * Add support for TCP Fast Open where available. Disabled by default. * [varnishtest] New synchronization primitive barriers added, improving coordination when test cases call external programs. .. _1905: https://github.com/varnishcache/varnish-cache/pull/1905 Bugs fixed ---------- * 1971_ - Add missing Wait_HeapDelete * 1967_ - [ncsa] Remove implicit line feed when using formatfile * 1955_ - 4.1.x sometimes duplicates Age and Accept-Ranges headers * 1954_ - Correctly handle HTTP/1.1 EOF response * 1953_ - Deal with fetch failures in ved_stripgzip * 1931_ - Allow VCL set Last-Modified to be used for I-M-S processing * 1928_ - req->task members must be set in case we get onto the waitinglist * 1924_ - Make std.log() and std.syslog() work from vcl_{init,fini} * 1919_ - Avoid ban lurker panic with empty olist * 1918_ - Correctly handle EOF responses with HTTP/1.1 * 1912_ - Fix (insignificant) memory leak with malformed ESI directives.
* 1904_ - Release memory instead of crashing on malformed ESI * 1885_ - [vmodtool] Method names should start with a period * 1879_ - Correct handling of duplicate headers on IMS header merge * 1878_ - Fix an ESI+gzip corner case which had escaped notice until now * 1873_ - Check for overrun before looking at the next vsm record * 1871_ - Missing error handling code in V1F_Setup_Fetch * 1869_ - Remove temporary directory iff called with -C * 1883_ - Only accept C identifiers as acls * 1855_ - Truncate output if it's wider than 12 chars * 1806_ - One minute delay on return (pipe) and a POST-Request * 1725_ - Revive the backend_conn counter .. _1971: https://github.com/varnishcache/varnish-cache/issues/1971 .. _1967: https://github.com/varnishcache/varnish-cache/issues/1967 .. _1955: https://github.com/varnishcache/varnish-cache/issues/1955 .. _1954: https://github.com/varnishcache/varnish-cache/issues/1954 .. _1953: https://github.com/varnishcache/varnish-cache/issues/1953 .. _1931: https://github.com/varnishcache/varnish-cache/issues/1931 .. _1928: https://github.com/varnishcache/varnish-cache/issues/1928 .. _1924: https://github.com/varnishcache/varnish-cache/issues/1924 .. _1919: https://github.com/varnishcache/varnish-cache/issues/1919 .. _1918: https://github.com/varnishcache/varnish-cache/issues/1918 .. _1912: https://github.com/varnishcache/varnish-cache/issues/1912 .. _1904: https://github.com/varnishcache/varnish-cache/issues/1904 .. _1885: https://github.com/varnishcache/varnish-cache/issues/1885 .. _1883: https://github.com/varnishcache/varnish-cache/issues/1883 .. _1879: https://github.com/varnishcache/varnish-cache/issues/1879 .. _1878: https://github.com/varnishcache/varnish-cache/issues/1878 .. _1873: https://github.com/varnishcache/varnish-cache/issues/1873 .. _1871: https://github.com/varnishcache/varnish-cache/issues/1871 .. _1869: https://github.com/varnishcache/varnish-cache/issues/1869 .. _1855: https://github.com/varnishcache/varnish-cache/issues/1855 .. _1806: https://github.com/varnishcache/varnish-cache/issues/1806 .. _1725: https://github.com/varnishcache/varnish-cache/issues/1725 ================================ Varnish Cache 4.1.2 (2016-03-04) ================================ * [vmods] vmodtool improvements for multiple VMODs in a single directory. Bugs fixed ---------- * 1860_ - ESI-related memory leaks * 1863_ - Don't reset the oc->ban pointer from BAN_CheckObject * 1864_ - Avoid panic if the lurker is working on a ban to be checked. .. _1860: https://www.varnish-cache.org/trac/ticket/1860 .. _1863: https://www.varnish-cache.org/trac/ticket/1863 .. _1864: https://www.varnish-cache.org/trac/ticket/1864 ====================================== Varnish Cache 4.1.2-beta2 (2016-02-25) ====================================== * [vmods] Passing VCL ACL to a VMOD is now possible. * [vmods] VRT_MINOR_VERSION increase due to new function: VRT_acl_match() * Some test case stabilization fixes and minor documentation updates. * Improved handling of workspace exhaustion when fetching objects. Bugs fixed ---------- * 1858_ - Hit-for-pass objects are not IMS candidates .. _1858: https://www.varnish-cache.org/trac/ticket/1858 ====================================== Varnish Cache 4.1.2-beta1 (2016-02-17) ====================================== * Be stricter when parsing an HTTP request to avoid potential HTTP smuggling attacks against vulnerable backends. * Some fixes to minor/trivial issues found with clang AddressSanitizer. * Arithmetic on the REAL data type in VCL is now possible.
* vmodtool.py improvements to allow VMODs for 4.0 and 4.1 to share a source tree. * Off-by-one in WS_Reset() fixed. * "https_scheme" parameter added. Enables graceful handling of compound request URLs with HTTPS scheme. (Bug 1847_) Bugs fixed ---------- * 1739_ - Workspace overflow handling in VFP_Push() * 1837_ - Error compiling VCL if probe is referenced before it is defined * 1841_ - Replace alien FD's with /dev/null rather than just closing them * 1843_ - Fail HTTP/1.0 POST and PUT requests without Content-Length * 1844_ - Correct ENUM handling in object constructors * 1851_ - Varnish 4.1.1 fails to build on i386 * 1852_ - Add a missing VDP flush operation after ESI:includes. * 1857_ - Fix timeout calculation for session herding. .. _1739: https://www.varnish-cache.org/trac/ticket/1739 .. _1837: https://www.varnish-cache.org/trac/ticket/1837 .. _1841: https://www.varnish-cache.org/trac/ticket/1841 .. _1843: https://www.varnish-cache.org/trac/ticket/1843 .. _1844: https://www.varnish-cache.org/trac/ticket/1844 .. _1851: https://www.varnish-cache.org/trac/ticket/1851 .. _1852: https://www.varnish-cache.org/trac/ticket/1852 .. _1857: https://www.varnish-cache.org/trac/ticket/1857 .. _1847: https://www.varnish-cache.org/trac/ticket/1847 ================================ Varnish Cache 4.1.1 (2016-01-28) ================================ * No code changes since 4.1.1-beta2. ====================================== Varnish Cache 4.1.1-beta2 (2016-01-22) ====================================== * Improvements to VCL temperature handling added. This makes it possible to reliably deny warming of a cooling VCL from a VMOD. Bugs fixed ---------- * 1802_ - Segfault after VCL change * 1825_ - Cannot Start Varnish After Just Restarting The Service * 1842_ - Handle missing waiting list gracefully. * 1845_ - Handle whitespace after floats in test fields .. _1802: https://www.varnish-cache.org/trac/ticket/1802 .. _1825: https://www.varnish-cache.org/trac/ticket/1825 .. _1842: https://www.varnish-cache.org/trac/ticket/1842 .. _1845: https://www.varnish-cache.org/trac/ticket/1845 ====================================== Varnish Cache 4.1.1-beta1 (2016-01-15) ====================================== - Format of "ban.list" has changed slightly. - [varnishncsa] -w is now required when running daemonized. - [varnishncsa] Log format can now be read from file. - Port fields extracted from PROXY1 header now work as expected. - New VCL state "busy" introduced (mostly for VMOD writers). - Last traces of varnishreplay removed. - If-Modified-Since is now ignored if we have If-None-Match. - Zero Content-Length is no longer sent on 304 responses. - vcl_dir and vmod_dir now accept a colon separated list of directories. - Nested includes starting with "./" are relative to the including VCL file now. Bugs fixed ---------- - 1796_ - Don't attempt to allocate a V1L from the workspace if it is overflowed. - 1794_ - Fail if multiple -a arguments return the same suckaddr. - 1763_ - Restart epoll_wait on EINTR error - 1788_ - ObjIter has terrible performance profile when busyobj != NULL - 1798_ - Varnish requests painfully slow with large files - 1816_ - Use a weak comparison function for If-None-Match - 1818_ - Allow grace-hits on hit-for-pass objects, [..] - 1821_ - Always slim private & pass objects after delivery. - 1823_ - Rush the objheader if there is a waiting list when it is deref'ed. - 1826_ - Ignore 0 Content-Lengths in 204 responses - 1813_ - Fail if multiple -a arguments return the same suckaddr.
- 1810_ - Improve handling of HTTP/1.0 clients - 1807_ - Return 500 if we cannot decode the stored object into the resp.* - 1804_ - Log proxy related messages on the session, not on the request. - 1801_ - Relax IP constant parsing .. _1796: https://www.varnish-cache.org/trac/ticket/1796 .. _1794: https://www.varnish-cache.org/trac/ticket/1794 .. _1763: https://www.varnish-cache.org/trac/ticket/1763 .. _1788: https://www.varnish-cache.org/trac/ticket/1788 .. _1798: https://www.varnish-cache.org/trac/ticket/1798 .. _1816: https://www.varnish-cache.org/trac/ticket/1816 .. _1818: https://www.varnish-cache.org/trac/ticket/1818 .. _1821: https://www.varnish-cache.org/trac/ticket/1821 .. _1823: https://www.varnish-cache.org/trac/ticket/1823 .. _1826: https://www.varnish-cache.org/trac/ticket/1826 .. _1813: https://www.varnish-cache.org/trac/ticket/1813 .. _1810: https://www.varnish-cache.org/trac/ticket/1810 .. _1807: https://www.varnish-cache.org/trac/ticket/1807 .. _1804: https://www.varnish-cache.org/trac/ticket/1804 .. _1801: https://www.varnish-cache.org/trac/ticket/1801 ================================ Varnish Cache 4.1.0 (2015-09-30) ================================ - Documentation updates. - Stabilization fixes on testcase p00005.vtc. - Avoid compiler warning in zlib. - Bug 1792_: Avoid using fallocate() with -sfile on non-EXT4. .. _1792: https://www.varnish-cache.org/trac/ticket/1792 ====================================== Varnish Cache 4.1.0-beta1 (2015-09-11) ====================================== - Redhat packaging files are now separate from the normal tree. - Client workspace overflow should now result in a 500 response instead of panic. - [varnishstat] -w option has been retired. - libvarnishapi release number is increased. - Body bytes sent on ESI subrequests with gzip are now counted correctly. - [vmod-std] Data type conversion functions now take additional fallback argument. Bugs fixed ---------- - 1777_ - Disable speculative Range handling on streaming transactions. - 1778_ - [varnishstat] Cast to integer to prevent negative values messing the statistics - 1781_ - Propagate gzip CRC upwards from nested ESI includes. - 1783_ - Align code with RFC7230 section 3.3.3 which allows POST without a body. .. _1777: https://www.varnish-cache.org/trac/ticket/1777 .. _1778: https://www.varnish-cache.org/trac/ticket/1778 .. _1781: https://www.varnish-cache.org/trac/ticket/1781 .. _1783: https://www.varnish-cache.org/trac/ticket/1783 ==================================== Varnish Cache 4.1.0-tp1 (2015-07-08) ==================================== Changes between 4.0 and 4.1 are numerous. Please read the upgrade section in the documentation for a general overview. ============================================ Changes from 4.0.3-rc3 to 4.0.3 (2015-02-17) ============================================ * No changes. ================================================ Changes from 4.0.3-rc2 to 4.0.3-rc3 (2015-02-11) ================================================ - Superseded objects are now expired immediately. Bugs fixed ---------- - 1462_ - Use first/last log entry in varnishncsa. - 1539_ - Avoid panic when expiry thread modifies a candidate object. - 1637_ - Fail the fetch processing if the vep callback failed. - 1665_ - Be more accurate when computing client RX_TIMEOUT. - 1672_ - Do not panic on unsolicited 304 response to non-200 bereq. .. _1462: https://www.varnish-cache.org/trac/ticket/1462 .. _1539: https://www.varnish-cache.org/trac/ticket/1539 .. 
_1637: https://www.varnish-cache.org/trac/ticket/1637 .. _1665: https://www.varnish-cache.org/trac/ticket/1665 .. _1672: https://www.varnish-cache.org/trac/ticket/1672 ================================================ Changes from 4.0.3-rc1 to 4.0.3-rc2 (2015-01-28) ================================================ - Assorted documentation updates. Bugs fixed ---------- - 1479_ - Fix out-of-tree builds. - 1566_ - Escape VCL string question marks. - 1616_ - Correct header file placement. - 1620_ - Fail miss properly if out of backend threads. (Also 1621_) - 1628_ - Avoid dereferencing null in VBO_DerefBusyObj(). - 1629_ - Ditch rest of waiting list on failure to reschedule. - 1660_ - Don't attempt range delivery on a synth response .. _1479: https://www.varnish-cache.org/trac/ticket/1479 .. _1566: https://www.varnish-cache.org/trac/ticket/1578 .. _1616: https://www.varnish-cache.org/trac/ticket/1616 .. _1620: https://www.varnish-cache.org/trac/ticket/1620 .. _1621: https://www.varnish-cache.org/trac/ticket/1621 .. _1628: https://www.varnish-cache.org/trac/ticket/1628 .. _1629: https://www.varnish-cache.org/trac/ticket/1629 .. _1660: https://www.varnish-cache.org/trac/ticket/1660 ============================================ Changes from 4.0.2 to 4.0.3-rc1 (2015-01-15) ============================================ - Support older autoconf (< 2.63b) (el5) - A lot of minor documentation fixes. - bereq.uncacheable is now read-only. - obj.uncacheable is now readable in vcl_deliver. - [varnishadm] Prefer exact matches for backend.set_healthy. Bug 1349_. - Hard-coded -sfile default size is removed. - [packaging] EL6 packages are once again built with -O2. - [parameter] fetch_chunksize default is reduced to 16KB. (from 128KB) - Added std.time() which converts strings to VCL_TIME. - [packaging] packages now Provide strictABI (gitref) and ABI (VRT major/minor) for VMOD use. Bugs fixed ---------- * 1378_ - Properly escape non-printable characters in varnishncsa. * 1596_ - Delay HSH_Complete() until the storage sanity functions has finished. * 1506_ - Keep Content-Length from backend if we can. * 1602_ - Fix a cornercase related to empty pass objects. * 1607_ - Don't leak reqs on failure to revive from waitinglist. * 1610_ - Update forgotten varnishlog example to 4.0 syntax. * 1612_ - Fix a cornercase related to empty pass objects. * 1623_ - Fix varnishhist -d segfault. * 1636_ - Outdated paragraph in Vary: documentation * 1638_ - Fix panic when retrying a failed backend fetch. * 1639_ - Restore the default SIGSEGV handler during pan_ic * 1647_ - Relax an assertion for the IMS update candidate object. * 1648_ - Avoid partial IMS updates to replace old object. * 1650_ - Collapse multiple X-Forwarded-For headers .. _1349: https://www.varnish-cache.org/trac/ticket/1349 .. _1378: https://www.varnish-cache.org/trac/ticket/1378 .. _1596: https://www.varnish-cache.org/trac/ticket/1596 .. _1506: https://www.varnish-cache.org/trac/ticket/1506 .. _1602: https://www.varnish-cache.org/trac/ticket/1602 .. _1607: https://www.varnish-cache.org/trac/ticket/1607 .. _1610: https://www.varnish-cache.org/trac/ticket/1610 .. _1612: https://www.varnish-cache.org/trac/ticket/1612 .. _1623: https://www.varnish-cache.org/trac/ticket/1623 .. _1636: https://www.varnish-cache.org/trac/ticket/1636 .. _1638: https://www.varnish-cache.org/trac/ticket/1638 .. _1639: https://www.varnish-cache.org/trac/ticket/1639 .. _1647: https://www.varnish-cache.org/trac/ticket/1647 .. _1648: https://www.varnish-cache.org/trac/ticket/1648 .. 
_1650: https://www.varnish-cache.org/trac/ticket/1650 ============================================ Changes from 4.0.2-rc1 to 4.0.2 (2014-10-08) ============================================ New since 4.0.2-rc1: - [varnishlog] -k argument is back. (exit after n records) - [varnishadm] vcl.show is now listed in help. ============================================ Changes from 4.0.1 to 4.0.2-rc1 (2014-09-23) ============================================ New since 4.0.1: - [libvmod-std] New function strstr() for matching substrings. - server.(hostname|identity) is now available in all VCL functions. - VCL variable type BYTES was added. - `workspace_client` default is now 9k. - [varnishstat] Update interval can now be subsecond. - Document that reloading VCL does not reload a VMOD. - Guru meditation page is now valid HTML5. - [varnishstat] hitrate calculation is back. - New parameter `group_cc` adds a GID to the grouplist of VCL compiler sandbox. - Parameter shm_reclen is now an alias for vsl_reclen. - Workspace overflows are now handled with a 500 client response. - VCL variable type added: HTTP, representing a HTTP header set. - It is now possible to return(synth) from vcl_deliver. - [varnishadm] vcl.show now has a -v option that output the complete set of VCL and included VCL files. - RHEL7 packaging (systemd) was added. - [libvmod-std] querysort() fixed parameter limit has been lifted. - Fix small memory leak in ESI parser. - Fix unreported race/assert in V1D_Deliver(). Bugs fixed ---------- * 1553_ - Fully reset workspace (incl. Vary state) before reusing it. * 1551_ - Handle workspace exhaustion during purge. * 1591_ - Group entries correctly in varnishtop. * 1592_ - Bail out on workspace exhaustion in VRT_IP_string. * 1538_ - Relax VMOD ABI check for release branches. * 1584_ - Don't log garbage/non-HTTP requests. [varnishncsa] * 1407_ - Don't rename VSM file until child has started. * 1466_ - Don't leak request structs on restart after waitinglist. * 1580_ - Output warning if started without -b and -f. [varnishd] * 1583_ - Abort on fatal sandbox errors on Solaris. (Related: 1572_) * 1585_ - Handle fatal sandbox errors. * 1572_ - Exit codes have been cleaned up. * 1569_ - Order of symbols should not influence compilation result. * 1579_ - Clean up type inference in VCL. * 1578_ - Don't count Age twice when computing new object TTL. * 1574_ - std.syslog() logged empty strings. * 1555_ - autoconf editline/readline build issue. * 1568_ - Skip NULL arguments when hashing. * 1567_ - Compile on systems without SO_SNDTIMEO/SO_RCVTIMEO. * 1512_ - Changes to bereq are lost between v_b_r and v_b_f. * 1563_ - Increase varnishtest read timeout. * 1561_ - Never call a VDP with zero length unless done. * 1562_ - Fail correctly when rereading a failed client request body. * 1521_ - VCL compilation fails on OSX x86_64. * 1547_ - Panic when increasing shm_reclen. * 1503_ - Document return(retry). * 1581_ - Don't log duplicate Begin records to shmlog. * 1588_ - Correct timestamps on pipelined requests. * 1575_ - Use all director backends when looking for a healthy one. * 1577_ - Read the full request body if shunted to synth. * 1532_ - Use correct VCL representation of reals. * 1531_ - Work around libedit bug in varnishadm. .. _1553: https://www.varnish-cache.org/trac/ticket/1553 .. _1551: https://www.varnish-cache.org/trac/ticket/1551 .. _1591: https://www.varnish-cache.org/trac/ticket/1591 .. _1592: https://www.varnish-cache.org/trac/ticket/1592 .. _1538: https://www.varnish-cache.org/trac/ticket/1538 .. 
_1584: https://www.varnish-cache.org/trac/ticket/1584 .. _1407: https://www.varnish-cache.org/trac/ticket/1407 .. _1466: https://www.varnish-cache.org/trac/ticket/1466 .. _1580: https://www.varnish-cache.org/trac/ticket/1580 .. _1583: https://www.varnish-cache.org/trac/ticket/1583 .. _1585: https://www.varnish-cache.org/trac/ticket/1585 .. _1572: https://www.varnish-cache.org/trac/ticket/1572 .. _1569: https://www.varnish-cache.org/trac/ticket/1569 .. _1579: https://www.varnish-cache.org/trac/ticket/1579 .. _1578: https://www.varnish-cache.org/trac/ticket/1578 .. _1574: https://www.varnish-cache.org/trac/ticket/1574 .. _1555: https://www.varnish-cache.org/trac/ticket/1555 .. _1568: https://www.varnish-cache.org/trac/ticket/1568 .. _1567: https://www.varnish-cache.org/trac/ticket/1567 .. _1512: https://www.varnish-cache.org/trac/ticket/1512 .. _1563: https://www.varnish-cache.org/trac/ticket/1563 .. _1561: https://www.varnish-cache.org/trac/ticket/1561 .. _1562: https://www.varnish-cache.org/trac/ticket/1562 .. _1521: https://www.varnish-cache.org/trac/ticket/1521 .. _1547: https://www.varnish-cache.org/trac/ticket/1547 .. _1503: https://www.varnish-cache.org/trac/ticket/1503 .. _1581: https://www.varnish-cache.org/trac/ticket/1581 .. _1588: https://www.varnish-cache.org/trac/ticket/1588 .. _1575: https://www.varnish-cache.org/trac/ticket/1575 .. _1577: https://www.varnish-cache.org/trac/ticket/1577 .. _1532: https://www.varnish-cache.org/trac/ticket/1532 .. _1531: https://www.varnish-cache.org/trac/ticket/1531 ======================================== Changes from 4.0.0 to 4.0.1 (2014-06-24) ======================================== New since 4.0.0: - New functions in vmod_std: real2time, time2integer, time2real, real. - Chunked requests are now supported. (pass) - Add std.querysort() that sorts GET query arguments. (from libvmod-boltsort) - Varnish will no longer reply with "200 Not Modified". - Backend IMS is now only attempted when last status was 200. - Packaging now uses find-provides instead of find-requires. [redhat] - Two new counters: n_purges and n_obj_purged. - Core size can now be set from /etc/sysconfig/varnish [redhat] - Via header set is now RFC compliant. - Removed "purge" keyword in VCL. Use return(purge) instead. - fallback director is now documented. - %D format flag in varnishncsa is now truncated to an integer value. - persistent storage backend is now deprecated. https://www.varnish-cache.org/docs/trunk/phk/persistent.html - Added format flags %I (total bytes received) and %O (total bytes sent) for varnishncsa. - python-docutils >= 0.6 is now required. - Support year (y) as a duration in VCL. - VMOD ABI requirements are relaxed, a VMOD no longer have to be run on the same git revision as it was compiled for. Replaced by a major/minor ABI counter. Bugs fixed ---------- * 1269_ - Use correct byte counters in varnishncsa when piping a request. * 1524_ - Chunked requests should be pipe-able. * 1530_ - Expire old object on successful IMS fetch. * 1475_ - time-to-first-byte in varnishncsa was potentially dishonest. * 1480_ - Porting guide for 4.0 is incomplete. * 1482_ - Inherit group memberships of -u specified user. * 1473_ - Fail correctly in configure when rst2man is not found. * 1486_ - Truncate negative Age values to zero. * 1488_ - Don't panic on high request rates. * 1489_ - req.esi should only be available in client threads. * 1490_ - Fix thread leak when reducing number of threads. * 1491_ - Reorder backend connection close procedure to help test cases. 
* 1498_ - Prefix translated VCL names to avoid name clashes. * 1499_ - Don't leak an objcore when HSH_Lookup returns expired object. * 1493_ - vcl_purge can return synth or restart. * 1476_ - Cope with systems having sys/endian.h and endian.h. * 1496_ - varnishadm should be consistent in argv ordering. * 1494_ - Don't panic on VCL-initiated retry after a backend 500 error. * 1139_ - Also reset keep (for IMS) time when purging. * 1478_ - Avoid panic when delivering an object that expires during delivery. * 1504_ - ACLs can be unreferenced with vcc_err_unref=off set. * 1501_ - Handle that a director couldn't pick a backend. * 1495_ - Reduce WRK_SumStat contention. * 1510_ - Complain on symbol reuse in VCL. * 1514_ - Document storage.NAME.free_space and .used_space [docs] * 1518_ - Suppress body on 304 response when using ESI. * 1519_ - Round-robin director does not support weight. [docs] .. _1269: https://www.varnish-cache.org/trac/ticket/1269 .. _1524: https://www.varnish-cache.org/trac/ticket/1524 .. _1530: https://www.varnish-cache.org/trac/ticket/1530 .. _1475: https://www.varnish-cache.org/trac/ticket/1475 .. _1480: https://www.varnish-cache.org/trac/ticket/1480 .. _1482: https://www.varnish-cache.org/trac/ticket/1482 .. _1473: https://www.varnish-cache.org/trac/ticket/1473 .. _1486: https://www.varnish-cache.org/trac/ticket/1486 .. _1488: https://www.varnish-cache.org/trac/ticket/1488 .. _1489: https://www.varnish-cache.org/trac/ticket/1489 .. _1490: https://www.varnish-cache.org/trac/ticket/1490 .. _1491: https://www.varnish-cache.org/trac/ticket/1491 .. _1498: https://www.varnish-cache.org/trac/ticket/1498 .. _1499: https://www.varnish-cache.org/trac/ticket/1499 .. _1493: https://www.varnish-cache.org/trac/ticket/1493 .. _1476: https://www.varnish-cache.org/trac/ticket/1476 .. _1496: https://www.varnish-cache.org/trac/ticket/1496 .. _1494: https://www.varnish-cache.org/trac/ticket/1494 .. _1139: https://www.varnish-cache.org/trac/ticket/1139 .. _1478: https://www.varnish-cache.org/trac/ticket/1478 .. _1504: https://www.varnish-cache.org/trac/ticket/1504 .. _1501: https://www.varnish-cache.org/trac/ticket/1501 .. _1495: https://www.varnish-cache.org/trac/ticket/1495 .. _1510: https://www.varnish-cache.org/trac/ticket/1510 .. _1518: https://www.varnish-cache.org/trac/ticket/1518 .. _1519: https://www.varnish-cache.org/trac/ticket/1519 ============================================== Changes from 4.0.0 beta1 to 4.0.0 (2014-04-10) ============================================== New since 4.0.0-beta1: - improved varnishstat documentation. - In VCL, req.backend_hint is available in vcl_hit - ncurses is now a dependency. Bugs fixed ---------- * 1469_ - Fix build error on PPC * 1468_ - Set ttl=0 on failed objects * 1462_ - Handle duplicate ReqURL in varnishncsa. * 1467_ - Fix missing clearing of oc->busyobj on HSH_Fail. .. _1469: https://www.varnish-cache.org/trac/ticket/1469 .. _1468: https://www.varnish-cache.org/trac/ticket/1468 .. _1462: https://www.varnish-cache.org/trac/ticket/1462 .. _1467: https://www.varnish-cache.org/trac/ticket/1467 ================================================== Changes from 4.0.0 TP2 to 4.0.0 beta1 (2014-03-27) ================================================== New since TP2: - Previous always-appended code called default.vcl is now called builtin.vcl. The new example.vcl is recommended as a starting point for new users. - vcl_error is now called vcl_synth, and does not any more mandate closing the client connection. 
- New VCL function vcl_backend_error, where you can change the 503 prepared if all your backends are failing. This can then be cached as a regular object. - Keyword "remove" in VCL is replaced by "unset". - new timestamp and accounting records in varnishlog. - std.timestamp() is introduced. - stored objects are now read only, meaning obj.hits now counts per objecthead instead. obj.lastuse saw little use and has been removed. - builtin VCL now does return(pipe) for chunked POST and PUT requests. - python-docutils and rst2man are now build requirements. - cli_timeout is now 60 seconds to avoid slaughtering the child process in times of high IO load/scheduling latency. - return(purge) from vcl_recv is now valid. - return(hash) is now the default return action from vcl_recv. - req.backend is now req.backend_hint. beresp.storage is beresp.storage_hint. Bugs fixed ---------- * 1460_ - tools now use the new timestamp format. * 1450_ - varnishstat -l segmentation fault. * 1320_ - Work around Content-Length: 0 and Content-Encoding: gzip gracefully. * 1458_ - Panic on busy object. * 1417_ - Handle return(abandon) in vcl_backend_response. * 1455_ - vcl_pipe now sets Connection: close by default on backend requests. * 1454_ - X-Forwarded-For is now done in C, before vcl_recv is run. * 1436_ - Better explanation when missing an import in VCL. * 1440_ - Serve ESI-includes from a different backend. * 1441_ - Incorrect grouping when logging ESI subrequests. * 1434_ - std.duration can now do ms/milliseconds. * 1419_ - Don't put objcores on the ban list until they go non-BUSY. * 1405_ - Ban lurker does not always evict all objects. .. _1460: https://www.varnish-cache.org/trac/ticket/1460 .. _1450: https://www.varnish-cache.org/trac/ticket/1450 .. _1320: https://www.varnish-cache.org/trac/ticket/1320 .. _1458: https://www.varnish-cache.org/trac/ticket/1458 .. _1417: https://www.varnish-cache.org/trac/ticket/1417 .. _1455: https://www.varnish-cache.org/trac/ticket/1455 .. _1454: https://www.varnish-cache.org/trac/ticket/1454 .. _1436: https://www.varnish-cache.org/trac/ticket/1436 .. _1440: https://www.varnish-cache.org/trac/ticket/1440 .. _1441: https://www.varnish-cache.org/trac/ticket/1441 .. _1434: https://www.varnish-cache.org/trac/ticket/1434 .. _1419: https://www.varnish-cache.org/trac/ticket/1419 .. _1405: https://www.varnish-cache.org/trac/ticket/1405 ================================================ Changes from 4.0.0 TP1 to 4.0.0 TP2 (2014-01-23) ================================================ New since from 4.0.0 TP1 ------------------------ - New VCL_BLOB type to pass binary data between VMODs. - New format for VMOD description files. (.vcc) Bugs fixed ---------- * 1404_ - Don't send Content-Length on 304 Not Modified responses. * 1401_ - Varnish would crash when retrying a backend fetch too many times. * 1399_ - Memory get freed while in use by another thread/object * 1398_ - Fix NULL deref related to a backend we don't know any more. * 1397_ - Crash on backend fetch while LRUing. * 1395_ - End up in vcl_error also if fetch fails vcl_backend_response. * 1391_ - Client abort and retry during a streaming fetch would make Varnish assert. * 1390_ - Fix assert if the ban lurker is overtaken by new duplicate bans. * 1385_ - ban lurker doesn't remove (G)one bans * 1383_ - varnishncsa logs requests for localhost regardless of host header. * 1382_ - varnishncsa prints nulls as part of request string. 
* 1381_ - Ensure vmod_director is installed * 1323_ - Add a missing boundary check for Range requests * 1268_ - shortlived parameter now uses TTL+grace+keep instead of just TTL. * Fix build error on OpenBSD (TCP_KEEP) * n_object wasn't being decremented correctly on object expire. * Example default.vcl in distribution is now 4.0-ready. Open issues ----------- * 1405_ - Ban lurker does not always evict all objects. .. _1405: https://www.varnish-cache.org/trac/ticket/1405 .. _1404: https://www.varnish-cache.org/trac/ticket/1404 .. _1401: https://www.varnish-cache.org/trac/ticket/1401 .. _1399: https://www.varnish-cache.org/trac/ticket/1399 .. _1398: https://www.varnish-cache.org/trac/ticket/1398 .. _1397: https://www.varnish-cache.org/trac/ticket/1397 .. _1395: https://www.varnish-cache.org/trac/ticket/1395 .. _1391: https://www.varnish-cache.org/trac/ticket/1391 .. _1390: https://www.varnish-cache.org/trac/ticket/1390 .. _1385: https://www.varnish-cache.org/trac/ticket/1385 .. _1383: https://www.varnish-cache.org/trac/ticket/1383 .. _1382: https://www.varnish-cache.org/trac/ticket/1382 .. _1381: https://www.varnish-cache.org/trac/ticket/1381 .. _1323: https://www.varnish-cache.org/trac/ticket/1323 .. _1268: https://www.varnish-cache.org/trac/ticket/1268 ============================================ Changes from 3.0.7-rc1 to 3.0.7 (2015-03-23) ============================================ - No changes. ============================================ Changes from 3.0.6 to 3.0.7-rc1 (2015-03-18) ============================================ - Requests with multiple Content-Length headers will now fail. - Stop recognizing a single CR (\r) as a HTTP line separator. This opened up a possible cache poisoning attack in stacked installations where sslterminator/varnish/backend had different CR handling. - Improved error detection on master-child process communication, leading to faster recovery (child restart) if communication loses sync. - Fix a corner-case where Content-Length was wrong for HTTP 1.0 clients, when using gzip and streaming. Bug 1627_. - More robust handling of hop-by-hop headers. - [packaging] Coherent Redhat pidfile in init script. Bug 1690_. - Avoid memory leak when adding bans. .. _1627: http://varnish-cache.org/trac/ticket/1627 .. _1690: http://varnish-cache.org/trac/ticket/1690 =========================================== Changes from 3.0.6rc1 to 3.0.6 (2014-10-16) =========================================== - Minor changes to documentation. - [varnishadm] Add termcap workaround for libedit. Bug 1531_. =========================================== Changes from 3.0.5 to 3.0.6rc1 (2014-06-24) =========================================== - Document storage..* VCL variables. Bug 1514_. - Fix memory alignment panic when http_max_hdr is not a multiple of 4. Bug 1327_. - Avoid negative ReqEnd timestamps with ESI. Bug 1297_. - %D format for varnishncsa is now an integer (as documented) - Fix compile errors with clang. - Clear objectcore flags earlier in ban lurker to avoid spinning thread. Bug 1470_. - Patch embedded jemalloc to avoid segfault. Bug 1448_. - Allow backend names to start with if, include or else. Bug 1439_. - Stop handling gzip after gzip body end. Bug 1086_. - Document %D and %T for varnishncsa. .. _1514: https://www.varnish-cache.org/trac/ticket/1514 .. _1327: https://www.varnish-cache.org/trac/ticket/1327 .. _1297: https://www.varnish-cache.org/trac/ticket/1297 .. _1470: https://www.varnish-cache.org/trac/ticket/1470 .. _1448: https://www.varnish-cache.org/trac/ticket/1448 .. 
_1439: https://www.varnish-cache.org/trac/ticket/1439 .. _1086: https://www.varnish-cache.org/trac/ticket/1086 ============================================= Changes from 3.0.5 rc 1 to 3.0.5 (2013-12-02) ============================================= varnishd -------- - Always check the local address of a socket. This avoids a crash if server.ip is accessed after a client has closed the connection. `Bug #1376` .. _bug #1376: https://www.varnish-cache.org/trac/ticket/1376 ================================ Changes from 3.0.4 to 3.0.5 rc 1 ================================ varnishd -------- - Stop printing error messages on ESI parse errors - Fix a problem where Varnish would segfault if the first part of a synthetic page was NULL. `Bug #1287` - If streaming was used, you could in some cases end up with duplicate content headers being sent to clients. `Bug #1272` - If we receive a completely garbled request, don't pass through vcl_error, since we could then end up in vcl_recv through a restart and things would go downhill from there. `Bug #1367` - Prettify backtraces on panic slightly. .. _bug #1287: https://www.varnish-cache.org/trac/ticket/1287 .. _bug #1272: https://www.varnish-cache.org/trac/ticket/1272 .. _bug #1367: https://www.varnish-cache.org/trac/ticket/1367 varnishlog ---------- - Correct an error where -m, -c and -b would interact badly, leading to lack of matches. Also, emit BackendXID to signify the start of a transaction. `Bug #1325` .. _bug #1325: https://www.varnish-cache.org/trac/ticket/1325 varnishadm ---------- - Handle input from stdin properly. `Bug #1314` .. _bug #1314: https://www.varnish-cache.org/trac/ticket/1314 ============================================= Changes from 3.0.4 rc 1 to 3.0.4 (2013-06-14) ============================================= varnishd -------- - Set the waiter pipe as non-blocking and record overflows. `Bug #1285` - Fix up a bug in the ACL compile code that could lead to false negatives. CVE-2013-4090. `Bug #1312` - Return an error if the client sends multiple Host headers. .. _bug #1285: https://www.varnish-cache.org/trac/ticket/1285 .. _bug #1312: https://www.varnish-cache.org/trac/ticket/1312 ================================ Changes from 3.0.3 to 3.0.4 rc 1 ================================ varnishd -------- - Fix error handling when uncompressing fetched objects for ESI processing. `Bug #1184` - Be clearer about which timeout was reached in logs. - Correctly decrement n_waitinglist counter. `Bug #1261` - Turn off Nagle/set TCP_NODELAY. - Avoid panic on malformed Vary headers. `Bug #1275` - Increase the maximum length of backend names. `Bug #1224` - Add support for banning on http.status. `Bug #1076` - Make hit-for-pass correctly prefer the transient storage. .. _bug #1076: https://www.varnish-cache.org/trac/ticket/1076 .. _bug #1184: https://www.varnish-cache.org/trac/ticket/1184 .. _bug #1224: https://www.varnish-cache.org/trac/ticket/1224 .. _bug #1261: https://www.varnish-cache.org/trac/ticket/1261 .. _bug #1275: https://www.varnish-cache.org/trac/ticket/1275 varnishlog ---------- - If -m, but neither -b or -c is given, assume both. This filters out a lot of noise when -m is used to filter. `Bug #1071` .. _bug #1071: https://www.varnish-cache.org/trac/ticket/1071 varnishadm ---------- - Improve tab completion and require libedit/readline to build. varnishtop ---------- - Reopen log file if Varnish is restarted. varnishncsa ----------- - Handle file descriptors above 64k (by ignoring them). 
Prevents a crash in some cases with corrupted shared memory logs. - Add %D and %T support for more timing information. Other ----- - Documentation updates. - Fixes for OSX - Disable PCRE JIT-er, since it's broken in some PCRE versions, at least on i386. - Make libvarnish prefer exact hits when looking for VSL tags. ======================================== Changes from 3.0.2 to 3.0.3 (2012-08-20) ======================================== varnishd -------- - Fix a race on the n_sess counter. This race made varnish do excessive session workspace allocations. `Bug #897`_. - Fix some crashes in the gzip code when it runs out of memory. `Bug #1037`_. `Bug #1043`_. `Bug #1044`_. - Fix a bug where the regular expression parser could end up in an infinite loop. `Bug #1047`_. - Fix a memory leak in the regex code. - DNS director now uses port 80 by default if not specified. - Introduce `idle_send_timeout` and increase default value for `send_timeout` to 600s. This allows a long send timeout for slow clients while still being able to disconnect idle clients. - Fix an issue where `esi:remove` did not remove HTML comments. `Bug #1092`_. - Fix a crash when passing with streaming on. - Fix a crash in the idle session timeout code. - Fix an issue where the poll waiter did not time out clients if all clients were idle. `Bug #1023`_. - Log regex errors instead of crashing. - Introduce `pcre_match_limit` and `pcre_match_limit_recursion` parameters. - Add CLI commands to manually control health state of a backend. - Fix an issue where the s_bodybytes counter is not updated correctly on gunzipped delivery. - Fix a crash when we couldn't allocate memory for a fetched object. `Bug #1100`_. - Fix an issue where objects could end up in the transient store with a long TTL, when memory could not be allocated for them in the requested store. `Bug #1140`_. - Activate req.hash_ignore_busy when req.hash_always_miss is activated. `Bug #1073`_. - Reject invalid tcp port numbers for listen address. `Bug #1035`_. - Enable JIT for better performing regular expressions. `Bug #1080`_. - Return VCL errors in exit code when using -C. `Bug #1069`_. - Stricter validation of acl syntax, to avoid a crash with 5-octet IPv4 addresses. `Bug #1126`_. - Fix a crash when first argument to regsub was null. `Bug #1125`_. - Fix a case where varnish delivered corrupt gzip content when using ESI. `Bug #1109`_. - Fix a case where varnish didn't remove the old Date header and served it alongside the varnish-generated Date header. `Bug #1104`_. - Make saint mode work, for the case where we have no object with that hash. `Bug #1091`_. - Don't save the object body on hit-for-pass objects. - n_ban_gone counter added to count the number of "gone" bans. - Ban lurker rewritten to properly sleep when no bans are present, and otherwise to process the list at the configured speed. - Fix a case where varnish delivered wrong content for an uncompressed page with compressed ESI child. `Bug #1029`_. - Fix an issue where varnish runs out of thread workspace when processing many ESI includes on an object. `Bug #1038`_. - Fix a crash when streaming was enabled for an empty body. - Better error reporting for some fetch errors. - Small performance optimizations. .. _bug #897: https://www.varnish-cache.org/trac/ticket/897 .. _bug #1023: https://www.varnish-cache.org/trac/ticket/1023 .. _bug #1029: https://www.varnish-cache.org/trac/ticket/1029 .. _bug #1035: https://www.varnish-cache.org/trac/ticket/1035 .. _bug #1037: https://www.varnish-cache.org/trac/ticket/1037 ..
_bug #1038: https://www.varnish-cache.org/trac/ticket/1038 .. _bug #1043: https://www.varnish-cache.org/trac/ticket/1043 .. _bug #1044: https://www.varnish-cache.org/trac/ticket/1044 .. _bug #1047: https://www.varnish-cache.org/trac/ticket/1047 .. _bug #1069: https://www.varnish-cache.org/trac/ticket/1069 .. _bug #1073: https://www.varnish-cache.org/trac/ticket/1073 .. _bug #1080: https://www.varnish-cache.org/trac/ticket/1080 .. _bug #1091: https://www.varnish-cache.org/trac/ticket/1091 .. _bug #1092: https://www.varnish-cache.org/trac/ticket/1092 .. _bug #1100: https://www.varnish-cache.org/trac/ticket/1100 .. _bug #1104: https://www.varnish-cache.org/trac/ticket/1104 .. _bug #1109: https://www.varnish-cache.org/trac/ticket/1109 .. _bug #1125: https://www.varnish-cache.org/trac/ticket/1125 .. _bug #1126: https://www.varnish-cache.org/trac/ticket/1126 .. _bug #1140: https://www.varnish-cache.org/trac/ticket/1140 varnishncsa ----------- - Support for \t\n in varnishncsa format strings. - Add new format: %{VCL_Log:foo}x which output key:value from std.log() in VCL. - Add user-defined date formatting, using %{format}t. varnishtest ----------- - resp.body is now available for inspection. - Make it possible to test for the absence of an HTTP header. `Bug #1062`_. - Log the full panic message instead of shortening it to 512 characters. .. _bug #1062: https://www.varnish-cache.org/trac/ticket/1062 varnishstat ----------- - Add json output (-j). Other ----- - Documentation updates. - Bump minimum number of threads to 50 in RPM packages. - RPM packaging updates. - Fix some compilation warnings on Solaris. - Fix some build issues on Open/Net/DragonFly-BSD. - Fix build on FreeBSD 10-current. - Fix libedit detection on \*BSD OSes. `Bug #1003`_. .. _bug #1003: https://www.varnish-cache.org/trac/ticket/1003 ============================================= Changes from 3.0.2 rc 1 to 3.0.2 (2011-10-26) ============================================= varnishd -------- - Make the size of the synthetic object workspace equal to `http_resp_size` and add workaround to avoid a crash when setting too long response strings for synthetic objects. - Ensure the ban lurker always sleeps the advertised 1 second when it does not have anything to do. - Remove error from `vcl_deliver`. Previously this would assert while it will now give a syntax error. varnishncsa ----------- - Add default values for some fields when logging incomplete records and document the default values. Other ----- - Documentation updates - Some Solaris portability updates. ============================================= Changes from 3.0.1 to 3.0.2 rc 1 (2011-10-06) ============================================= varnishd -------- - Only log the first 20 bytes of extra headers to prevent overflows. - Fix crasher bug which sometimes happened if responses are queued and the backend sends us Vary. `Bug #994`_. - Log correct size of compressed when uncompressing them for clients that do not support compression. `Bug #996`_. - Only send Range responses if we are going to send a body. `Bug #1007`_. - When varnishd creates a storage file, also unlink it to avoid leaking disk space over time. `Bug #1008`_. - The default size of the `-s file` parameter has been changed to 100MB instead of 50% of the available disk space. - The limit on the number of objects we remove from the cache to make room for a new one was mistakenly lowered to 10 in 3.0.1. This has been raised back to 50. `Bug #1012`_. - `http_req_size` and `http_resp_size` have been increased to 8192 bytes. 
This better matches what other HTTPds have. `Bug #1016`_. .. _bug #994: https://www.varnish-cache.org/trac/ticket/994 .. _bug #992: https://www.varnish-cache.org/trac/ticket/992 .. _bug #996: https://www.varnish-cache.org/trac/ticket/996 .. _bug #1007: https://www.varnish-cache.org/trac/ticket/1007 .. _bug #1008: https://www.varnish-cache.org/trac/ticket/1008 .. _bug #1012: https://www.varnish-cache.org/trac/ticket/1012 .. _bug #1016: https://www.varnish-cache.org/trac/ticket/1016 VCL --- - Allow relational comparisons of floating point types. - Make it possible for VMODs to fail loading and so cause the VCL loading to fail. varnishncsa ----------- - Fixed crash when client was sending illegal HTTP headers. - `%{Varnish:handling}` in format strings was broken, this has been fixed. Other ----- - Documentation updates - Some Solaris portability updates. ============================================= Changes from 3.0.1 rc 1 to 3.0.1 (2011-08-30) ============================================= varnishd -------- - Fix crash in streaming code. - Add `fallback` director, as a variant of the `round-robin` director. - The parameter `http_req_size` has been reduced on 32 bit machines. VCL --- - Disallow error in the `vcl_init` and `vcl_fini` VCL functions. varnishncsa ----------- - Fixed crash when using `-X`. - Fix error when the time to first byte was in the format string. Other ----- - Documentation updates ============================================= Changes from 3.0.0 to 3.0.1 rc 1 (2011-08-24) ============================================= varnishd -------- - Avoid sending an empty end-chunk when sending bodyless responses. - `http_resp_hdr_len` and `http_req_hdr_len` were set to too low values leading to clients receiving `HTTP 400 Bad Request` errors. The limit has been increased and the error code is now `HTTP 413 Request entity too large`. - Objects with grace or keep set were mistakenly considered as candidates for the transient storage. They now have their grace and keep limited to limit the memory usage of the transient stevedore. - If a request was restarted from `vcl_miss` or `vcl_pass` it would crash. This has been fixed. `Bug #965`_. - Only the first few clients waiting for an object from the backend would be woken up when object arrived and this lead to some clients getting stuck for a long time. This has now been fixed. `Bug #963`_. - The `hash` and `client` directors would mistakenly retry fetching an object from the same backend unless health probes were enabled. This has been fixed and it will now retry a different backend. .. _bug #965: https://www.varnish-cache.org/trac/ticket/965 .. _bug #963: https://www.varnish-cache.org/trac/ticket/963 VCL --- - Request specific variables such as `client.*` and `server.*` are now correctly marked as not available in `vcl_init` and `vcl_fini`. - The VCL compiler would fault if two IP comparisons were done on the same line. This now works correctly. `Bug #948`_. .. _bug #948: https://www.varnish-cache.org/trac/ticket/948 varnishncsa ----------- - Add support for logging arbitrary request and response headers. - Fix crashes if `hitmiss` and `handling` have not yet been set. - Avoid printing partial log lines if there is an error in a format string. - Report user specified format string errors better. varnishlog ---------- - `varnishlog -r` now works correctly again and no longer opens the shared log file of the running Varnish. Other ----- - Various documentation updates. - Minor compilation fixes for newer compilers. 
- A bug in the ESI entity replacement parser has been fixed. `Bug #961`_. - The ABI of VMODs are now checked. This will require a rebuild of all VMODs against the new version of Varnish. .. _bug #961: https://www.varnish-cache.org/trac/ticket/961 ============================================= Changes from 3.0 beta 2 to 3.0.0 (2011-06-16) ============================================= varnishd -------- - Avoid sending an empty end-chunk when sending bodyless responses. VCL --- - The `synthetic` keyword has now been properly marked as only available in `vcl_deliver`. `Bug #936`_. .. _bug #936: https://www.varnish-cache.org/trac/ticket/936 varnishadm ---------- - Fix crash if the secret file was unreadable. `Bug #935`_. - Always exit if `varnishadm` can't connect to the backend for any reason. .. _bug #935: https://www.varnish-cache.org/trac/ticket/935 ===================================== Changes from 3.0 beta 1 to 3.0 beta 2 ===================================== varnishd -------- - thread_pool_min and thread_pool_max now each refer to the number of threads per pool, rather than being inconsistent as they were before. - 307 Temporary redirect is now considered cacheable. `Bug #908`_. - The `stats` command has been removed from the CLI interface. With the new counters, it would mean implementing more and more of varnishstat in the master CLI process and the CLI is single-threaded so we do not want to do this work there in the first place. Use varnishstat instead. .. _bug #908: https://www.varnish-cache.org/trac/ticket/908 VCL --- - VCL now treats null arguments (unset headers for instance) as empty strings. `Bug #913`_. - VCL now has vcl_init and vcl_fini functions that are called when a given VCL has been loaded and unloaded. - There is no longer any interpolation of the right hand side in bans where the ban is a single string. This was confusing and you now have to make sure bits are inside or outside string context as appropriate. - Varnish is now stricter in enforcing no duplication of probes, backends and ACLs. .. _bug #913: https://www.varnish-cache.org/trac/ticket/913 varnishncsa ----------- - varnishncsa now ignores piped requests, since we have no way of knowing their return status. VMODs ----- - The std module now has proper documentation, including a manual page ================================ Changes from 2.1.5 to 3.0 beta 1 ================================ Upcoming changes ---------------- - The interpretation of bans will change slightly between 3.0 beta 1 and 3.0 release. Currently, doing ``ban("req.url == req.url")`` will cause the right hand req.url to be interpreted in the context of the request creating the ban. This will change so you will have to do ``ban("req.url == " + req.url)`` instead. This syntax already works and is recommended. varnishd -------- - Add streaming on ``pass`` and ``miss``. This is controlled by the ``beresp.do_stream`` boolean. This includes support for compression/uncompression. - Add support for ESI and gzip. - Handle objects larger than 2G. - HTTP Range support is now enabled by default - The ban lurker is enabled by default - if there is a backend or director with the name ``default``, use that as the default backend, otherwise use the first one listed. - Add many more stats counters. Amongst those, add per storage backend stats and per-backend statistics. - Syslog the platform we are running on - The ``-l`` (shared memory log file) argument has been changed, please see the varnishd manual for the new syntax. 
- The ``-S`` and ``-T`` arguments are now stored in the shmlog - Fix off-by-one error when exactly filling up the workspace. `Bug #693`_. - Make it possible to name storage backends. The names have to be unique. - Update usage output to match the code. `Bug #683`_ - Add per-backend health information to shared memory log. - Always recreate the shared memory log on startup. - Add a ``vcl_dir`` parameter. This is used to resolve relative path names for ``vcl.load`` and ``include`` in .vcl files. - Make it possible to specify ``-T :0``. This causes varnishd to look for a free port automatically. The port is written in the shared memory log so varnishadm can find it. - Classify locks into kinds and collect stats for each kind, recording the data in the shared memory log. - Auto-detect necessary flags for pthread support and ``VCC_CC`` flags. This should make Varnish somewhat happier on Solaris. `Bug #663`_ - The ``overflow_max`` parameter has been renamed to ``queue_max``. - If setting a parameter fails, report which parameter failed as this is not obvious during startup. - Add a parameter named ``shortlived``. Objects whose TTL is less than the parameter go into transient (malloc) storage. - Reduce the default ``thread_add_delay`` to 2ms. - The ``max_esi_includes`` parameter has been renamed to ``max_esi_depth``. - Hash string components are now logged by default. - The default connect timeout parameter has been increased to 0.7 seconds. - The ``err_ttl`` parameter has been removed and is replaced by a setting in default.vcl. - The default ``send_timeout`` parameter has been reduced to 1 minute. - The default ``ban_lurker`` sleep has been set to 10ms. - When an object is banned, make sure to set its grace to 0 as well. - Add ``panic.show`` and ``panic.clear`` CLI commands. - The default ``http_resp_hdr_len`` and ``http_req_hdr_len`` has been increased to 2048 bytes. - If ``vcl_fetch`` results in ``restart`` or ``error``, close the backend connection rather than fetching the object. - If allocating storage for an object, try reducing the chunk size before evicting objects to make room. `Bug #880`_ - Add ``restart`` from ``vcl_deliver``. `Bug #411`_ - Fix an off-by-up-to-one-minus-epsilon bug where if an object from the backend did not have a last-modified header we would send out a 304 response which did include a ``Last-Modified`` header set to when we received the object. However, we would compare the timestamp to the fractional second we got the object, meaning any request with the exact timestamp would get a ``200`` response rather than the correct ``304``. - Fix a race condition in the ban lurker where a serving thread and the lurker would both look at an object at the same time, leading to Varnish crashing. - If a backend sends a ``Content-Length`` header and we are streaming and we are not uncompressing it, send the ``Content-Length`` header on, allowing browsers to diplay a progress bar. - All storage must be at least 1M large. This is to prevent administrator errors when specifying the size of storage where the admin might have forgotten to specify units. .. _bug #693: https://www.varnish-cache.org/trac/ticket/693 .. _bug #683: https://www.varnish-cache.org/trac/ticket/683 .. _bug #663: https://www.varnish-cache.org/trac/ticket/663 .. _bug #880: https://www.varnish-cache.org/trac/ticket/880 .. _bug #411: https://www.varnish-cache.org/trac/ticket/411 Tools ----- common ****** - Add an ``-m $tag:$regex`` parameter, used for selecting some transactions. 
The parameter can be repeated, in which case it is logically and-ed together. varnishadm ********** - varnishadm will now pick up the -S and -T arguments from the shared memory log, meaning just running it without any arguments will connect to the running varnish. `Bug #875`_ - varnishadm now accepts an -n argument to specify the location of the shared memory log file - add libedit support .. _bug #875: https://www.varnish-cache.org/trac/ticket/875 varnishstat *********** - reopen shared memory log if the varnishd process is restarted. - Improve support for selecting some, but not all fields using the ``-f`` argument. Please see the documentation for further details on the use of ``-f``. - display per-backend health information varnishncsa *********** - Report error if called with ``-i`` and ``-I`` as they do not make any sense for varnishncsa. - Add custom log formats, specified with ``-F``. Most of the Apache log formats are supported, as well as some Varnish-specific ones. See the documentation for further information. `Bug #712`_ and `bug #485`_ .. _bug #712: https://www.varnish-cache.org/trac/ticket/712 .. _bug #485: https://www.varnish-cache.org/trac/ticket/485 varnishtest *********** - add ``-l`` and ``-L`` switches which leave ``/tmp/vtc.*`` behind on error and unconditionally respectively. - add ``-j`` parameter to run tests in parallell and use this by default. varnishtop ********** - add ``-p $period`` parameter. The units in varnishtop were previously undefined, they are now in requests/period. The default period is 60 seconds. varnishlog ********** - group requests by default. This can be turned off by using ``-O`` - the ``-o`` parameter is now a no-op and is ignored. VMODs ----- - Add a std VMOD which includes a random function, log, syslog, fileread, collect, VCL --- - Change string concatenation to be done using ``+`` rather than implicitly. - Stop using ``%xx`` escapes in VCL strings. - Change ``req.hash += value`` to ``hash_data(value)`` - Variables in VCL now have distinct read/write access - ``bereq.connect_timeout`` is now available in ``vcl_pipe``. - Make it possible to declare probes outside of a director. Please see the documentation on how to do this. - The VCL compiler has been reworked greatly, expanding its abilities with regards to what kinds of expressions it understands. - Add ``beresp.backend.name``, ``beresp.backend.ip`` and ``beresp.backend.port`` variables. They are only available from ``vcl_fetch`` and are read only. `Bug #481`_ - The default VCL now calls pass for any objects where ``beresp.http.Vary == "*"``. `Bug #787`_ - The ``log`` keyword has been moved to the ``std`` VMOD. - It is now possible to choose which storage backend to be used - Add variables ``storage.$name.free_space``, ``storage.$name.used_space`` and ``storage.$name.happy`` - The variable ``req.can_gzip`` tells us whether the client accepts gzipped objects or not. - ``purge`` is now called ``ban``, since that is what it really is and has always been. - ``req.esi_level`` is now available. `Bug #782`_ - esi handling is now controlled by the ``beresp.do_esi`` boolean rather than the ``esi`` function. - ``beresp.do_gzip`` and ``beresp.do_gunzip`` now control whether an uncompressed object should be compressed and a compressed object should be uncompressed in the cache. - make it possible to control compression level using the ``gzip_level`` parameter. - ``obj.cacheable`` and ``beresp.cacheable`` have been removed. 
Cacheability is now solely through the ``beresp.ttl`` and ``beresp.grace`` variables. - setting the ``obj.ttl`` or ``beresp.ttl`` to zero now also sets the corresponding grace to zero. If you want a non-zero grace, set grace after setting the TTL. - ``return(pass)`` in ``vcl_fetch`` has been renamed to ``return(hit_for_pass)`` to make it clear that pass in ``vcl_fetch`` and ``vcl_recv`` are different beasts. - Add actual purge support. Doing ``purge`` will remove an object and all its variants. .. _bug #481: https://www.varnish-cache.org/trac/ticket/481 .. _bug #787: https://www.varnish-cache.org/trac/ticket/787 .. _bug #782: https://www.varnish-cache.org/trac/ticket/782 Libraries --------- - ``libvarnishapi`` has been overhauled and the API has been broken. Please see git commit logs and the support tools to understand what's been changed. - Add functions to walk over all the available counters. This is needed because some of the counter names might only be available at runtime. - Limit the amount of time varnishapi waits for a shared memory log to appear before returning an error. - All libraries but ``libvarnishapi`` have been moved to a private directory as they are not for public consumption and have no ABI/API guarantees. Other ----- - Python is now required to build - Varnish Cache is now consistently named Varnish Cache. - The compilation process now looks for kqueue on NetBSD - Make it possible to use a system jemalloc rather than the bundled version. - The documentation has been improved all over and should now be in much better shape than before ======================================== Changes from 2.1.4 to 2.1.5 (2011-01-25) ======================================== varnishd -------- - On pass from vcl\_recv, we did not remove the backends Content-Length header before adding our own. This could cause confusion for browsers and has been fixed. - Make pass with content-length work again. An issue with regards to 304, Content-Length and pass has been resolved. - An issue relating to passed requests with If-Modified-Since headers has been fixed. Varnish did not recognize that the 304-response did not have a body. - A potential lock-inversion with the ban lurker thread has been resolved. - Several build-dependency issues relating to rst2man have been fixed. Varnish should now build from source without rst2man if you are using tar-balls. - Ensure Varnish reads the expected last CRLF after chunked data from the backend. This allows re-use of the connection. - Remove a GNU Make-ism during make dist to make BSD happier. - Document the log, set, unset, return and restart statements in the VCL documentation. - Fix an embarrassingly old bug where Varnish would run out of workspace when requests come in fast over a single connection, typically during synthetic benchmarks. - Varnish will now allow If-Modified-Since requests to objects without a Last-Modified-header, and instead use the time the object was cached instead. - Do not filter out Content-Range headers in pass. - Require -d, -b, -f, -S or -T when starting varnishd. In human terms, this means that it is legal to start varnishd without a Vcl or backend, but only if you have a CLI channel of some kind. - Don't suppress Cache-Control headers in pass responses. - Merge multi-line Cache-Control and Vary header fields. Until now, no browsers have needed this, but Chromium seems to find it necessary to spread its Cache-Control across two lines, and we get to deal with it. - Make new-purge not touch busy objects. 
This fixes a potential crash when calling VRT\_purge.
- If there are several grace-able objects, pick the least expired one.
- Fix an issue with the varnishadm -T :6082 shorthand.
- Add Bourne-shell-like "here" documents on the CLI. Typical usage: vcl.inline vcl\_new << 42 backend foo {...} sub vcl\_recv {...} 42
- Add CLI version to the CLI-banner, starting with version 1.0 to mark here-documents.
- Fix a problem with the expiry thread slacking off during high load.

varnishtest
-----------

- Remove no longer existing -L option.

===========================
Changes from 2.1.3 to 2.1.4
===========================

varnishd
--------

- An embarrassing typo in the new binary heap layout caused inflated obj/objcore/objhdr counts and could cause odd problems when the LRU expunge mechanism was invoked. This has been fixed.
- We now have updated documentation in the reStructuredText format. Manual pages and reference documentation are both built from this.
- We now include a DNS director which uses DNS for choosing which backend to route requests to. Please see the documentation for more details.
- If you restarted a request, the HTTP header X-Forwarded-For would be updated multiple times. This has been fixed.
- If a VCL contained a % sign, and the vcl.show CLI command was used, varnishd would crash. This has been fixed.
- When doing a pass operation, we would remove the Content-Length, Age and Proxy-Auth headers. We are no longer doing this.
- now has a string representation, making it easier to construct Expires headers in VCL.
- In a high traffic environment, we would sometimes reuse a file descriptor before flushing the logs from a worker thread to the shared log buffer. This would cause confusion in some of the tools. This has been fixed by explicitly flushing the log when a backend connection is closed.
- If the communication between the management and the child process gets out of sync, we have no way to recover. Previously, varnishd would be confused, but we now just kill the child and restart it.
- If the backend closes the connection on us just as we sent a request to it, we retry the request. This should solve some interoperability problems with Apache and the mpm-itk multi processing module.
- varnishd now only provides help output that the current CLI session is authenticated for.
- If the backend does not tell us which length indication it is using, we now assume the resource ends at EOF.
- The client director now has a variable client.identity which is used to choose which backend should receive a given request.
- The Solaris port waiter has been updated, and other portability fixes for Solaris have been made.
- There was a corner case in the close-down processing of pipes; this has now been fixed.
- Previously, if we stopped polling a backend which was sick, it never got marked as healthy. This has now been changed.
- It is now possible to specify ports as part of the .host field in VCL.
- The synthetic counters were not locked properly, and so the sms\_ counters could underflow. This has now been fixed.
- The value of obj.status as a string in vcl\_error would not be correct in all cases. This has been fixed.
- Varnish would try to trim storage segments completely filled when using the malloc stevedore and the object was received with chunked encoding. This has been fixed.
- If a buggy backend sends us a Vary header with two colons, we would previously abort. We now rather fix this up and ignore the extra colon.
- req.hash\_always\_miss and req.hash\_ignore\_busy have been added, to make preloading or periodically refreshing content work better.

varnishncsa
-----------

- varnishncsa would in some cases be confused by ESI requests and output invalid lines. This has now been fixed.

varnishlog
----------

- varnishlog now allows -o and -u together.

varnishtop
----------

- varnishtop would crash on 32 bit architectures. This has been fixed.

libvarnishapi
-------------

- Regex inclusion and exclusion had problems with matching particular parts of the string being matched. This has been fixed.

===========================
Changes from 2.1.2 to 2.1.3
===========================

varnishd
--------

- Improve scalability of critbit.
- The critbit hash algorithm has now been tightened to make sure the tree is in a consistent state at all points, and the time we wait for an object to cool off after it is eligible for garbage collection has been tweaked.
- Add log command to VCL. This emits a VCL\_log entry into the shared memory log.
- Only emit Length and ReqEnd log entries if we actually have an XID. This should get rid of some empty log lines in varnishncsa.
- Destroy directors in a predictable fashion, namely reverse of creation order.
- Fix bug when ESI elements spanned storage elements causing a panic.
- In some cases, the VCL compiler would panic instead of giving sensible messages. This has now been fixed.
- Correct an off-by-one error when the requested range exceeds the size of an object.
- Handle requests for the end of an object correctly.
- Allow tabulator characters in the third field of the first line of HTTP requests.
- On Solaris, if the remote end sends us an RST, all system calls related to that socket will return EINVAL. We now handle this better.

libvarnishapi
-------------

- The -X parameter didn't work correctly. This has been fixed.

===========================
Changes from 2.1.1 to 2.1.2
===========================

varnishd
--------

- When adding Range support for 2.1.1, we accidentally introduced a bug which would append garbage to objects larger than the chunk size, by default 128k. Browsers would do the right thing due to Content-Length, but some load balancers would get very confused.

===========================
Changes from 2.1.0 to 2.1.1
===========================

varnishd
--------

- The changelog in 2.1.0 included syntax errors, causing the generated changelog to be empty.
- The help text for default\_grace was wrongly formatted and included a syntax error. This has now been fixed.
- varnishd now closes the file descriptor used to read the management secret file (from the -S parameter).
- The child would previously try to close every valid file descriptor, something which could cause problems if the file descriptor ulimit was set too high. We now keep track of all the file descriptors we open and only close up to that number.
- ESI was partially broken in 2.1.0 due to a bug in the rollback of session workspace. This has been fixed.
- Reject the authcommand rather than crash if there is no -S parameter given.
- Align pointers in allocated objects. This will in theory make Varnish a tiny bit faster at the expense of slightly more memory usage.
- Ensure the master process's process id is updated in the shared memory log file after we go into the background.
- HEAD requests would be converted to GET requests too early, which affected pass and pipe. This has been fixed.
- Update the documentation to point out that the TTL is no longer taken into account to decide whether an object is cacheable or not. - Add support for completely obliterating an object and all variants of it. Currently, this has to be done using inline C. - Add experimental support for the Range header. This has to be enabled using the parameter http\_range\_support. - The critbit hasher could get into a deadlock and had a race condition. Both those have now been fixed. varnishsizes -----------~ - varnishsizes, which is like varnishhist, but for the length of objects, has been added.. =========================== Changes from 2.0.6 to 2.1.0 =========================== varnishd -------- - Persistent storage is now experimentally supported using the persistent stevedore. It has the same command line arguments as the file stevedore. - obj.\* is now called beresp.\* in vcl\_fetch, and obj.\* is now read-only. - The regular expression engine is now PCRE instead of POSIX regular expressions. - req.\* is now available in vcl\_deliver. - Add saint mode where we can attempt to grace an object if we don't like the backend response for some reason. Related, add saintmode\_threshold which is the threshold for the number of objects to be added to the trouble list before the backend is considered sick. - Add a new hashing method called critbit. This autoscales and should work better on large object workloads than the classic hash. Critbit has been made the default hash algorithm. - When closing connections, we experimented with sending RST to free up load balancers and free up threads more quickly. This caused some problems with NAT routers and so has been reverted for now. - Add thread that checks objects against ban list in order to prevent ban list from growing forever. Note that this needs purges to be written so they don't depend on req.\*. Enabled by setting ban\_lurker\_sleep to a nonzero value. - The shared memory log file format was limited to maximum 64k simultaneous connections. This is now a 32 bit field which removes this limitation. - Remove obj\_workspace, this is now sized automatically. - Rename acceptors to waiters - vcl\_prefetch has been removed. It was never fully implemented. - Add support for authenticating CLI connections. - Add hash director that chooses which backend to use depending on req.hash. - Add client director that chooses which backend to use depending on the client's IP address. Note that this ignores the X-Forwarded-For header. - varnishd now displays a banner by default when you connect to the CLI. - Increase performance somewhat by moving statistics gathering into a per-worker structure that is regularly flushed to the global stats. - Make sure we store the header and body of object together. This may in some cases improve performance and is needed for persistence. - Remove client-side address accounting. It was never used for anything and presented a performance problem. - Add a timestamp to bans, so you can know how old they are. - Quite a few people got confused over the warning about not being able to lock the shared memory log into RAM, so stop warning about that. - Change the default CLI timeout to 10 seconds. - We previously forced all inserts into the cache to be GET requests. This has been changed to allow POST as well in order to be able to implement purge-on-POST semantics. - The CLI command stats now only lists non-zero values. - The CLI command stats now only lists non-zero values. - Use daemon(3) from libcompat on Darwin. 
- Remove vcl\_discard as it causes too much complexity and never actually worked particularly well. - Remove vcl\_timeout as it causes too much complexity and never actually worked particularly well. - Update the documentation so it refers to sess\_workspace, not http\_workspace. - Document the -i switch to varnishd as well as the server.identity and server.hostname VCL variables. - purge.hash is now deprecated and no longer shown in help listings. - When processing ESI, replace the five mandatory XML entities when we encounter them. - Add string representations of time and relative time. - Add locking for n\_vbe\_conn to make it stop underflowing. - When ESI-processing content, check for illegal XML character entities. - Varnish can now connect its CLI to a remote instance when starting up, rather than just being connected to. - It is no longer needed to specify the maximum number of HTTP headers to allow from backends. This is now a run-time parameter. - The X-Forwarded-For header is now generated by vcl\_recv rather than the C code. - It is now possible to not send all CLI traffic to syslog. - It is now possible to not send all CLI traffic to syslog. - In the case of varnish crashing, it now outputs a identifying string with the OS, OS revision, architecture and storage parameters together with the backtrace. - Use exponential backoff when we run out of file descriptors or sessions. - Allow setting backend timeouts to zero. - Count uptime in the shared memory log. - Try to detect the case of two running varnishes with the same shmlog and storage by writing the master and child process ids to the shmlog and refusing to start if they are still running. - Make sure to use EOF mode when serving ESI content to HTTP/1.0 clients. - Make sure we close the connection if it either sends Connection: close or it is a HTTP/1.0 backend that does not send Connection: keep-alive. - Increase the default session workspace to 64k on 64-bit systems. - Make the epoll waiter use level triggering, not edge triggering as edge triggering caused problems on very busy servers. - Handle unforeseen client disconnections better on Solaris. - Make session lingering apply to new sessions, not just reused sessions. varnishstat ----------- - Make use of the new uptime field in the shared memory log rather than synthesizing it from the start time. varnishlog ---------- - Exit at the end of the file when started with -d. varnishadm ---------- - varnishadm can now have a timeout when trying to connect to the running varnishd. - varnishadm now knows how to respond to the secret from a secured varnishd =========================== Changes from 2.0.5 to 2.0.6 =========================== varnishd -------- - 2.0.5 had an off-by-one error in the ESI handling causing includes to fail a large part of the time. This has now been fixed. - Try harder to not confuse backends when sending them backend probes. We half-closed the connection, something some backends thought meant we had dropped the connection. Stop doing so, and add the capability for specifying the expected response code. - In 2.0.5, session lingering was turned on. This caused statistics to not be counted often enough in some cases. This has now been fixed. - Avoid triggering an assert if the other end closes the connection while we are lingering and waiting for another request from them. - When generating backtraces, prefer the built-in backtrace function if such exists. This fixes a problem compiling 2.0.5 on Solaris. 
- Make it possible to specify the per-thread stack size. This might be useful on 32 bit systems with their limited address space.
- Document the -C option to varnishd.

===========================
Changes from 2.0.4 to 2.0.5
===========================

varnishd
--------

- Handle object workspace overruns better.
- Allow turning off ESI processing per request by using set req.esi = off.
- Tell the kernel that we expect to use the mmap-ed file in a random fashion. On Linux, this turns off/down readahead and increases performance.
- Make it possible to change the maximum number of HTTP headers we allow by passing --with-max-header-fields=NUM rather than changing the code.
- Implement support for HTTP continuation lines.
- Change how connections are closed and only use SO\_LINGER for orderly connection closure. This should hopefully make worker threads less prone to hangups on network problems.
- Handle multi-element purges correctly. Previously we ended up with parse errors when this was done from VCL.
- Handle illegal responses from the backend better by serving a 503 page rather than panic-ing.
- When we run into an assertion that is not true, Varnish would previously dump a little bit of information about itself. Extend that information with a backtrace. Note that this relies on the varnish binary being unstripped.
- Add a session\_max parameter that limits the maximum number of sessions we keep open before we start dropping new connections summarily.
- Try to consume less memory when doing ESI processing by properly rolling back used workspace after processing an object. This should make it possible to turn down sess\_workspace quite a bit for users with ESI-heavy pages.
- Turn on session\_linger by default. Tests have shown that session\_linger helps a fair bit with performance.
- Rewrite the epoll acceptor for better performance. This should lead to both higher processing rates and a higher maximum number of connections on Linux.
- Add If-None-Match support; this gives significant bandwidth savings for users with compliant browsers.
- RFC2616 specifies that ETag, Content-Location, Expires, Cache-Control and Vary should be emitted when delivering a response with the 304 response code.
- Various fixes which make Varnish compile and work on AIX.
- Turn on TCP\_DEFER\_ACCEPT on Linux. This should make us less susceptible to denial of service attacks as well as give us slightly better performance.
- Add an .initial property to the backend probe specification. This is the number of good probes we pretend to have seen. The default is one less than .threshold, which means the first probe will decide if we consider the backend healthy.
- Make it possible to compare strings against other string-like objects, not just plain strings. This allows you to compare two headers, for instance.
- When support for restart in vcl\_error was added, there was no check to prevent infinite recursion. This has now been fixed.
- Turn on purge\_dups by default. This should make us consume less memory when there are many bans for the same pattern added.
- Add a new log tag called FetchError which tries to explain why we could not fetch an object from the backend.
- Change the default srcaddr\_ttl to 0. It is not used by anything and has been removed in the development version. This will increase performance somewhat.

varnishtop
----------

- varnishtop did not handle variable-length log fields correctly. This is now fixed.
- varnishtop previously did not print the name of the tag, which made it very hard to understand.
We now print out the tag name. =========================== Changes from 2.0.3 to 2.0.4 =========================== varnishd -------- - Make Varnish more portable by pulling in fixes for Solaris and NetBSD. - Correct description of -a in the manual page. - Ensure we are compiling in C99 mode. - If error was called with a null reason, we would crash on Solaris. Make sure this no longer happens. - Varnish used to crash if you asked it to use a non-existent waiter. This has now been fixed. - Add documentation to the default VCL explaining that using Connection: close in vcl\_pipe is generally a good idea. - Add minimal facility for dealing with TELNET option negotiation by returning WONT to DO and DONT requests. - If the backend is unhealthy, use a graced object if one is available. - Make server.hostname and server.identity available to VCL. The latter can be set with the -i parameter to varnishd. - Make restart available from vcl\_error. - Previously, only the TTL of an object was considered in whether it would be marked as cacheable. This has been changed to take the grace into consideration as well. - Previously, if an included ESI fragment had a zero size, we would send out a zero-sized chunk which signifies end-of-transmission. We now ignore zero-sized chunks. - We accidentally slept for far too long when we reached the maximum number of open file descriptors. This has been corrected and accept\_fd\_holdoff now works correctly. - Previously, when ESI processing, we did not look at the full length, but stopped at the first NULL byte. We no longer do that, enabling ESI processing of binary data. varnishtest ----------- - Make sure system "..." returns successfully to ensure test failures do not go unnoticed. - Make it possible to send NULL bytes through the testing framework. =========================== Changes from 2.0.2 to 2.0.3 =========================== varnishd -------- - Handle If-Modified-Since and ESI sub-objects better, fixing a problem where we sometimes neglected to insert included objects. - restart in vcl\_hit is now supported. - Setting the TTL of an object to 0 seconds would sometimes cause it to be delivered for up to one second - epsilon. This has been corrected and we should now never deliver those objects to other clients. - The malloc storage backend now prints the maximum storage size, just like the file backend. - Various small documentation bugs have been fixed. - Varnish did not set a default interval for backend probes, causing it to poll the backend continuously. This has been corrected. - Allow "true" and "false" when setting boolean parameters, in addition to on/off, enable/disable and yes/no. - Default to always talking HTTP 1.1 with the backend. - Varnish did not make sure the file it was loading was a regular file. This could cause Varnish to crash if it was asked to load a directory or other non-regular file. We now check that the file is a regular file before loading it. - The binary heap used for expiry processing had scalability problems. Work around this by using stripes of a fixed size, which should make this scale better, particularly when starting up and having lots of objects. - When we imported the jemalloc library into the Varnish tree, it did not compile without warnings. This has now been fixed. - Varnish took a very long time to detect that the backend did not respond. To remedy this, we now have read timeouts in addition to the connect timeout. Both the first\_byte\_timeout and the between\_bytes\_timeout defaults to 60 seconds. 
The connect timeout is no longer in milliseconds, but rather in seconds.
- Previously, the VCL to C conversion as well as the invocation of the C compiler was done in the management process. This is now done in a separate sub-process. This prevents any bugs in the VCL compiler from affecting the management process.
- Chunked encoding headers were counted in the statistics for header bytes. They no longer are.
- ESI processed objects were not counted in the statistics for body bytes. They now are.
- It is now possible to adjust the maximum record length of log entries in the shmlog by tuning the shm\_reclen parameter.
- The management parameters listed in the CLI were not sorted, which made it hard to find the parameter you were looking for. They are now sorted, which should make this easier.
- Add a new hashing type, "critbit", which uses a lock-less tree based lookup algorithm. This is experimental and should not be enabled in production environments without proper testing.
- The session workspace had a default size of 8k. It is now 16k, which should make VCLs where many headers are processed less prone to panics.
- We have seen that people seem to be confused as to which actions in the different VCL functions return and which ones don't. Add a new syntax return(action) to make this more explicit. The old syntax is still supported. A short sketch of the new syntax appears at the end of this section.
- Varnish would return an error if any of the management IPs listed in the -T parameter could not be listened to. We now only return an error if none of them can be listened to.
- In the case of the backend or client giving us too many parameters, we used to just ignore the overflowing headers. This is problematic if you end up ignoring Content-Length, Transfer-Encoding and similar headers. We now give out a 400 error to the client if it sends us too many and 503 if we get too many from the backend.
- We used to panic if we got a too large chunked header. This behaviour has been changed into just failing the transaction.
- Varnish now supports an extended purge method where it is possible to do purge req.http.host ~ "web1.com" && req.url ~ "\\.png" and similar. See the documentation for details.
- Under heavy load, Varnish would sometimes crash when trying to update the per-request statistics. This has now been fixed.
- It is now possible to not save the hash string in the session and object workspace. This will save a lot of memory on sites with many small objects. Disabling the purge\_hash parameter also disables the purge.hash facility.
- Varnish now supports !~ as a "no match" regular expression matcher.
- In some cases, you could get serialised access to "pass" objects. We now make it default to the default\_ttl value; this can be overridden in vcl\_fetch.
- Varnish did not check the syntax of regsub calls properly. More checking has been added.
- If the client closed the connection while Varnish was processing ESI elements, Varnish would crash while trying to write the object to the client. We now check if the client has closed the connection.
- The ESI parser had a bug where it would crash if an XML comment would span storage segments. This has been fixed.

VCL Manual page
--------------~

- The documentation on how capturing parentheses work was wrong. This has been corrected.
- Grace has now been documented.

varnishreplay
-------------

- varnishreplay did not work correctly on Linux, due to a too small stack. This has now been fixed.
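To illustrate the ``return(action)`` syntax and the ``!~`` operator mentioned in the 2.0.4 notes above, here is a minimal, hypothetical ``vcl_recv`` sketch; the URL pattern is made up for illustration and is not part of the release notes::

    sub vcl_recv {
        # bypass the cache for anything that does not look like a static asset
        if (req.url !~ "\.(css|js|png|gif|jpg)$") {
            return(pass);    # new explicit form; the bare "pass;" action still works
        }
        return(lookup);
    }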
===========================
Changes from 2.0.1 to 2.0.2
===========================

varnishd
--------

- In high-load situations, when using ESI, varnishd would sometimes mishandle objects and crash. This has been worked around.

varnishreplay
-------------

- varnishreplay did not work correctly on Linux, due to a too small stack. This has now been fixed.

=========================
Changes from 2.0 to 2.0.1
=========================

varnishd
--------

- When receiving a garbled HTTP request, varnishd would sometimes crash. This has been fixed.
- There was an off-by-one error in the ACL compilation. Now fixed.

Red Hat spec file
----------------~

- A typo in the spec file made the .rpm file names wrong.

=========================
Changes from 1.1.2 to 2.0
=========================

varnishd
--------

- Only look for sendfile on platforms where we know how to use it, which is FreeBSD for now.
- Make it possible to adjust the shared memory log size and bump the size from 8MB to 80MB.
- Fix up the handling of request bodies to better match what RFC2616 mandates. This makes PUT, DELETE, OPTIONS and TRACE work in addition to POST.
- Change how backends are defined, to a constant structural definition style. See https://www.varnish-cache.org/wiki/VclSyntaxChanges for the details; a brief sketch of the new style also appears below, after the varnishtest notes.
- Add directors, which wrap backends. Currently, there's a random director and a round-robin director.
- Add "grace", which is for how long an object will be served, even after it has expired. To use this, both the object's and the request's grace parameter need to be set.
- Manual pages have been updated for new VCL syntax and varnishd options.
- Man pages and other docs have been updated.
- The shared memory log file is now locked in memory, so it should not be paged out to disk.
- We now handle Vary correctly, as well as Expect.
- ESI include support is implemented.
- Make it possible to limit how much memory the malloc storage backend uses.
- Solaris is now supported.
- There is now a regsuball function, which works like regsub except it replaces all occurrences of the regex, not just the first.
- Backend and director declarations can have a .connect\_timeout parameter, which tells us how long to wait for a successful connection.
- It is now possible to select the acceptor to use by changing the acceptor parameter.
- Backends can have probes associated with them, which can be checked with req.backend.health in VCL as well as being handled by directors which do load-balancing.
- Support larger-than-2GB files also on 32 bit hosts. Please note that this does not mean we can support caches bigger than 2GB, it just means logfiles and similar can be bigger.
- In some cases, we would remove the wrong header when we were stripping Content-Transfer-Encoding headers from a request. This has been fixed.
- Backends can have a .max\_connections associated with them.
- On Linux, we need to set the dumpable bit on the child if we want core dumps. Make sure it's set.
- Doing purge.hash() with an empty string would cause us to dump core. Fixed so we don't do that any more.
- We ran into a problem with glibc's malloc on Linux where it seemed like it failed to ever give memory back to the OS, causing the system to swap. We have now switched to jemalloc which appears not to have this problem.
- max\_restarts was never checked, so we always ended up running out of workspace. Now, vcl\_error is called when we reach max\_restarts.

varnishtest
-----------

- varnishtest is a tool to do correctness tests of varnishd. The test suite is run by using make check.
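As a rough illustration of the structural backend definition style and the directors referred to in the 2.0 varnishd notes above, a hypothetical configuration of that era might look like this; the host names, ports, timeouts and weights are invented for the example::

    backend www1 {
        .host = "backend1.example.com";
        .port = "8080";
        .connect_timeout = 2s;
        .max_connections = 100;
    }

    backend www2 {
        .host = "backend2.example.com";
        .port = "8080";
    }

    # a random director wrapping the two backends defined above
    director pool random {
        { .backend = www1; .weight = 2; }
        { .backend = www2; .weight = 1; }
    }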
varnishtop ---------- - We now set the field widths dynamically based on the size of the terminal and the name of the longest field. varnishstat ----------- - varnishstat -1 now displays the uptime too. varnishncsa ----------- - varnishncsa now does fflush after each write. This makes tail -f work correctly, as well as avoiding broken lines in the log file. - It is possible to get varnishncsa to output the X-Forwarded-For instead of the client IP by passing -f to it. Build system -----------~ - Various sanity checks have been added to configure, it now complains about no ncurses or if SO\_RCVTIMEO or SO\_SNDTIMEO are non-functional. It also aborts if there's no working acceptor mechanism - The C compiler invocation is decided by the configure script and can now be overridden by passing VCC\_CC when running configure. =========================== Changes from 1.1.1 to 1.1.2 =========================== varnishd -------- - When switching to a new VCL configuration, a race condition exists which may cause Varnish to reference a backend which no longer exists (see `ticket #144 `_). This race condition has not been entirely eliminated, but it should occur less frequently. - When dropping a TCP session before any requests were processed, an assertion would be triggered due to an uninitialized timestamp (see `ticket #132 `_). The timestamp is now correctly initialized. - Varnish will now correctly generate a Date: header for every response instead of copying the one it got from the backend (see `ticket #157 `_). - Comparisons in VCL which involve a non-existent string (usually a header which is not present in the request or object being processed) would cause a NULL pointer dereference; now the comparison will simply fail. - A bug in the VCL compiler which would cause a double-free when processing include directives has been fixed. - A resource leak in the worker thread management code has been fixed. - When connecting to a backend, Varnish will usually get the address from a cache. When the cache is refreshed, existing connections may end up with a reference to an address structure which no longer exists, resulting in a crash. This race condition has been somewhat mitigated, but not entirely eliminated (see `ticket #144 `_.) - Varnish will now pass the correct protocol version in pipe mode: the backend will get what the client sent, and vice versa. - The core of the pipe mode code has been rewritten to increase robustness and eliminate spurious error messages when either end closes the connection in a manner Varnish did not anticipate. - A memory leak in the backend code has been plugged. - When using the kqueue acceptor, if a client shuts down the request side of the connection (as many clients do after sending their final request), it was possible for the acceptor code to receive the EOF event and recycle the session while the last request was still being serviced, resulting in a assertion failure and a crash when the worker thread later tried to delete the session. This should no longer happen (see `ticket #162 `_.) - A mismatch between the recorded length of a cached object and the amount of data actually present in cache for that object can occasionally occur (see `ticket #167 `_.) This has been partially fixed, but may still occur for error pages generated by Varnish when a problem arises while retrieving an object from the backend. - Some socket-related system calls may return unexpected error codes when operating on a TCP connection that has been shut down at the other end. 
These error codes would previously cause assertion failures, but are now recognized as harmless conditions. varnishhist ----------- - Pressing 0 though 9 while varnishhist is running will change the refresh interval to the corresponding power of two, in seconds. varnishncsa ----------- - The varnishncsa tool can now daemonize and write a PID file like varnishlog, using the same command-line options. It will also reopen its output upon receipt of a SIGHUP if invoked with -w. varnishstat ----------- - Pressing 0 though 9 while varnishstat is running will change the refresh interval to the corresponding power of two, in seconds. Build system -----------~ - Varnish's has been modified to avoid conflicts with on platforms where the latter is included indirectly through system headers. - Several steps have been taken towards Solaris support, but this is not yet complete. - When configure was run without an explicit prefix, Varnish's idea of the default state directory would be garbage and a state directory would have to be specified manually with -n. This has been corrected. ========================= Changes from 1.1 to 1.1.1 ========================= varnishd -------- - The code required to allow VCL to read obj.status, which had accidentally been left out, has now been added. - Varnish will now always include a Connection: header in its reply to the client, to avoid possible misunderstandings. - A bug that triggered an assertion failure when generating synthetic error documents has been corrected. - A new VCL function, purge\_url, provides the same functionality as the url.purge management command. - Previously, Varnish assumed that the response body should be sent only if the request method was GET. This was a problem for custom request methods (such as PURGE), so the logic has been changed to always send the response body except in the specific case of a HEAD request. - Changes to run-time parameters are now correctly propagated to the child process. - Due to the way run-time parameters are initialized at startup, varnishd previously required the nobody user and the nogroup group to exist even if a different user and group were specified on the command line. This has been corrected. - Under certain conditions, the VCL compiler would carry on after a syntax error instead of exiting after reporting the error. This has been corrected. - The manner in which the hash string is assembled has been modified to reduce memory usage and memory-to-memory copying. - Before calling vcl\_miss, Varnish assembles a tentative request object for the backend request which will usually follow. This object would be leaked if vcl\_miss returned anything else than fetch. This has been corrected. - The code necessary to handle an error return from vcl\_fetch and vcl\_deliver had inadvertantly been left out. This has been corrected. - Varnish no longer prints a spurious "child died" message (the result of reaping the compiler process) after compiling a new VCL configuration. - Under some circumstances, due to an error in the workspace management code, Varnish would lose the "tail" of a request, i.e. the part of the request that has been received from the client but not yet processed. The most obvious symptom of this was that POST requests would work with some browsers but not others, depending on details of the browser's HTTP implementation. This has been corrected. 
- On some platforms, due to incorrect assumptions in the CLI code, the management process would crash while processing commands received over the management port. This has been corrected. Build system -----------~ - The top-level Makefile will now honor $DESTDIR when creating the state directory. - The Debian and RedHat packages are now split into three (main / lib / devel) as is customary. - A number of compile-time and run-time portability issues have been addressed. - The autogen.sh script had workarounds for problems with the GNU autotools on FreeBSD; these are no longer needed and have been removed. - The libcompat library has been renamed to libvarnishcompat and is now dynamic rather than static. This simplifies the build process and resolves an issue with the Mac OS X linker. ========================= Changes from 1.0.4 to 1.1 ========================= varnishd -------- - Readability of the C source code generated from VCL code has been improved. - Equality (==) and inequality (!=) operators have been implemented for IP addresses (which previously could only be compared using ACLs). - The address of the listening socket on which the client connection was received is now available to VCL as the server.ip variable. - Each object's hash key is now computed based on a string which is available to VCL as req.hash. A VCL hook named vcl\_hash has been added to allow VCL scripts to control hash generation (for instance, whether or not to include the value of the Host: header in the hash). - The setup code for listening sockets has been modified to detect and handle situations where a host name resolves to multiple IP addresses. It will now attempt to bind to each IP address separately, and report a failure only if none of them worked. - Network or protocol errors that occur while retrieving an object from a backend server now result in a synthetic error page being inserted into the cache with a 30-second TTL. This should help avoid driving an overburdened backend server into the ground by repeatedly requesting the same object. - The child process will now drop root privileges immediately upon startup. The user and group to use are specified with the user and group run-time parameters, which default to nobody and nogroup, respectively. Other changes have been made in an effort to increase the isolation between parent and child, and reduce the impact of a compromise of the child process. - Objects which are received from the backend with a Vary: header are now stored separately according to the values of the headers specified in Vary:. This allows Varnish to correctly cache e.g. compressed and uncompressed versions of the same object. - Each Varnish instance now has a name, which by default is the host name of the machine it runs on, but can be any string that would be valid as a relative or absolute directory name. It is used to construct the name of a directory in which the server state as well as all temporary files are stored. This makes it possible to run multiple Varnish instances on the same machine without conflict. - When invoked with the -C option, varnishd will now not just translate the VCL code to C, but also compile the C code and attempt to load the resulting shared object. - Attempts by VCL code to reference a variable outside its scope or to assign a value to a read-only variable will now result in compile-time rather than run-time errors. - The new command-line option -F will make varnishd run in the foreground, without enabling debugging. 
- New VCL variables have been introduced to allow inspection and manipulation of the request sent to the backend (bereq.request, bereq.url, bereq.proto and bereq.http) and the response to the client (resp.proto, resp.status, resp.response and resp.http). - Statistics from the storage code (including the amount of data and free space in the cache) are now available to varnishstat and other statistics-gathering tools. - Objects are now kept on an LRU list which is kept loosely up-to-date (to within a few seconds). When cache runs out, the objects at the tail end of the LRU list are discarded one by one until there is enough space for the freshly requested object(s). A VCL hook, vcl\_discard, is allowed to inspect each object and determine its fate by returning either keep or discard. - A new VCL hook, vcl\_deliver, provides a chance to adjust the response before it is sent to the client. - A new management command, vcl.show, displays the VCL source code of any loaded configuration. - A new VCL variable, now, provides VCL scripts with the current time in seconds since the epoch. - A new VCL variable, obj.lastuse, reflects the time in seconds since the object in question was last used. - VCL scripts can now add an HTTP header (or modify the value of an existing one) by assigning a value to the corresponding variable, and strip an HTTP header by using the remove keyword. - VCL scripts can now modify the HTTP status code of cached objects (obj.status) and responses (resp.status) - Numeric and other non-textual variables in VCL can now be assigned to textual variables; they will be converted as needed. - VCL scripts can now apply regular expression substitutions to textual variables using the regsub function. - A new management command, status, returns the state of the child. - Varnish will now build and run on Mac OS X. varnishadm ---------- - This is a new utility which sends a single command to a Varnish server's management port and prints the result to stdout, greatly simplifying the use of the management port from scripts. varnishhist ----------- - The user interface has been greatly improved; the histogram will be automatically rescaled and redrawn when the window size changes, and it is updated regularly rather than at a rate dependent on the amount of log data gathered. In addition, the name of the Varnish instance being watched is displayed in the upper right corner. varnishncsa ----------- - In addition to client traffic, varnishncsa can now also process log data from backend traffic. - A bug that would cause varnishncsa to segfault when it encountered an empty HTTP header in the log file has been fixed. varnishreplay ------------- - This new utility will attempt to recreate the HTTP traffic which resulted in the raw Varnish log data which it is fed. varnishstat ----------- - Don't print lifetime averages when it doesn't make any sense, for instance, there is no point in dividing the amount in bytes of free cache space by the lifetime in seconds of the varnishd process. - The user interface has been greatly improved; varnishstat will no longer print more than fits in the terminal, and will respond correctly to window resize events. The output produced in one-shot mode has been modified to include symbolic names for each entry. In addition, the name of the Varnish instance being watched is displayed in the upper right corner in curses mode. 
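To make the 1.1 VCL additions listed above a little more concrete (the resp.* variables, header assignment and removal, and regsub), a hypothetical ``vcl_deliver`` might look roughly like this; the header names are made up for illustration::

    sub vcl_deliver {
        # drop an internal header before the response leaves Varnish
        remove resp.http.X-Internal-Debug;

        # set a header, using regsub() to rewrite part of an existing value
        set resp.http.X-Served-By = regsub(resp.http.Server, " .*$", "");
    }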
varnishtop ---------- - The user interface has been greatly improved; varnishtop will now respond correctly to window resize events, and one-shot mode (-1) actually works. In addition, the name of the Varnish instance being watched is displayed in the upper right corner in curses mode. =========================== Changes from 1.0.3 to 1.0.4 =========================== varnishd -------- - The request workflow has been redesigned to simplify request processing and eliminate code duplication. All codepaths which need to speak HTTP now share a single implementation of the protocol. Some new VCL hooks have been added, though they aren't much use yet. The only real user-visible change should be that Varnish now handles persistent backend connections correctly (see `ticket #56 `_). - Support for multiple listen addresses has been added. - An "include" facility has been added to VCL, allowing VCL code to pull in code fragments from multiple files. - Multiple definitions of the same VCL function are now concatenated into one in the order in which they appear in the source. This simplifies the mechanism for falling back to the built-in default for cases which aren't handled in custom code, and facilitates modularization. - The code used to format management command arguments before passing them on to the child process would underestimate the amount of space needed to hold each argument once quotes and special characters were properly escaped, resulting in a buffer overflow. This has been corrected. - The VCL compiler has been overhauled. Several memory leaks have been plugged, and error detection and reporting has been improved throughout. Parts of the compiler have been refactored to simplify future extension of the language. - A bug in the VCL compiler which resulted in incorrect parsing of the decrement (-=) operator has been fixed. - A new -C command-line option has been added which causes varnishd to compile the VCL code (either from a file specified with -f or the built-in default), print the resulting C code and exit. - When processing a backend response using chunked encoding, if a chunk header crosses a read buffer boundary, read additional bytes from the backend connection until the chunk header is complete. - A new ping\_interval run-time parameter controls how often the management process checks that the worker process is alive. - A bug which would cause the worker process to dereference a NULL pointer and crash if the backend did not respond has been fixed. - In some cases, such as when they are used by AJAX applications to circumvent Internet Explorer's over-eager disk cache, it may be desirable to cache POST requests. However, the code path responsible for delivering objects from cache would only transmit the response body when replying to a GET request. This has been extended to also apply to POST. This should be revisited at a later date to allow VCL code to control whether the body is delivered. - Varnish now respects Cache-control: s-maxage, and prefers it to Cache-control: max-age if both are present. This should be revisited at a later date to allow VCL code to control which headers are used and how they are interpreted. - When loading a new VCL script, the management process will now load the compiled object to verify that it links correctly before instructing the worker process to load it. - A new -P command-line options has been added which causes varnishd to create a PID file. 
- The sendfile\_threshold run-time parameter's default value has been set to infinity after a variety of sendfile()-related bugs were discovered on several platforms. varnishlog ---------- - When grouping log entries by request, varnishlog attempts to collapse the log entry for a call to a VCL function with the log entry for the corresponding return from VCL. When two VCL calls were made in succession, varnishlog would incorrectly omit the newline between the two calls (see `ticket #95 `_). - New -D and -P command-line options have been added to daemonize and create a pidfile, respectively. - The flag that is raised upon reception of a SIGHUP has been marked volatile so it will not be optimized away by the compiler. varnishncsa ----------- - The formatting callback has been largely rewritten for clarity, robustness and efficiency. If a request included a Host: header, construct and output an absolute URL. This makes varnishncsa output from servers which handle multiple virtual hosts far more useful. - The flag that is raised upon reception of a SIGHUP has been marked volatile so it will not be optimized away by the compiler. Documentation ------------- - The documentation, especially the VCL documentation, has been greatly extended and improved. Build system ------------ - The name and location of the curses or ncurses library is now correctly detected by the configure script instead of being hardcoded into affected Makefiles. This allows Varnish to build correctly on a wider range of platforms. - Compatibility shims for clock\_gettime() are now correctly applied where needed, allowing Varnish to build on MacOS X. - The autogen.sh script will now correctly detect and warn about automake versions which are known not to work correctly. varnish-7.5.0/doc/graphviz/000077500000000000000000000000001457605730600156115ustar00rootroot00000000000000varnish-7.5.0/doc/graphviz/Makefile.am000066400000000000000000000020121457605730600176400ustar00rootroot00000000000000# Makefile for graphviz outputs SUFFIXES: .dot .pdf .svg # for an out-of-tree build, sphinx needs the output in builddir # XXX is there a better way to do this? .PHONY: link_srcdir link_srcdir: if test "x$(srcdir)" != "x$(builddir)" && \ test ! -f $(builddir)/cache_http1_fsm.svg ; then \ d=`pwd`/$(builddir) ; \ cd $(srcdir) && find . -name \*.svg -type f | \ cpio -dmp $${d} || true ; \ fi dist-hook: $(MAKE) html # You can set these variables from the command line. # this is a4, letter is 8.5,11 SIZE = 8.4,11.7 EXTRA_DIST = $(srcdir)/*.dot \ $(SVGS) PDFS = \ cache_http1_fsm.pdf \ cache_req_fsm.pdf \ cache_fetch.pdf SVGS = \ cache_http1_fsm.svg \ cache_req_fsm.svg \ cache_fetch.svg CLEANFILES = \ $(PDFS) pdf: $(PDFS) html: $(SVGS) link_srcdir # XXX does not fit onto a4 unless in landscape cache_fetch.pdf: cache_fetch.dot $(AM_V_GEN) $(DOT) -Tpdf -Gsize=$(SIZE) -Grotate=90 $< >$@ .dot.pdf: $(AM_V_GEN) $(DOT) -Tpdf -Gsize=$(SIZE) $< >$@ .dot.svg: $(AM_V_GEN) $(DOT) -Tsvg $< >$@ varnish-7.5.0/doc/graphviz/cache_fetch.dot000066400000000000000000000067351457605730600205500ustar00rootroot00000000000000/* * we should format labels in a readable form like * label=" * {vbf_stp_startfetch:| * {vcl_backend_fetch\{\}|bereq.*}| * {abandon| * fetch}}" * * * ... 
but some servers in the v-c.o build farm use old graphviz 2.26.3 * which cannot handle labels with additional whitespace properly, so * for the time being we need to fall back into dark middle ages and * use illegibly long lines * * -- slink 20141013 */ digraph cache_fetch { margin="0.5" center="1" /*** cache_fetch.c ***/ subgraph cluster_backend { style=filled color=aliceblue RETRY [shape=plaintext] v_b_f_BGFETCH [label="BGFETCH", shape=box, style=filled, color=turquoise] v_b_f_FETCH [label="FETCH", shape=box, style=filled, color=turquoise] v_b_f_BGFETCH -> v_b_f [style=bold,color=green] v_b_f_FETCH -> v_b_f [style=bold,color=blue] v_b_f_FETCH -> v_b_f [style=bold,color=red] RETRY -> v_b_f [color=purple] /* vbf_stp_startfetch() */ v_b_f [ shape=record label="{vbf_stp_startfetch:|{vcl_backend_fetch\{\}|bereq.*}|{error|fail|abandon|fetch}}" ] v_b_f:error:s -> v_b_e v_b_f:fetch:s -> v_b_hdrs [style=bold] v_b_hdrs [ label="send bereq,\nread beresp (headers)"] v_b_hdrs -> v_b_r [style=bold] v_b_hdrs -> v_b_e v_b_r [ shape=record label="{vbf_stp_startfetch:|{vcl_backend_response\{\}|{bereq.*|beresp.*}}|{error|fail|{retry|{max?|ok?}}|abandon|{deliver or pass|{304?|other?}}}}" ] v_b_r:error:s -> v_b_e v_b_r:retry -> v_b_r_retry [color=purple] v_b_r:max -> v_b_e v_b_r:fetch_304:s -> vbf_stp_condfetch v_b_r:non_304:s -> vbf_stp_fetch v_b_r_retry [label="RETRY",shape=plaintext] vbf_stp_fetchbody [ shape=record fontcolor=grey color=grey label="{vbf_stp_fetchbody:|get storage|read body, run VFPs|{fetch_fail?|error?|ok?}}" ] vbf_stp_fetchbody:ok:s -> vbf_stp_fetchend vbf_stp_fetch [ shape=record fontcolor=grey color=grey label="{vbf_stp_fetch:|setup VFPs|get object|{error?|body?}}" ] vbf_stp_fetch:body:s -> vbf_stp_fetchbody vbf_stp_fetch:body:s -> vbf_stp_fetchend vbf_stp_fetchend [ shape=record fontcolor=grey color=grey label="{vbf_stp_fetchend:|finalize object and director|done}" ] vbf_stp_fetchend:done:s -> FETCH_DONE vbf_stp_condfetch [ shape=record fontcolor=grey color=grey label="{vbf_stp_condfetch:|copy obj attr|steal body|{fetch_fail?|ok?}}" ] vbf_stp_condfetch:ok:s -> vbf_stp_fetchend fail [shape=plaintext] fail -> FETCH_FAIL /* vbf_stp_error */ v_b_e [ shape=record label="{vbf_stp_error:|{vcl_backend_error\{\}|{bereq.*|beresp.*}}|{{retry|{fail|max?|ok?}}|abandon|deliver}}}" ] // v_b_e:deliver aka "backend synth" - goes into cache v_b_e:deliver -> FETCH_DONE [label="\"backend synth\""] v_b_e:retry -> v_b_e_retry [color=purple] v_b_e_retry [label="RETRY",shape=plaintext] v_b_e:max:s -> FETCH_FAIL v_b_e_retry [label="RETRY",shape=plaintext] FETCH_DONE [label="FETCH_DONE", shape=box,style=filled,color=turquoise] abandon [shape=plaintext] abandon -> FETCH_FAIL // F_STP_FAIL FETCH_FAIL [label="FETCH_FAIL", shape=box,style=filled,color=turquoise] } } varnish-7.5.0/doc/graphviz/cache_fetch.svg000066400000000000000000000546351457605730600205630ustar00rootroot00000000000000 cache_fetch cluster_backend RETRY RETRY v_b_f vbf_stp_startfetch: vcl_backend_fetch{} bereq.* error fail abandon fetch RETRY->v_b_f v_b_f_BGFETCH BGFETCH v_b_f_BGFETCH->v_b_f v_b_f_FETCH FETCH v_b_f_FETCH->v_b_f v_b_f_FETCH->v_b_f v_b_e vbf_stp_error: vcl_backend_error{} bereq.* beresp.* retry fail max? ok? 
abandon deliver v_b_f:s->v_b_e v_b_hdrs send bereq, read beresp (headers) v_b_f:s->v_b_hdrs FETCH_DONE FETCH_DONE v_b_e:deliver->FETCH_DONE "backend synth" FETCH_FAIL FETCH_FAIL v_b_e:s->FETCH_FAIL v_b_e_retry RETRY v_b_e:retry->v_b_e_retry v_b_hdrs->v_b_e v_b_r vbf_stp_startfetch: vcl_backend_response{} bereq.* beresp.* error fail retry max? ok? abandon deliver or pass 304? other? v_b_hdrs->v_b_r v_b_r:s->v_b_e v_b_r:max->v_b_e v_b_r_retry RETRY v_b_r:retry->v_b_r_retry vbf_stp_condfetch vbf_stp_condfetch: copy obj attr steal body fetch_fail? ok? v_b_r:s->vbf_stp_condfetch vbf_stp_fetch vbf_stp_fetch: setup VFPs get object error? body? v_b_r:s->vbf_stp_fetch vbf_stp_fetchend vbf_stp_fetchend: finalize object and director done vbf_stp_condfetch:s->vbf_stp_fetchend vbf_stp_fetchbody vbf_stp_fetchbody: get storage read body, run VFPs fetch_fail? error? ok? vbf_stp_fetch:s->vbf_stp_fetchbody vbf_stp_fetch:s->vbf_stp_fetchend vbf_stp_fetchbody:s->vbf_stp_fetchend vbf_stp_fetchend:s->FETCH_DONE fail fail fail->FETCH_FAIL abandon abandon abandon->FETCH_FAIL varnish-7.5.0/doc/graphviz/cache_http1_fsm.dot000066400000000000000000000014231457605730600213510ustar00rootroot00000000000000 digraph vcl_center { margin="0.5" center="1" acceptor -> http1_wait [label=S_STP_NEWREQ, align=center] hash -> CNT_Request [label="Busy object\nS_STP_WORKING\nR_STP_LOOKUP" color=blue] disembark -> hash [style=dotted, color=blue] http1_wait -> CNT_Request [label="S_STP_WORKING\nR_STP_RECV"] http1_wait -> disembark [label="Session close"] http1_wait -> disembark [label="Timeout" color=green] disembark -> waiter [style=dotted, color=green] waiter -> http1_wait [color=green] CNT_Request -> disembark [label="Busy object\nS_STP_WORKING\nR_STP_LOOKUP" color=blue] CNT_Request -> http1_cleanup http1_cleanup -> disembark [label="Session close"] http1_cleanup -> CNT_Request [label="S_STP_WORKING\nR_STP_RECV"] http1_cleanup -> http1_wait [label="S_STP_NEWREQ"] } varnish-7.5.0/doc/graphviz/cache_http1_fsm.svg000066400000000000000000000221141457605730600213620ustar00rootroot00000000000000 vcl_center acceptor acceptor http1_wait http1_wait acceptor->http1_wait S_STP_NEWREQ CNT_Request CNT_Request http1_wait->CNT_Request S_STP_WORKING R_STP_RECV disembark disembark http1_wait->disembark Session close http1_wait->disembark Timeout hash hash hash->CNT_Request Busy object S_STP_WORKING R_STP_LOOKUP CNT_Request->disembark Busy object S_STP_WORKING R_STP_LOOKUP http1_cleanup http1_cleanup CNT_Request->http1_cleanup disembark->hash waiter waiter disembark->waiter waiter->http1_wait http1_cleanup->http1_wait S_STP_NEWREQ http1_cleanup->CNT_Request S_STP_WORKING R_STP_RECV http1_cleanup->disembark Session close varnish-7.5.0/doc/graphviz/cache_req_fsm.dot000066400000000000000000000130341457605730600211010ustar00rootroot00000000000000/* * we should format labels in a readable form like * label="\ * {cnt_deliver:|\ * Filter obj.-\>resp.|\ * {vcl_deliver\{\}|\ * {req.*|resp.*}}|\ * {restart|deliver|synth}}" * * * ... but some servers in the v-c.o build farm use old graphviz 2.26.3 * which cannot handle labels with additional whitespace properly, so * for the time being we need to fall back into dark middle ages and * use illegibly long lines * * -- slink 20141013 */ digraph cache_req_fsm { margin="0.25" ranksep="0.5" center="1" //// XXX does this belong here? 
-- from cache_vcl.c /* vcl_load [label = "vcl.load",shape=plaintext] vcl_load -> init init [ shape=record label=" {VCL_Load:| {vcl_init}| {ok|fail}}" ] init:ok -> ok init:fail -> fail vcl_discard [label = "vcl.discard",shape=plaintext] vcl_discard -> fini fini [ shape=record label=" {VCL_Nuke:| {vcl_fini}| {ok}}" ] fini:ok -> ok */ acceptor [shape=hexagon label="Request received"] label_select [shape=hexagon label="LABEL"] ESI_REQ [shape=hexagon label="ESI request"] RESTART [shape=plaintext] ESI_REQ -> recv SYNTH [shape=plaintext] FAIL [shape=plaintext] acceptor -> recv [style=bold] label_select -> recv [style=bold] subgraph xcluster_deliver { /* cnt_deliver() */ deliver [ shape=record label="{cnt_deliver:|Filter obj.-\>resp.|{vcl_deliver\{\}|{req.*|resp.*}}|{fail|restart|deliver|synth}}" ] deliver:deliver:s -> V1D_Deliver [style=bold,color=green] deliver:deliver:s -> V1D_Deliver [style=bold,color=red] deliver:deliver:s -> V1D_Deliver [style=bold,color=blue] stream [label="stream?\nbody",style=filled,color=turquoise] stream -> V1D_Deliver [style=dotted] } V1D_Deliver -> DONE /* cnt_synth() */ subgraph xcluster_synth { synth [ shape=record label="{cnt_synth:|{vcl_synth\{\}|{req.*|resp.*}}|{fail|deliver|restart}}" ] FAIL -> synth [color=purple] SYNTH -> synth [color=purple] synth:del:s -> V1D_Deliver [color=purple] } subgraph cluster_backend { style=filled color=aliceblue "see backend graph" [shape=plaintext] node [shape=box, style=filled, color=turquoise] BGFETCH FETCH FETCH_DONE FETCH_FAIL } lookup2:deliver:s -> BGFETCH [label="parallel\nif obj expired", color=green] FETCH_FAIL -> synth [color=purple] FETCH_DONE -> deliver [style=bold,color=red] FETCH_DONE -> deliver [style=bold,color=blue] FETCH -> FETCH_DONE [style=dotted] FETCH -> FETCH_FAIL [style=dotted] /* cnt_lookup() */ subgraph xcluster_lookup { lookup [ shape=record color=grey fontcolor=grey label="{cnt_lookup:|hash lookup|{hit?|miss?|hit-for-miss?|hit-for-pass?|busy?}}" ] lookup2 [ shape=record label="{cnt_lookup:|{vcl_hit\{\}|{req.*|obj.*}}|{fail|deliver|pass|restart|synth}}" ] } lookup:busy:s -> lookup:top:ne [label=" waitinglist", color=grey, fontcolor=grey] lookup:miss:s -> miss [style=bold,color=blue] lookup:hfm:s -> miss [style=bold,color=blue,label=" req.\n is_hitmiss"] lookup:hfp:s -> pass [style=bold,color=red,label=" req.\n is_hitpass"] lookup:h:s -> lookup2 [style=bold,color=green] lookup2:deliver:s -> deliver:n [style=bold,color=green] lookup2:pass:s -> pass [style=bold,color=red] /* cnt_miss */ subgraph xcluster_miss { miss [ shape=record label="{cnt_miss:|{vcl_miss\{\}|req.*}|{fail|fetch|synth|restart|pass}}" ] } miss:fetch:s -> FETCH [style=bold,color=blue] miss:pass:s -> pass [style=bold,color=red] /* cnt_pass */ subgraph xcluster_pass { pass [ shape=record label="{cnt_pass:|{vcl_pass\{\}|req.*}|{fail|fetch|synth|restart}}" ] } pass:fetch:s -> FETCH [style=bold, color=red] /* cnt_pipe */ subgraph xcluster_pipe { pipe [ shape=record label="{cnt_pipe:|filter req.*-\>bereq.*|{vcl_pipe\{\}|{req.*|bereq.*}}|{fail|pipe|synth}}" ] pipe_do [ shape=ellipse label="send bereq,\ncopy bytes until close" ] pipe:pipe -> pipe_do [style=bold,color=orange] } pipe_do -> DONE [style=bold,color=orange] /* cnt_restart */ subgraph xcluster_restart { restart [ shape=record color=grey fontcolor=grey label="{cnt_restart:|{fail|ok?|max_restarts?}}" ] } RESTART -> restart [color=purple] restart:ok:s -> recv restart:max:s -> err_restart [color=purple] err_restart [label="SYNTH",shape=plaintext] /* cnt_recv() */ subgraph xcluster_recv { recv 
[ shape=record label="{cnt_recv:|{vcl_recv\{\}|req.*}|{fail|hash|purge|pass|pipe|restart|synth|vcl}}" ] recv:hash -> hash [style=bold,color=green] hash [ shape=record label="{cnt_recv:|{vcl_hash\{\}|req.*}|{lookup}}" ] } recv:pipe -> hash [style=bold,color=orange] recv:pass -> hash [style=bold,color=red] hash:lookup:w -> lookup [style=bold,color=green] hash:lookup:s -> purge:top:n [style=bold,color=purple] hash:lookup:s -> pass [style=bold,color=red] hash:lookup:e -> pipe [style=bold,color=orange] recv:purge:s -> hash [style=bold,color=purple] recv:vcl:s -> vcl_label vcl_label [label="switch to vcl\nLABEL",shape=plaintext] /* cnt_purge */ subgraph xcluster_purge { purge [ shape=record label="{cnt_purge:|{vcl_purge\{\}|req.*}|{fail|synth|restart}}" ] } } varnish-7.5.0/doc/graphviz/cache_req_fsm.svg000066400000000000000000001170501457605730600211150ustar00rootroot00000000000000 cache_req_fsm cluster_backend acceptor Request received recv cnt_recv: vcl_recv{} req.* fail hash purge pass pipe restart synth vcl acceptor->recv label_select LABEL label_select->recv ESI_REQ ESI request ESI_REQ->recv RESTART RESTART restart cnt_restart: fail ok? max_restarts? RESTART->restart hash cnt_recv: vcl_hash{} req.* lookup recv:hash->hash recv:pipe->hash recv:pass->hash recv:s->hash vcl_label switch to vcl LABEL recv:s->vcl_label SYNTH SYNTH synth cnt_synth: vcl_synth{} req.* resp.* fail deliver restart SYNTH->synth FAIL FAIL FAIL->synth deliver cnt_deliver: Filter obj.->resp. vcl_deliver{} req.* resp.* fail restart deliver synth V1D_Deliver V1D_Deliver deliver:s->V1D_Deliver deliver:s->V1D_Deliver deliver:s->V1D_Deliver DONE DONE V1D_Deliver->DONE stream stream? body stream->V1D_Deliver synth:s->V1D_Deliver see backend graph see backend graph BGFETCH BGFETCH FETCH FETCH FETCH_DONE FETCH_DONE FETCH->FETCH_DONE FETCH_FAIL FETCH_FAIL FETCH->FETCH_FAIL FETCH_DONE->deliver FETCH_DONE->deliver FETCH_FAIL->synth lookup2 cnt_lookup: vcl_hit{} req.* obj.* fail deliver pass restart synth lookup2:s->deliver:n lookup2:s->BGFETCH parallel if obj expired pass cnt_pass: vcl_pass{} req.* fail fetch synth restart lookup2:s->pass lookup cnt_lookup: hash lookup hit? miss? hit-for-miss? hit-for-pass? busy? lookup:s->lookup2 lookup:s->lookup:ne waitinglist miss cnt_miss: vcl_miss{} req.* fail fetch synth restart pass lookup:s->miss lookup:s->miss req. is_hitmiss lookup:s->pass req. is_hitpass miss:s->FETCH miss:s->pass pass:s->FETCH pipe cnt_pipe: filter req.*->bereq.* vcl_pipe{} req.* bereq.* fail pipe synth pipe_do send bereq, copy bytes until close pipe:pipe->pipe_do pipe_do->DONE restart:s->recv err_restart SYNTH restart:s->err_restart hash:w->lookup hash:s->pass hash:e->pipe purge cnt_purge: vcl_purge{} req.* fail synth restart hash:s->purge:n varnish-7.5.0/doc/sphinx/000077500000000000000000000000001457605730600152705ustar00rootroot00000000000000varnish-7.5.0/doc/sphinx/Makefile.am000066400000000000000000000154471457605730600173370ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. 
SPHINXOPTS = SPHINXBUILD = $(SPHINX) -W -q -N BUILDDIR = build ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(SPHINXOPTS) $(builddir) .PHONY: help clean html linkcheck doctest help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " linkcheck to check all external links for integrity" clean: -rm -rf $(BUILDDIR)/* $(CLEANFILES) # use index.rst as an indicator if we have copied already .PHONY: link_srcdir link_srcdir: graphviz conf.py $(BUILT_SOURCES) if test "x$(srcdir)" != "x$(builddir)" && test ! -f index.rst; then \ s=`cd $(srcdir) && pwd`; \ for f in `cd $$s && find . -type f`; do \ d=`dirname $$f`; \ test -d $$d || mkdir -p $$d; \ test -f $$f || ln -s $$s/$$f $$f; \ done \ fi # work around for make html called within doc/sphinx .PHONY: graphviz graphviz: cd ../graphviz && $(MAKE) html sphinx_prereq: link_srcdir all: link_srcdir html: sphinx_prereq $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(subdir)/$(BUILDDIR)/html." linkcheck: sphinx_prereq $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(subdir)/$(BUILDDIR)/linkcheck/output.txt." EXTRA_DIST = \ conf.py \ dev-guide \ glossary \ include \ index.rst \ installation \ phk \ tutorial \ reference \ users-guide \ vcl-design-patterns \ vtc-syntax.py \ whats-new dist-hook: $(MAKE) html rm -rf $(BUILDDIR)/doctrees cp -r $(BUILDDIR)/html $(distdir)/../ @ # Remove build artifacts. rm $(distdir)/../html/.buildinfo $(distdir)/../html/*.inv distclean-local: rm -rf $(BUILDDIR) include/cli.rst: $(top_builddir)/bin/varnishd/varnishd $(top_builddir)/bin/varnishd/varnishd -x cli > ${@}_ mv -f ${@}_ ${@} BUILT_SOURCES = include/cli.rst include/params.rst: $(top_builddir)/bin/varnishd/varnishd $(top_builddir)/bin/varnishd/varnishd -x parameter > ${@}_ mv -f ${@}_ ${@} BUILT_SOURCES += include/params.rst include/counters.rst: ln -s $(abs_top_builddir)/lib/libvsc/counters.rst $@ BUILT_SOURCES += include/counters.rst include/varnishncsa_options.rst: $(top_builddir)/bin/varnishncsa/varnishncsa $(top_builddir)/bin/varnishncsa/varnishncsa --options > ${@}_ mv -f ${@}_ ${@} include/varnishncsa_synopsis.rst: $(top_builddir)/bin/varnishncsa/varnishncsa $(top_builddir)/bin/varnishncsa/varnishncsa --synopsis > ${@}_ mv -f ${@}_ ${@} BUILT_SOURCES += include/varnishncsa_options.rst \ include/varnishncsa_synopsis.rst include/varnishlog_options.rst: $(top_builddir)/bin/varnishlog/varnishlog $(top_builddir)/bin/varnishlog/varnishlog --options > ${@}_ mv -f ${@}_ ${@} include/varnishlog_synopsis.rst: $(top_builddir)/bin/varnishlog/varnishlog $(top_builddir)/bin/varnishlog/varnishlog --synopsis > ${@}_ mv -f ${@}_ ${@} BUILT_SOURCES += include/varnishlog_options.rst \ include/varnishlog_synopsis.rst include/varnishtop_options.rst: $(top_builddir)/bin/varnishtop/varnishtop $(top_builddir)/bin/varnishtop/varnishtop --options > ${@}_ mv -f ${@}_ ${@} include/varnishtop_synopsis.rst: $(top_builddir)/bin/varnishtop/varnishtop $(top_builddir)/bin/varnishtop/varnishtop --synopsis > ${@}_ mv -f ${@}_ ${@} BUILT_SOURCES += include/varnishtop_options.rst \ include/varnishtop_synopsis.rst include/varnishhist_options.rst: $(top_builddir)/bin/varnishhist/varnishhist $(top_builddir)/bin/varnishhist/varnishhist --options > ${@}_ mv -f ${@}_ ${@} include/varnishhist_synopsis.rst: $(top_builddir)/bin/varnishhist/varnishhist $(top_builddir)/bin/varnishhist/varnishhist --synopsis > 
${@}_ mv -f ${@}_ ${@} BUILT_SOURCES += include/varnishhist_options.rst \ include/varnishhist_synopsis.rst include/varnishstat_options.rst: $(top_builddir)/bin/varnishstat/varnishstat $(top_builddir)/bin/varnishstat/varnishstat --options > ${@}_ mv -f ${@}_ ${@} include/varnishstat_synopsis.rst: $(top_builddir)/bin/varnishstat/varnishstat $(top_builddir)/bin/varnishstat/varnishstat --synopsis > ${@}_ mv -f ${@}_ ${@} include/varnishstat_bindings.rst: $(top_builddir)/bin/varnishstat/varnishstat $(top_builddir)/bin/varnishstat/varnishstat --bindings > ${@}_ mv -f ${@}_ ${@} BUILT_SOURCES += include/varnishstat_options.rst \ include/varnishstat_synopsis.rst \ include/varnishstat_bindings.rst include/vsl-tags.rst: $(top_builddir)/lib/libvarnishapi/vsl2rst $(top_builddir)/lib/libvarnishapi/vsl2rst > ${@}_ mv -f ${@}_ ${@} BUILT_SOURCES += include/vsl-tags.rst VTCSYN_SRC = \ $(top_srcdir)/bin/varnishtest/vtc.c \ $(top_srcdir)/bin/varnishtest/vtc_barrier.c \ $(top_srcdir)/bin/varnishtest/vtc_haproxy.c \ $(top_srcdir)/bin/varnishtest/vtc_http.c \ $(top_srcdir)/bin/varnishtest/vtc_http2.c \ $(top_srcdir)/bin/varnishtest/vtc_logexp.c \ $(top_srcdir)/bin/varnishtest/vtc_misc.c \ $(top_srcdir)/bin/varnishtest/vtc_process.c \ $(top_srcdir)/bin/varnishtest/vtc_syslog.c \ $(top_srcdir)/bin/varnishtest/vtc_tunnel.c \ $(top_srcdir)/bin/varnishtest/vtc_varnish.c include/vtc-syntax.rst: $(srcdir)/vtc-syntax.py $(VTCSYN_SRC) $(AM_V_GEN) $(PYTHON) $(srcdir)/vtc-syntax.py $(VTCSYN_SRC) > ${@}_ @mv -f ${@}_ ${@} BUILT_SOURCES += include/vtc-syntax.rst # XXX copy/paste rules need some TLC include/vmod_std.generated.rst: $(top_builddir)/vmod/vmod_std.rst cp $(top_builddir)/vmod/vmod_std.rst $@ BUILT_SOURCES += include/vmod_std.generated.rst include/vmod_directors.generated.rst: $(top_builddir)/vmod/vmod_directors.rst cp $(top_builddir)/vmod/vmod_directors.rst $@ BUILT_SOURCES += include/vmod_directors.generated.rst include/vmod_purge.generated.rst: $(top_builddir)/vmod/vmod_purge.rst cp $(top_builddir)/vmod/vmod_purge.rst $@ BUILT_SOURCES += include/vmod_purge.generated.rst include/vmod_vtc.generated.rst: $(top_builddir)/vmod/vmod_vtc.rst cp $(top_builddir)/vmod/vmod_vtc.rst $@ BUILT_SOURCES += include/vmod_vtc.generated.rst include/vmod_blob.generated.rst: $(top_builddir)/vmod/vmod_blob.rst cp $(top_builddir)/vmod/vmod_blob.rst $@ BUILT_SOURCES += include/vmod_blob.generated.rst include/vmod_cookie.generated.rst: $(top_builddir)/vmod/vmod_cookie.rst cp $(top_builddir)/vmod/vmod_cookie.rst $@ BUILT_SOURCES += include/vmod_cookie.generated.rst include/vmod_h2.generated.rst: $(top_builddir)/vmod/vmod_h2.rst cp $(top_builddir)/vmod/vmod_h2.rst $@ BUILT_SOURCES += include/vmod_h2.generated.rst include/vmod_unix.generated.rst: $(top_builddir)/vmod/vmod_unix.rst cp $(top_builddir)/vmod/vmod_unix.rst $@ BUILT_SOURCES += include/vmod_unix.generated.rst include/vmod_proxy.generated.rst: $(top_builddir)/vmod/vmod_proxy.rst cp $(top_builddir)/vmod/vmod_proxy.rst $@ BUILT_SOURCES += include/vmod_proxy.generated.rst EXTRA_DIST += $(BUILT_SOURCES) CLEANFILES = $(BUILT_SOURCES) .NOPATH: $(BUILT_SOURCES) varnish-7.5.0/doc/sphinx/conf.py.in000066400000000000000000000155511457605730600172030ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # Varnish documentation build configuration file, created by # sphinx-quickstart on Tue Apr 20 13:02:15 2010. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. 
# # All configuration values have a default; values that are commented out # serve to show the default. import sys, os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.append(os.path.abspath('.')) # -- General configuration ----------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.todo'] # Add any paths that contain templates here, relative to this directory. templates_path = ['=templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Varnish Cache' copyright = u'2010-2014, Varnish Software AS' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '@VERSION@' # The full version, including alpha/beta/rc tags. release = '@VERSION@' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. #unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_patterns = ['build','include/*.rst','reference/vcl_*.rst'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. import sphinx if sphinx.__version__ >= '1.3.1': html_theme = 'classic' else: html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. 
# # topp background: #437EB2 # left column: #EEEEEE; # h3: #222222; # color: #222222; # a: #336590 html_theme_options = { "bgcolor" : "white", "relbarbgcolor" : "#437EB2", "relbartextcolor" : "white", "sidebarbgcolor" : "#EEEEEE", "sidebartextcolor" : "#222222", "sidebarlinkcolor" : "#336590", "textcolor" : "#222222", "linkcolor" : "#336590", # "codebgcolor" : "#EEEEEE", "codetextcolor" : "#222222", "headtextcolor" : "#222222", "headlinkcolor" : "#336590", } # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". html_title = "Varnish version @VERSION@ documentation" # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". #html_static_path = ['=static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. html_use_smartypants = False # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_use_modindex = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'Varnishdoc' # -- Options for LaTeX output -------------------------------------------------- # The paper size ('letter' or 'a4'). #latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'Varnish.tex', u'Varnish Administrator documentation', u'Varnish Cache Project', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # Additional stuff for the LaTeX preamble. # latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. 
#latex_use_modindex = True varnish-7.5.0/doc/sphinx/dev-guide/000077500000000000000000000000001457605730600171415ustar00rootroot00000000000000varnish-7.5.0/doc/sphinx/dev-guide/homepage_contrib.rst000066400000000000000000000107351457605730600232060ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _homepage_contrib: How to contribute content to varnish-cache.org ============================================== This is where we walk you through the mechanics of adding content to varnish-cache.org (see phk's note :ref:`homepage_dogfood` for an insight into the innards of site). Git Repository -------------- The web site contents live in github at: https://github.com/varnishcache/homepage To offer your own contribution, fork the project and send us a pull request. Sphinx and RST -------------- The web site sources are written in `RST `_ -- reStructuredText, the documentation format originally conceived for Python (and also used in the Varnish distribution, as well as for formatting VMOD docs). `Sphinx `_ is used to render web pages from the RST sources. So you'll need to `learn markup with RST and Sphinx `_; and you will need to `install Sphinx `_ to test the rendering on your local system. Makefile -------- Generation of web contents from the sources is driven by the ``Makefile`` in the ``R1`` directory of the repo:: $ cd R1 $ make help Please use `make ' where is one of html to make standalone HTML files dirhtml to make HTML files named index.html in directories singlehtml to make a single large HTML file pickle to make pickle files json to make JSON files htmlhelp to make HTML files and a HTML help project qthelp to make HTML files and a qthelp project applehelp to make an Apple Help Book devhelp to make HTML files and a Devhelp project epub to make an epub latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter latexpdf to make LaTeX files and run them through pdflatex latexpdfja to make LaTeX files and run them through platex/dvipdfmx text to make text files man to make manual pages texinfo to make Texinfo files info to make Texinfo files and run them through makeinfo gettext to make PO message catalogs changes to make an overview of all changed/added/deprecated items xml to make Docutils-native XML files pseudoxml to make pseudoxml-XML files for display purposes linkcheck to check all external links for integrity doctest to run all doctests embedded in the documentation (if enabled) coverage to run coverage check of the documentation (if enabled) Most of the time, you'll just need ``make html`` to test the rendering of your contribution. alabaster theme --------------- We use the `alabaster theme `_, which you may need to add to your local Python installation:: $ sudo pip install alabaster We have found that you may need to link the alabaster package install directory to the directory where Sphinx expects to find themes. 
For example (on my machine), alabaster was installed into:: /usr/local/lib/python2.7/dist-packages/alabaster And Sphinx expects to find themes in:: /usr/share/sphinx/themes So to get the make targets to run successfully:: $ cd /usr/share/sphinx/themes $ ln -s /usr/local/lib/python2.7/dist-packages/alabaster Test the rendering ------------------ Now you can edit contents in the website repo, and test the rendering by calling make targets in the ``R1`` directory:: $ cd $REPO/R1 $ make html sphinx-build -b html -d build/doctrees source build/html Running Sphinx v1.2.3 loading pickled environment... done building [html]: targets for 1 source files that are out of date updating environment: 0 added, 1 changed, 0 removed reading sources... [100%] tips/contribdoc/index looking for now-outdated files... none found pickling environment... done checking consistency... done preparing documents... done writing output... [100%] tips/index writing additional files... genindex search copying static files... done copying extra files... done dumping search index... done dumping object inventory... done build succeeded. After a successful build, the newly rendered contents are saved in the ``R1/source/build`` directory, so you can have a look with your browser. Send us a pull request ---------------------- When you have your contribution building successfully, send us a PR, we'll be happy to hear from you! varnish-7.5.0/doc/sphinx/dev-guide/homepage_dogfood.rst000066400000000000000000000117231457605730600231650ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _homepage_dogfood: How our website works ===================== The principle of eating your own dogfood is important for software quality, that is how you experience what your users are dealing with, and I am not the least ashamed to admit that several obvious improvements have happened to Varnish as a result of running the project webserver. But it is also important to externalize what you learn doing so, and therefore I thought I would document here how the projects new "internal IT" works. Hardware -------- Who cares? Yes, we use some kind of hardware, but to be honest I don't know what it is. Our primary site runs on a `RootBSD 'Omega' `_ virtual server somewhere near CDG/Paris. And as backup/integration/testing server we can use any server, virtual or physical, as long as it has a internet connection and contemporary performance, because the entire install is scripted and under version control (more below). Operating System ---------------- So, dogfood: Obviously FreeBSD. Apart from the obvious reason that I wrote a lot of FreeBSD and can get world-class support by bugging my buddies about it, there are two equally serious reasons for the Varnish Project to run on FreeBSD: Dogfood and jails. Varnish Cache is not "software for Linux", it is software for any competent UNIX-like operating system, and FreeBSD is our primary "keep us honest about this" platform. Jails ----- You have probably heard about Docker and Containers, but FreeBSD have had jails `since I wrote them in 1998 `_ and they're a wonderful way to keep your server installation sane. 
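For readers who have never met jails, this is roughly what they look like
from the host's shell. The output below is hypothetical and purely for
illustration (names, addresses and paths are made up); the project's
actual jails are listed next::

    $ jls
       JID  IP Address      Hostname     Path
         1  192.0.2.10      hitch        /jails/hitch
         2  192.0.2.11      varnish      /jails/varnish
         3  192.0.2.12      tools        /jails/tools
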
We currently have three jails: * Hitch - runs the `Hitch SSL proxy `_ * Varnish - You guessed it * Tools - backend webserver, currently `ACME Labs' thttpd `_ Script & Version Control All The Things --------------------------------------- We have a git repos with shell scripts which create these jails from scratch and also a script to configure the host machine properly. That means that the procedure to install a clone of the server is, unabridged:: # Install FreeBSD (if not already done by hosting) # Configure networking (if not already done by hosting) # Set the clock service ntpdate forcestart # Get git env ASSUME_ALWAYS_YES=yes pkg install git # Clone the private git repo git clone ssh://example.com/root/Admin # Edit the machines IP numbers in /etc/pf.conf # Configure the host sh build_host.sh |& tee _.bh # Build the jails foreach i (Tools Hitch Varnish) (cd $i ; sh build* |& tee _.bj) end From bare hardware to ready system in 15-30 minutes. It goes without saying that this git repos contains stuff like ssh host keys, so it should *not* go on github. Backups ------- Right now there is nothing we absolutely have to backup, provided we have an up to date copy of the Admin git repos. In practice we want to retain history for our development tools (VTEST, GCOV etc.) and I rsync those file of the server on a regular basis. The Homepage ------------ The homepage is built with `Sphinx `_ and lives in its own `github project `_ (Pull requests are very welcome!) We have taken snapshots of some of the old webproperties, Trac, the Forum etc as static HTML copies. Why on Earth... --------------- It is a little bit tedious to get a setup like this going, whenever you tweak some config file, you need to remember to pull the change back out and put it in your Admin repos. But that extra effort pays of so many times later. You never have to wonder "who made that change and why" or even try to remember what changes were needed in the first place. For us as a project, it means, that all our sysadmin people can build a clone of our infrastructure, if they have a copy of our "Admin" git repos and access to github. And when `FreeBSD 11 `_ comes out, or a new version of sphinx or something else, mucking about with things until they work can be done at leisure without guess work. (We're actually at 12 now, but the joke is too good to delete.) For instance I just added the forum snapshot, by working out all the kinks on one of my test-machines. Once it was as I wanted it, I pushed the changes the live machine and then:: varnishadm vcl.use backup # The 'backup' VCL does a "pass" of all traffic to my server cd Admin git pull cd Tools sh build_j_tools.sh |& tee _.bj varnishadm vcl.load foobar varnish-live.vcl varnishadm vcl.use foobar For a few minutes our website was a bit slower (because of the extra Paris-Denmark hop), but there was never any interruption. And by doing it this way, I *know* it will work next time also. 2016-04-25 /phk PS: All that buzz about "reproducible builds" ? Yeah, not a new idea. varnish-7.5.0/doc/sphinx/dev-guide/index.rst000066400000000000000000000054771457605730600210170ustar00rootroot00000000000000.. Copyright (c) 2016-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _dev-guide-index: The Varnish Developers Guide ============================ This is the deliberately short and to the point list of things Varnish Developers should know. Behaviour --------- * Be sensible. * If in doubt, think. * If still in doubt, ask. 
* Admit your mistakes, it's faster that way.

* Thou SHALL not paint `bikesheds. `_

* We will toss you out of the project rather than add another rule.

Technical stuff
----------------

* Our coding style guideline is FreeBSD's `style(9) `_

* See the autogen.des script for developer options to the toolchain.

* We always -Werror: there are no harmless warnings, only source code that
  does not express intent well enough.

* We prefer that the source code, rather than the comments, explains what
  is going on; that way tools like FlexeLint and Coverity also get a
  chance.

* Our reference platforms are Ubuntu and FreeBSD.

* Asserts have negative cost, they save developer time next time around.

* Our license is BSD 2-clause or looser, no GPL or LGPL.

* It took 11 years for the first major security issue, and that was too
  soon.

Bugs, issues, feature requests & VIPs
-------------------------------------

Bugs, issues and feature requests start out as github issues.

Monday at 15:00-16:00 (EU time) we "bug-wash" on IRC (#varnish-hacking on
irc.linpro.no) to decide who deals with which issues, and how.

Issues we cannot do anything about are closed.

If feature-requests make sense, they get moved to a wiki/VIP page until
somebody implements them.

Varnishtest cases for bugs are the norm, not the exception.

Architectural stuff
-------------------

These rules are imported from the X11 project:

* It is as important to decide what a system is not as to decide what it
  is.

* Do not serve all the world's needs; rather, make the system extensible
  so that additional needs can be met in an upwardly compatible fashion.

* The only thing worse than generalizing from one example is generalizing
  from no examples at all.

* If a problem is not completely understood, it is probably best to
  provide no solution at all.

* If you can get 90 percent of the desired effect for 10 percent of the
  work, use the simpler solution.

* Isolate complexity as much as possible.

* Provide mechanism, rather than policy.

Various policies
----------------

.. toctree::
   :maxdepth: 1

   Varnish Cache organization and day-to-day operation
   policy_vmods

The varnish-cache.org homepage
------------------------------

.. toctree::
   :maxdepth: 1

   homepage_dogfood
   homepage_contrib

Project metadata
----------------

.. toctree::
   :maxdepth: 1

   who

varnish-7.5.0/doc/sphinx/dev-guide/policy_vmods.rst000066400000000000000000000024111457605730600224010ustar00rootroot00000000000000..
	Copyright (c) 2019 Varnish Software AS
	SPDX-License-Identifier: BSD-2-Clause
	See LICENSE file for full text of license

.. _policy-vmods:

Bundling VMODs with the Varnish distribution
--------------------------------------------

Decisions about whether to add a new Varnish module (VMOD) to those bundled
with Varnish are guided by these criteria.

* The VMOD is known to be in widespread use and in high demand for common
  use cases.

* Or, if the VMOD is relatively new, it provides compelling features that
  the developer group agrees will be a valuable enhancement for the
  project.

* The VMOD does not create dependencies on additional external libraries.
  VMODs that are "glue" for a library come from third parties.

* We don't want to add new burdens of dependency and compatibility to the
  project.

* We don't want to force Varnish deployments to install more than admins
  explicitly choose to install.

* The VMOD code follows project conventions (passes make distcheck,
  follows source code style, and so forth).

* A pull request can demonstrate that this is the case (after any
  necessary fixups).
* The developer group commits to maintaining the code for the long run (so there will have to be a consensus that we're comfortable with it). varnish-7.5.0/doc/sphinx/dev-guide/who.rst000066400000000000000000000135721457605730600205000ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _who_is: Who is ... ? ============ Not quite `Twurp's Peerage `_ but a Who's Who of the Varnish Cache project. Anders Berg ~~~~~~~~~~~ Blame Anders! He is the one who got the crazy idea that the world needed another HTTP proxy server software, and convinced his employer, the norwegian newspaper `Verdens Gang `_ to pay for the first version to be developed. Here is an interview with Anders about `how it all began `_ Dag-Erling Smørgrav ~~~~~~~~~~~~~~~~~~~ DES was working at Redpill-Linpro, a norwegian UNIX/Open Source company when Anders floated his idea for a "forward HTTP cache", he lured PHK into joining, was one of the original developers (doing Linux), project manager and release engineer for the first three years of the project, and forced us to adopt a non-US-ASCII charset from the start. Poul-Henning Kamp ~~~~~~~~~~~~~~~~~ PHK, as he's usually known, has written most of the code and come up with most of the crazy ideas in Varnish, and yet he still has trouble remembering what 'REST', 'CORS' and 'ALPN' means, and he flunked 'CSS for dummies' because he was never a webmaster or webdeveloper. He does have 30+ years of experience in systems programming, and that seems useful too. PHK's `random outbursts `_ has their own section in the Varnish documentation. Per Buer ~~~~~~~~ Per also worked at Redpill-Linpro, and at some point when the impedance mismatch between Linpros "normal way of doing things" and the potential of Varnish became to steep, he convinced the company to spin off `Varnish Software `_ with himself at the helm. Do a git blame on the Varnish documentation and you will be surprised to see how much he cares about it. Very few people notice this. Ingvar Hagelund ~~~~~~~~~~~~~~~ Ingvar works as Team Leader (read very skilled sysadmin) at Redpill-Linpro, but his passion is reading books and blogging about it, as well as RPM packaging. So every Fedora and EPEL (read RedHat and CentOS) Varnish user out there owe him a thanks or two. Once in a while, he also trawls the internet checking for the rate of Varnish adoption among top web sites. Stig Sandbeck Mathisen ~~~~~~~~~~~~~~~~~~~~~~ Stig works at Redpill-Linpro and is the guy in charge of packaging Varnish for Debian, which means Ubuntu users owe him a thanks also. Besides this, he maintains VCL-mode for emacs and is generally a nice and helpful guy. Tollef Fog Heen ~~~~~~~~~~~~~~~ Tollef was product owner and responsible for Varnish while working for Redpill-Linpro. later tech lead at Varnish Software and held the Varnish release manager helmet for a few years. His experience with open source (Debian, Ubuntu and many others) brought sanity to the project in ways that are hard to measure or describe. Kristian Lyngstøl ~~~~~~~~~~~~~~~~~ Kristian was the first Varnish SuperUser, and he quite literally wrote the book, while giving Varnish courses for Redpill-Linpro, and he pushed boundaries where no boundaries had been pushed before which caused a lot of improvements and "aha!" moments in Varnish. 
Artur Bergman ~~~~~~~~~~~~~ Artur ran Wikias webservers and CDN when he discovered Varnish and eagerly adopted it, causing many bugreports, suggestions, patches and improvements. At some point, he pivoted Wikias CDN into the Varnish based startup-CDN named `Fastly `_ Kacper Wysocki ~~~~~~~~~~~~~~ Kacper was probably the first VCL long program writer. Combine this with an interest in security and a job at Redpill-Linpro and he turned quickly into the author of security.vcl and, later, the Varnish Security Firewall. He does not have any commits in Varnish and still has managed to drive quite a few changes into the project. Similarly, he has no idea or has even thought about asking for it, and still is being added here He maintains the VCL grammar in BNF notation, which is an unexploited gold mine. Nils Goroll ~~~~~~~~~~~ aka 'slink' is the founder of `UPLEX `_, a five-head tech / consultancy company with negative to zero marketing (applied for entry into the "Earth's worst company homepage" competition). He fell in love with Varnish when he migrated Germany's Verdens Gang counterpart over a weekend in March 2009 and, since then, has experienced countless moments of pure joy and happiness when, after struggling for hours, he finally understood another piece of beautiful, ingenious Varnish code. Nils' primary focus are his clients and their projects. He tries to make those improvements to Varnish which matter to them. Martin Blix Grydeland ~~~~~~~~~~~~~~~~~~~~~ Martin was the first full-time member of the C-team at Varnish Software. He is the main responsible for the amazing revamp of the logging facilities and utilities in the 4.0 cycle and later the storage rework. Besides that he fixes lots of bugs, knows varnishtest better than most, writes vmods and is the Varnish Cache Plus architect. Lasse Karstensen ~~~~~~~~~~~~~~~~ Lasse is the current release manager and stable version maintainer of Varnish Cache. When not doing that, he maintains build infrastructure and runs the Varnish Software C developer team in Oslo. Geoff Simmons ~~~~~~~~~~~~~ Geoff started working at UPLEX in 2010 and soon learned to love Varnish as much as slink does. Since then he's been contributing code to the project, writing up various VMODs (mostly about regular expressions, blobs, backends and directors), developing standalone applications for logging that use Martin's VSL API, and adding custom patches to Varnish for various customer needs. He spends most of his days in customer projects as "the Varnish guy" on the operations teams. varnish-7.5.0/doc/sphinx/glossary/000077500000000000000000000000001457605730600171335ustar00rootroot00000000000000varnish-7.5.0/doc/sphinx/glossary/index.rst000066400000000000000000000072001457605730600207730ustar00rootroot00000000000000.. Copyright (c) 2010-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _glossary: Varnish Glossary ================ .. glossary:: :sorted: .. This file will be sorted automagically during formatting, so we keep the source in subject order to make sure we cover all bases. .. comment: "components of varnish --------------------------------" varnishd (NB: with 'd') This is the actual Varnish cache program. There is only one program, but when you run it, you will get *two* processes: The "master" and the "worker" (or "child"). master (process) One of the two processes in the varnishd program. 
      The master process is a manager/nanny process which handles
      configuration, parameters, compilation of :term:`VCL` etc., but it
      never gets near the actual HTTP traffic.

   worker (process)
      The worker process is started and configured by the master process.
      This is the process that does all the work you actually want varnish
      to do. If the worker dies, the master will try to start it again, to
      keep your website alive.

   backend
      The HTTP server varnishd is caching for. This can be any sort of
      device that handles HTTP requests, including, but not limited to: a
      webserver, a CMS, a load-balancer, another varnishd, etc.

   client
      The program which sends varnishd an HTTP request, typically a
      browser, but do not forget to think about spiders, robots,
      script-kiddies and criminals.

   varnishstat
      Program which presents varnish statistics counters.

   varnishlog
      Program which presents the varnish transaction log in native format.

   varnishtop
      Program which gives a real-time "top-X" list view of the transaction
      log.

   varnishncsa
      Program which presents the varnish transaction log in "NCSA" format.

   varnishhist
      Eye-candy program showing a response time histogram in 1980s
      ASCII-art style.

   varnishtest
      Program to test varnishd's behaviour with, simulates backend and
      client according to test-scripts.

   .. comment: "components of traffic ---------------------------------"

   header
      An HTTP protocol header, like "Accept-Encoding:".

   request
      What the client sends to varnishd and varnishd sends to the backend.

   response
      What the backend returns to varnishd and varnishd returns to the
      client. When the response is stored in varnishd's cache, we call it
      an object.

   backend response
      The response specifically served from a backend to varnishd. The
      backend response may be manipulated in vcl_backend_response.

   body
      The bytes that make up the contents of the object. varnishd does not
      care if they are in HTML, XML, JPEG or even EBCDIC; to varnishd they
      are just bytes.

   object
      The (possibly) cached version of a backend response. varnishd
      receives a response from the backend and creates an object, from
      which it may deliver cached responses to clients. If the object is
      created as a result of a request which is passed, it will not be
      stored for caching.

   .. comment: "configuration of varnishd -----------------------------"

   VCL
      Varnish Configuration Language, a small specialized language for
      instructing Varnish how to behave.

   .. comment: "actions in VCL ----------------------------------------"

   hit
      An object Varnish delivers from cache.

   miss
      An object Varnish fetches from the backend before it is served to
      the client. The object may or may not be put in the cache, that
      depends.

   pass
      An object Varnish does not try to cache, but simply fetches from the
      backend and hands to the client.

   pipe
      Varnish just moves the bytes between client and backend, it does not
      try to understand what they mean.

varnish-7.5.0/doc/sphinx/index.rst000066400000000000000000000047731457605730600171420ustar00rootroot00000000000000..
	Copyright (c) 2010-2020 Varnish Software AS
	SPDX-License-Identifier: BSD-2-Clause
	See LICENSE file for full text of license

Varnish Documentation
=====================

Varnish Cache is a web application accelerator also known as a caching HTTP
reverse proxy. You install it in front of any server that speaks HTTP and
configure it to cache the contents.

Varnish Cache is really, really fast. It typically speeds up delivery with
a factor of 300 - 1000x, depending on your architecture.
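As a purely illustrative sketch (the listen port and backend address are
examples, not a recommendation), a minimal setup can be as simple as
pointing varnishd at an existing HTTP backend::

    # assumption: a backend web server already answers on 127.0.0.1:8080
    varnishd -a :6081 -b 127.0.0.1:8080

Real deployments will normally use a VCL file (``-f``) and tuned storage
and listen options instead; the chapters below cover that in detail.
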
To get started with Varnish-Cache we recommend that you read the
installation guide :ref:`install-index`. Once you have Varnish up and
running we recommend that you go through our tutorial -
:ref:`tutorial-index`, and finally the :ref:`users-guide-index`.

If you need to find out how to use a specific Varnish tool, the
:ref:`reference-index` contains detailed documentation of the tools.
Changes from previous versions are located in the :ref:`whats-new-index`
chapter.

In closing, we have :ref:`phk`, a collection of blog posts from
Poul-Henning Kamp related to Varnish and HTTP.

Conventions used in this manual include:

``service varnish restart``
   A command you can run, or a shortkey you can press. Used either in the
   terminal or after starting one of the tools.

`/usr/local/`, `varnishadm`, `sess_timeout`
   A utility, Varnish configurable parameter or path.

https://www.varnish-cache.org/
   A hyperlink.

Longer listings like example command output and VCL look like this::

    $ /opt/varnish/sbin/varnishd -V
    varnishd (varnish-7.5.0 revision 1234567)
    Copyright (c) 2006 Verdens Gang AS
    Copyright (c) 2006-2024 Varnish Software

.. For maintainers:
.. * always write Varnish with a capital V: Varnish, Varnish Cache.
.. * Write Varnish tools as their executable name: `varnishd`, `varnishadm`.
.. * if part of a command actually runnable by the reader, use double backticks:
..   ``varnishd -f foo.c``
.. * wrap lines at 80 characters, indent with 4 spaces. No tabs, please.

.. We use the following header indicators
.. For titles:
.. H1
.. %%%%%
.. Title
.. %%%%%
.. H2 - H5
.. ======================
.. ----------------------
.. ~~~~~~~~~~~~~~~~~~~~~~
.. ......................

.. toctree::
   :maxdepth: 1

   installation/index.rst
   tutorial/index.rst
   users-guide/index.rst
   reference/index.rst
   vcl-design-patterns/index.rst
   whats-new/index.rst
   dev-guide/index.rst
   phk/index.rst
   glossary/index.rst

Indices and tables
------------------

* :ref:`genindex`
* :ref:`search`

varnish-7.5.0/doc/sphinx/installation/000077500000000000000000000000001457605730600177715ustar00rootroot00000000000000varnish-7.5.0/doc/sphinx/installation/bugs.rst000066400000000000000000000141351457605730600214670ustar00rootroot00000000000000..
	Copyright (c) 2010-2017 Varnish Software AS
	SPDX-License-Identifier: BSD-2-Clause
	See LICENSE file for full text of license

%%%%%%%%%%%%%%
Reporting bugs
%%%%%%%%%%%%%%

Varnish can be a tricky beast to debug: having potentially thousands of
threads crowding into a few data structures makes for *interesting* core
dumps. Actually, let me rephrase that without irony: You tire of the "no,
not thread 438 either, let's look at 439 then..." routine really fast.

So if you run into a bug, it is important that you spend a little bit of
time collecting the right information, to help us fix the bug.

The most valuable information you can give us, is **always** how to trigger
and reproduce the problem. If you can tell us that, we rarely need anything
else to solve it. The caveat being, that we do not have a way to simulate
high levels of real-life web-traffic, so telling us to "have 10.000 clients
hit at once" does not really allow us to reproduce.

To report a bug please follow the suggested procedure described in the
"Trouble Tickets" section of the documentation (above).

Roughly, we categorize bugs into three kinds (described below). The
information we need to debug them depends on which kind of bug we are
facing.
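Whatever the kind of bug, it rarely hurts to include some basic facts about
your installation in the report. A few commands whose output is usually
useful (purely a suggestion, adjust to your setup)::

    varnishd -V                 # exact Varnish version and revision
    uname -a                    # operating system and architecture
    varnishstat -1              # one-shot dump of the counters
    varnishadm panic.show       # last panic message, if there was one

With that out of the way, on to the three kinds of bugs.
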
Varnish crashes =============== Plain and simple: **boom** Varnish is split over two processes, the manager and the child. The child does all the work, and the manager hangs around to resurrect it if it crashes. Therefore, the first thing to do if you see a Varnish crash, is to examine your syslogs to see if it has happened before. (One site is rumoured to have had Varnish restarting every 10 minutes and *still* provide better service than their CMS system.) When it crashes, which is highly unlikely to begin with, Varnish will spew out a crash dump that looks something like:: Child (32619) died signal=6 (core dumped) Child (32619) Panic message: Assert error in ccf_panic(), cache_cli.c line 153: Condition(!strcmp("", "You asked for it")) not true. errno = 9 (Bad file descriptor) thread = (cache-main) ident = FreeBSD,9.0-CURRENT,amd64,-sfile,-hcritbit,kqueue Backtrace: 0x42bce1: pan_ic+171 0x4196af: ccf_panic+4f 0x8006b3ef2: _end+80013339a 0x8006b4307: _end+8001337af 0x8006b8b76: _end+80013801e 0x8006b8d84: _end+80013822c 0x8006b51c1: _end+800134669 0x4193f6: CLI_Run+86 0x429f8b: child_main+14b 0x43ef68: start_child+3f8 [...] If you can get that information to us, we are usually able to see exactly where things went haywire, and that speeds up bugfixing a lot. There will be a lot more information in the crash dump besides this, and before sending it all to us, you should obscure any sensitive/secret data/cookies/passwords/ip# etc. Please make sure to keep context when you do so, ie: do not change all the IP# to "X.X.X.X", but change each IP# to something unique, otherwise we are likely to be more confused than informed. The most important line is the "Panic Message", which comes in two general forms: "Missing errorhandling code in ..." This is a situation where we can conceive Varnish ending up, which we have not (yet) written the padded-box error handling code for. The most likely cause here, is that you need a larger workspace for HTTP headers and Cookies. Please try that before reporting a bug. "Assert error in ..." This is something bad that should never happen, and a bug report is almost certainly in order. As always, if in doubt ask us on IRC before opening the ticket. .. (TODO: in the ws-size note above, mention which params to tweak) In your syslog it may all be joined into one single line, but if you can reproduce the crash, do so while running :ref:`varnishd(1)` manually: ``varnishd -d |& tee /tmp/_catch_bug`` That will get you the entire panic message into a file. (Remember to type ``start`` to launch the worker process, that is not automatic when ``-d`` is used.) Varnish goes on vacation ======================== This kind of bug is nasty to debug, because usually people tend to kill the process and send us an email saying "Varnish hung, I restarted it" which gives us only about 1.01 bit of usable debug information to work with. What we need here is all the information you can squeeze out of your operating system **before** you kill the Varnish process. One of the most valuable bits of information, is if all Varnish' threads are waiting for something or if one of them is spinning furiously on some futile condition. Commands like ``top -H`` or ``ps -Haxlw`` or ``ps -efH`` should be able to figure that out. .. XXX:Maybe a short description of what valuable information the various commands above generates? 
/benc If one or more threads are spinning, use ``strace`` or ``ktrace`` or ``truss`` (or whatever else your OS provides) to get a trace of which system calls the Varnish process issues. Be aware that this may generate a lot of very repetitive data, usually one second worth of data is more than enough. Also, run :ref:`varnishlog(1)` for a second, and collect the output for us, and if :ref:`varnishstat(1)` shows any activity, capture that also. When you have done this, kill the Varnish *child* process, and let the *master* process restart it. Remember to tell us if that does or does not work. If it does not, kill all Varnish processes, and start from scratch. If that does not work either, tell us, that means that we have wedged your kernel. Varnish does something wrong ============================ These are the easy bugs: usually all we need from you is the relevant transactions recorded with :ref:`varnishlog(1)` and your explanation of what is wrong about what Varnish does. Be aware, that often Varnish does exactly what you asked it to, rather than what you intended it to do. If it sounds like a bug that would have tripped up everybody else, take a moment to read through your VCL and see if it really does what you think it does. You can also try setting the ``vsl_mask=+VCL_trace`` parameter (or use ``varnishadm param.set vsl_mask +VCL_trace`` on a running instance), that will generate log records with like and character number for each statement executed in your VCL program. varnish-7.5.0/doc/sphinx/installation/help.rst000066400000000000000000000072541457605730600214630ustar00rootroot00000000000000.. Copyright (c) 2010-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license %%%%%%%%%%%% Getting help %%%%%%%%%%%% Getting hold of the gang behind Varnish is pretty straight forward, we try to help out as much as time permits and have tried to streamline this process as much as possible. But before you grab hold of us, spend a moment composing your thoughts and formulate your question. From our perspective there is nothing as pointless as simply telling us "Varnish does not work for me" with no further information. This does not give us any relevant information to use when trying to figure out whats wrong. And before you even do that, do a couple of searches to see if your question is already answered, if it has been, you will get your answer much faster that way. IRC Channel =========== The most immediate way to get hold of us is to join our IRC channel: `#varnish on server irc.linpro.no` The main timezone of the channel is Europe work hours. If you can explain your problem in a few clear sentences, without too much copy&paste, IRC is a good way to try to get help. If you do need to paste log files, VCL and so on, please use a pastebin_ service. If the channel is all quiet, try again some time later, we do have lives, families and jobs to deal with also. You are more than welcome to just hang out, and while we don't mind the occasional intrusion from the real world into our flow, we try and keep it mostly on topic, and please don't paste random links unless they are *really* funny, spectacular and intelligent. Mailing Lists ============= Subscribing or unsubscribing to our mailing lists is handled through mailman_. If you are going to use Varnish, subscribing to our `varnish-announce` mailing list is a very good idea. The typical pattern is that people spend some time getting Varnish running, and then more or less forget about it. 
Therefore the announce list is a good way to be reminded about new releases, bugs or potential (security) vulnerabilities. The `varnish-misc` mailing list is for general banter, questions, suggestions, ideas and so on. If you are new to Varnish it may pay off to subscribe to it, simply to have an ear to the telegraph-pole and potentially learn some smart tricks. This is also a good place to ask for help with more complex issues, that may require file-chunks, references to files and/or long explanations. Make sure to pick a good subject line, and if the subject of the thread changes, please change the subject to match, some of us deal with hundreds of emails per day, after spam-filters, and we need all the help we can get to pick the interesting ones. The `varnish-dev` mailing list is used by the developers and is usually quite focused on source-code and such. Everybody on the `-dev` list is also on `-misc`, so cross-posting only serves to annoy those people. Trouble Tickets =============== Our bugtracker lives on Github, but please do not open a trouble ticket, unless you have spotted an actual bug in Varnish. Ask on IRC first if you are in doubt. The reason for this policy, is to avoid bugs being drowned in a pile of other `issues`, feature suggestions for future releases, and double postings of calls for help from people who forgot to check back on already opened Tickets. New ideas may get parked in our Github wiki, until we have time for them, or until we have thought out a good design. Commercial Support ================== If you need commercial support, there are companies which offer that and you can find a `list on our homepage. `_. .. _mailman: https://www.varnish-cache.org/lists/mailman/listinfo .. _pastebin: https://gist.github.com/ varnish-7.5.0/doc/sphinx/installation/index.rst000066400000000000000000000006621457605730600216360ustar00rootroot00000000000000.. Copyright (c) 2010-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _install-index: Varnish Installation ==================== This section covers installation prerequisites, a step-by-step installation procedure, how and where to get help, and how to report bugs. .. toctree:: :maxdepth: 2 prerequisites.rst install.rst help.rst bugs.rst platformnotes.rst varnish-7.5.0/doc/sphinx/installation/install.rst000066400000000000000000000027421457605730600221760ustar00rootroot00000000000000.. Copyright (c) 2010-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _install-doc: Installing Varnish ================== .. no section heading here. With open source software, you can choose to install binary packages or compile it yourself from source code. To install a package or compile from source is a matter of personal taste. If you don't know which method to choose, we recommend that you read this whole section and then choose the method you feel most comfortable with. Unfortunately, something as basic as installing a piece of software is highly operating system specific: .. toctree:: :maxdepth: 2 install_debian install_freebsd install_openbsd install_redhat Compiling Varnish from source ============================= If there are no binary packages available for your system, or if you want to compile Varnish from source for other reasons: .. 
toctree:: :maxdepth: 2 install_source Other pre-built Varnish packages ================================ Here is a list of the ones we know about: * `ArchLinux package`_ and `ArchLinux wiki`_ * `Alpine Linux`_ * `UPLEX Packages`_ with various vmods for Debian, Ubuntu and RHEL/CentOS .. _`ArchLinux package`: https://www.archlinux.org/packages/extra/x86_64/varnish/ .. _`ArchLinux wiki`: https://wiki.archlinux.org/index.php/Varnish .. _`Alpine Linux`: https://pkgs.alpinelinux.org/package/edge/main/x86_64/varnish .. _`UPLEX Packages`: https://pkg.uplex.de/ varnish-7.5.0/doc/sphinx/installation/install_debian.rst000066400000000000000000000023061457605730600234740ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _install-debian: Installing on Debian/Ubuntu =========================== From package ------------ Type:: sudo apt-get install varnish Official packages of 6 ---------------------- Starting from Varnish Cache 5.0, we've simplified our packaging down to two: the main package and a development package. The official Varnish Cache repository is now hosted at Packagecloud.io. Note that while Packagecloud.io provides Bash Script installs, we recommend using the manual installation procedures. Instructions for installing the official repository which contains the newest Varnish Cache 6 release are available at: * https://packagecloud.io/varnishcache/varnish60lts/install#manual-deb With the release of 6.0.2, users have to switch to switch repositories to get the latest version. Read more about this on `Release 6.0.2 `_. Official packages of 4.1 ------------------------ To use Varnish Cache 4.1 packages from the official varnish-cache.org repos, follow the instructions available at: * https://packagecloud.io/varnishcache/varnish41/install#manual-deb varnish-7.5.0/doc/sphinx/installation/install_freebsd.rst000066400000000000000000000012351457605730600236640ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _install-freebsd: Installing on FreeBSD ===================== From package ------------ FreeBSD offers two versions of Varnish pre-packaged:: pkg install varnish6 or, if for some reason you want the older version:: pkg install varnish4 From ports ---------- The FreeBSD packages are built out of the "ports" tree, and you can install varnish directly from ports if you prefer, for instance to get a newer version of Varnish than the current set of prebuilt packages provide:: cd /usr/ports/www/varnish6 make all install clean varnish-7.5.0/doc/sphinx/installation/install_openbsd.rst000066400000000000000000000005761457605730600237130ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _install-openbsd: Installing on OpenBSD ===================== From package ------------ Varnish is distributed in the OpenBSD ports collection as 'www/varnish':: pkg_add varnish From ports ---------- :: cd /usr/ports/www/varnish make install varnish-7.5.0/doc/sphinx/installation/install_redhat.rst000066400000000000000000000042441457605730600235240ustar00rootroot00000000000000.. Copyright (c) 2019-2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. 
_install-redhat: Installing on RedHat or CentOS ============================== Varnish is included in the `EPEL `_ repository, however due to incompatible syntax changes in newer versions of Varnish, only older versions are available. We therefore recommend that you install the latest version directly from our repository, as described above. Varnish Cache is packaged in RPMs for easy installation and upgrade on Red Hat systems. The Varnish Cache project maintains official packages for the current Enterprise Linux versions. Varnish Cache 6.x series are supported on el7 and el8. We try to keep the latest version available as prebuilt RPMs (el7 and el8) on `packagecloud.io/varnishcache `_. Starting with el8 a DNF module will inhibit Varnish packages, and the solution is to disable the module before installing:: dnf module disable varnish Official packages of 6 ---------------------- Starting from Varnish Cache 5.0, we've simplified our packaging down to two: the main package and a development package. The official Varnish Cache repository is now hosted at Packagecloud.io. Note that while Packagecloud.io provides Bash Script installs, we recommend using the manual installation procedures. Instructions for installing the official repository which contains the newest Varnish Cache 6 release are available at: * https://packagecloud.io/varnishcache/varnish60lts/install#manual-rpm With the release of 6.0.2, users have to switch to switch repositories to get the latest version. Read more about this on `Release 6.0.2 `_. External packaging ------------------ Varnish Cache is also distributed in third party package repositories. .. _`Fedora EPEL`: https://fedoraproject.org/wiki/EPEL * `Fedora EPEL`_ does community packaging of Varnish Cache. * RedHat has packaged versions of Varnish Cache available since Software Collections 2.1. Announcement on . varnish-7.5.0/doc/sphinx/installation/install_source.rst000066400000000000000000000154671457605730600235660ustar00rootroot00000000000000.. Copyright (c) 2019-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _install-src: Compiling Varnish from source ============================= If there are no binary packages available for your system, or if you want to compile Varnish from source for other reasons, follow these steps: Getting hold of the source -------------------------- Download the appropriate release tarball, which you can find on https://varnish-cache.org/releases/ . Alternatively, if you want to hack on Varnish, you should clone our git repository by doing. ``git clone https://github.com/varnishcache/varnish-cache`` Build dependencies on FreeBSD ----------------------------- To get the dependencies required to build varnish from source you can either:: pkg install automake pkgconf py36-sphinx py36-docutils pcre2 libtool .. XXX does cpio need to be installed on FreeBSD? And optionally, to be able to run all the testcases:: pkg install haproxy nghttp2 vttest Or if you want the built from sources:: cd /usr/ports/www/varnish6 make depends clean Then continue `Compiling Varnish`_ Build dependencies on Debian / Ubuntu -------------------------------------- .. grep-dctrl -n -sBuild-Depends -r ^ ../../../../varnish-cache-debian/control | tr -d '\n' | awk -F,\ '{ for (i = 0; ++i <= NF;) { sub (/ .*/, "", $i); print "* `" $i "`"; }}' | egrep -v '(debhelper)' In order to build Varnish from source you need a number of packages installed. 
On a Debian or Ubuntu system, use this command to install them (replace ``sudo apt-get install`` if needed):: sudo apt-get install \ make \ automake \ autotools-dev \ libedit-dev \ libjemalloc-dev \ libncurses-dev \ libpcre2-dev \ libtool \ pkg-config \ python3-docutils \ python3-sphinx \ cpio Optionally, to rebuild the svg files:: sudo apt-get install graphviz Recommended, in particular if you plan on building custom vmods:: sudo apt-get install autoconf-archive Optionally, to pull from a repository:: sudo apt-get install git Then continue `Compiling Varnish`_ Build dependencies on Red Hat / CentOS -------------------------------------- .. gawk '/^BuildRequires/ {print "* `" $2 "`"}' ../../../redhat/varnish.spec | sort | uniq | egrep -v '(systemd)' in the following shell commands, replace ``sudo yum install`` if needed. Install sphinx * On Red Hat / CentOS 8, sphinx is not included in the default repositories, so execute these steps to include it from the powertools repository:: sudo dnf install -y 'dnf-command(config-manager)' sudo yum config-manager --set-enabled powertools sudo yum install -y diffutils python3-sphinx * On Red Hat / CentOS <= 7, install sphinx:: sudo yum install -y python-sphinx The following step should conclude installation of the required packages:: yum install -y \ make \ autoconf \ automake \ jemalloc-devel \ libedit-devel \ libtool \ libunwind-devel \ ncurses-devel \ pcre2-devel \ pkgconfig \ python3-docutils \ cpio Optionally, to rebuild the svg files:: yum install graphviz Optionally, to pull from a repository:: yum install git .. XXX autoconf-archive ? is this any helpful on the notoriously outdated Redhats? Then continue `Compiling Varnish`_ Build dependencies on MacOS --------------------------- To compile varnish on MacOS, these steps should install the required dependencies: * Install ``xcode`` via the App Store * Install dependencies via `brew`:: brew install \ autoconf \ automake \ pkg-config \ libtool \ docutils \ sphinx-doc * Add sphinx to PATH as advised by the installer:: PATH="/usr/local/opt/sphinx-doc/bin:$PATH" Then continue `Compiling Varnish`_ Build dependencies on Alpine Linux ---------------------------------- As of Alpine 3, these steps should install the required dependencies: * Add the `Alpine Community Repository`_ * Install dependencies:: apk add -q \ autoconf \ automake \ build-base \ ca-certificates \ cpio \ gzip \ libedit-dev \ libtool \ libunwind-dev \ linux-headers \ pcre2-dev \ py-docutils \ py3-sphinx \ tar \ sudo Optionally, to rebuild the svg files:: apk add -q graphviz Optionally, to pull from a repository:: apk add -q git Then continue `Compiling Varnish`_, using the ``--with-unwind`` ``configure`` option. .. _Alpine Community Repository: https://wiki.alpinelinux.org/wiki/Enable_Community_Repository Build dependencies on a SmartOS Zone ------------------------------------ As of SmartOS pkgsrc 2019Q4, install the following packages:: pkgin in autoconf automake editline libtool ncurses \ pcre2 python37 py37-sphinx py37-docutils gmake gcc8 pkg-config *Note:* you will probably need to add ``/opt/local/gcc8/bin`` to ``PATH`` in order to have ``gcc`` available. Optionally, to rebuild the svg files:: pkgin in graphviz Optionally, to pull from a repository:: pkgin in git Building on Solaris and other Solaris-ish OSes ---------------------------------------------- Building with gcc should be straight forward, as long as the above requirements are installed. 
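Before running ``configure`` it can save time to verify that the tools
installed above are the ones actually found first in ``PATH``.  A minimal
sanity check, assuming the SmartOS package paths mentioned in the previous
section, could be::

    # Make sure the pkgsrc gcc is picked up before anything else
    PATH="/opt/local/gcc8/bin:$PATH"
    export PATH

    # Both should report the versions you just installed
    gcc --version
    gmake --version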
By convention, consider installing Varnish under `/opt/local` using:: ./configure \ --prefix=/opt/local \ --mandir=/opt/local/man Alternatively, building with Solaris Studio 12.4 should work considering the following recommendations: * have GNU `nm` in `$PATH` before Solaris `nm` * Provide compiler flags for `configure` to include paths under which dependencies are installed. Example for `/opt/local`:: ./configure \ --prefix=/opt/local \ --mandir=/opt/local/man \ CPPFLAGS="-I/opt/local/include" \ CFLAGS="-m64" \ LDFLAGS="-L/opt/local/lib -R/opt/local/lib" Compiling Varnish ----------------- The configuration will need the dependencies above satisfied. Once that is taken care of:: cd varnish-cache sh autogen.sh sh configure make The `configure` script takes some arguments, but more likely than not you can forget about that for now, almost everything in Varnish can be tweaked with run time parameters. Before you install, you may want to run the test suite, make a cup of tea while it runs, it usually takes a couple of minutes:: make check Don't worry if one or two tests fail. Some of the tests are a bit too timing sensitive (Please tell us which so we can fix them). However, if a lot of them fail, and in particular if the `b00000.vtc` test fails, something is horribly wrong. You will get nowhere without figuring this one out. Installing ---------- And finally, the true test of a brave heart: ``sudo make install`` Varnish will now be installed in ``/usr/local``. The ``varnishd`` binary is in `/usr/local/sbin/varnishd`. To make sure that the necessary links and caches of the most recent shared libraries are found, run ``sudo ldconfig``. varnish-7.5.0/doc/sphinx/installation/platformnotes.rst000066400000000000000000000050241457605730600234210ustar00rootroot00000000000000.. Copyright (c) 2012-2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Platform specific notes ------------------------ On some platforms it is necessary to adjust the operating system before running Varnish on it. The systems and steps known to us are described in this section. Transparent hugepages on Redhat Enterprise Linux 6 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ On RHEL6 Transparent Hugepage kernel support is enabled by default. This is known to cause sporadic crashes of Varnish. It is recommended to disable transparent hugepages on affected systems. This can be done with ``echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled`` (runtime) or by adding "transparent_hugepage=never" to the kernel boot line in the "/etc/grub.conf" file (persistent). On Debian/Ubuntu systems running 3.2 kernels the default value is "madvise" and does not need to be changed. OpenVZ ~~~~~~ It is possible, but not recommended for high performance, to run Varnish on virtualised hardware. Reduced disk and network -performance will reduce the performance a bit so make sure your system has good IO performance. If you are running on 64bit OpenVZ (or Parallels VPS), you must reduce the maximum stack size before starting Varnish. The default allocates too much memory per thread, which will make Varnish fail as soon as the number of threads (traffic) increases. Reduce the maximum stack size by adding ``ulimit -s 256`` before starting Varnish in the init script. 
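For illustration only, since init systems differ and the path and the
listen/backend addresses below are placeholders, the relevant part of such
an init script might look like::

    #!/bin/sh
    # Reduce the maximum stack size (in kilobytes) before varnishd is
    # started, so the per-thread allocation stays small under OpenVZ.
    ulimit -s 256
    /usr/local/sbin/varnishd -a :6081 -b 127.0.0.1:8080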
TCP keep-alive configuration ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ On some Solaris, FreeBSD and OS X systems, Varnish is not able to set the TCP keep-alive values per socket, and therefore the *tcp_keepalive_* Varnish runtime parameters are not available. On these platforms it can be beneficial to tune the system wide values for these in order to more reliably detect remote close for sessions spending long time on waitinglists. This will help free up resources faster. Systems that does not support TCP keep-alive values per socket include: - Solaris releases prior to version 11 - FreeBSD releases prior to version 9.1 - OS X releases prior to Mountain Lion On platforms with the necessary socket options the defaults are set to: - `tcp_keepalive_time` = 600 seconds - `tcp_keepalive_probes` = 5 - `tcp_keepalive_intvl` = 5 seconds Note that Varnish will only apply these run-time parameters so long as they are less than the system default value. .. XXX:Maybe a sample-command of using/setting/changing these values? benc varnish-7.5.0/doc/sphinx/installation/prerequisites.rst000066400000000000000000000013101457605730600234220ustar00rootroot00000000000000.. Copyright (c) 2010-2014 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Prerequisites ============= In order for you to install Varnish you must have the following: * A recent, preferably server grade, computer. * A fairly modern and 64 bit version of either - Linux - FreeBSD, or - Solaris (x86 only). * Root access. Varnish can be installed on other UNIX systems as well, but it is not extensively or systematically tested by us on other systems than the above. Varnish is, from time to time, said to work on: * 32 bit versions of the before-mentioned systems, * OS X, * NetBSD, * OpenBSD, and * Windows with Cygwin. varnish-7.5.0/doc/sphinx/phk/000077500000000000000000000000001457605730600160525ustar00rootroot00000000000000varnish-7.5.0/doc/sphinx/phk/10goingon50.rst000066400000000000000000000244341457605730600205610ustar00rootroot00000000000000.. Copyright (c) 2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_10goingon50: ======================== Varnish - 10 going on 50 ======================== Ten years ago, Dag-Erling and I were busy hashing out the big lines of Varnish. Hashing out had started on a blackboard at University of Basel during the `EuroBSDcon 2005 `_ conference, and had continued in email and IRC ever since. At some point in February 2006 Dag-Erling laid down the foundations of our Subversion and source tree. The earliest fragment which have survived the conversion to Git is subversion commit number 9:: commit 523166ad2dd3a65e3987f13bc54f571f98453976 Author: Dag Erling Smørgrav Date: Wed Feb 22 14:31:39 2006 +0000 Additional subdivisions. We consider this the official birth-certificate of the Varnish Cache FOSS project, and therefore we will celebrate the 10 year birthday of Varnish in a couple of weeks. We're not sure exactly how and where we will celebrate this, but follow Twitter user `@varnishcache `_ if you want don't want to miss the partying. -------- VCLOCC1 -------- One part of the celebration, somehow, sometime, will be the "VCL Obfuscated Code Contest #1" in the same spirit as the `International Obfuscated C Code Contest `_. True aficionados of Obfuscated Code will also appreciate this amazing `Obfuscated PL/1 `_. 
The official VCLOCC1 contest rules are simple: * VCL code must work with Varnish 4.1.1 * As many Varnishd instances as you'd like. * No inline-C allowed * Any VMOD you want is OK * You get to choose the request(s) to send to Varnishd * If you need backends, they must be simulated by varnishd (4.1.1) instances. * *We* get to publish the winning entry on the Varnish project home-page. The *only* thing which counts is the amazing/funny/brilliant VCL code *you* write and what it does. VMODs and backends are just scaffolding which the judges will ignore. We will announce the submission deadline one month ahead of time, but you are more than welcome to start already now. -------- Releases -------- Our 10 year anniversary was a good excuse to take stock and look at the way we work, and changes are and will be happening. Like any respectable FOSS project, the Varnish project has never been accused, or guilty, of releasing on the promised date. Not even close. With 4.1 not even close to close. Having been around that block a couple of times, (*cough* FreeBSD 5.0 *cough*) I think I know why and I have decided to put a stop to it. Come hell or high water [#f1]_, Varnish 5.0 will be released September 15th 2016. And the next big release, whatever we call it, will be middle of March 2017, and until we change our mind, you can trust a major release of Varnish to happen every six months. Minor releases, typically bugfixes, will be released as need arise, and these should just be installable with no configuration changes. Sounds wonderful, doesn't it ? Now you can plan your upgrades. But nothing comes free: Until we are near September, we won't be able to tell you what Varnish 5 contains. We have plans and ideas for what *should* be there, and we will work to reach those milestones, but we will not hold the release for "just this one more feature" if they are not ready. If it is in on September 15th, it will be in the release, if not, it wont. And since the next release is guaranteed to come six months later, it's not a catastrophe to miss the deadline. So what's the problem and why is this draconian solution better ? Usually, when FOSS projects start, they are started by "devops", Varnish certainly did: Dag-Erling ran a couple of sites with Varnish, as did Kristian, and obviously Anders and Audun of VG did as well, so finding out if you improved or broke things during development didn't take long. But as a project grows, people gravitate from "devops" to "dev", and suddenly we have to ask somebody else to "please test -trunk" and these people have their own calendars, and are not sure why they should test, or even if they should test, much less what they should be looking for while they test, because they have not been part of the development process. In all honesty, going from Varnish1 to Varnish4 the amount of real-life testing our releases have received *before* being released has consistently dropped [#f2]_. So we're moving the testing on the other side of the release date, because the people who *can* live-test Varnish prefer to have a release to test. We'll run all the tests we can in our development environments and we'll beg and cajole people with real sites into testing also, but we won't wait for weeks and months for it to happen, like we did with the 4.1 release. All this obviously changes the dynamics of the project, and it we find out it is a disaster, we'll change our mind. But until then: Two major releases a year, as clock-work, mid-September and mid-March. 
---------------- Moving to github ---------------- We're also moving the project to github. We're trying to find out a good way to preserve the old Trac contents, and once we've figured that out, we'll pull the handle on the transition. Trac is starting to creak in the joints and in particular we're sick and tired of defending it against spammers. Moving to github makes that Somebody Elses Problem. We also want to overhaul the project home-page and try to get a/the wiki working better. We'll keep you posted about all this when and as it happens. -------------------------------------------- We were hip before it was hip to be hipsters -------------------------------------------- Moving to github also means moving into a different culture. Githubs statistics are neat, but whenever you start to measure something, it becomes a parameter for optimization and competition, and there are people out there who compete on github statistics. In one instance the "game" is simply to submit changes, no matter how trivial, to as many different projects as you can manage in order to claim that you "contribute to a lot of FOSS projects". There is a similar culture of "trophy hunting" amongst so-called "security-researchers" - who has most CVE's to their name? It doesn't seem to matter to them how vacuous the charge or how theoretical the "vulnerability" is, a CVE is a CVE to them. I don't want to play that game. If you are a contributor to Varnish, you should already have the nice blue T-shirt and the mug to prove it. (Thanks Varnish-Software!) If you merely stumble over a spelling mistake, you merely stumbled over a spelling mistake, and we will happily correct it, and put your name in the commit message. But it takes a lot more that fixing a spelling mistake to become recognized as "a Varnish contributor". Yeah, we're old and boring. Speaking of which... ---------------------------- Where does 50 come into it ? ---------------------------- On January 20th I celebrated my 50 year birthday, and this was a much more serious affair than I had anticipated: For the first time in my life I have received a basket with wine and flowers on my birthday. I also received books and music from certain Varnish users, much appreciated guys! Despite numerically growing older I will insist, until the day I die, that I'm a man of my best age. That doesn't mean I'm not changing. To be honest, being middle-aged sucks. Your body starts creaking and you get frustrated seeing people make mistakes you warned them against. But growing older also absolutely rulez, because your age allows you to appreciate that you live in a fantastic future with a lot of amazing changes - even if it will take a long time before progress goes too far. There does seem to be increasing tendency to want the kids off your lawn, but I think I can control that. But if not I hereby give them permission to steal my apples and yell back at me, because I've seen a lot of men, in particular in the technical world, grow into bitter old men who preface every utterance with "As *I* already said *MANY* years ago...", totally oblivious to how different the world has become, how wrong their diagnosis is and how utterly useless their advice is. I don't want to end up like that. From now on my basic assumption is that I'm an old ass who is part of the problem, and that being part of the solution is something I have to work hard for, rather than the other way around. 
In my case, the two primary physiological symptoms of middle age is that after 5-6 hours my eyes tire from focusing on the monitor and that my mental context-switching for big contexts is slower than it used to be. A couple of years ago I started taking "eye-breaks" after lunch. Get away from the screen, preferably outside where I could rest my eyes on stuff further away than 40cm, then later in the day come back and continue hacking. Going forward, this pattern will become more pronounced. The amount of hours I work will be the same, but I will be splitting the workday into two halves. You can expect me to be at my keyboard morning (08-12-ish EU time) and evening (20-24-ish EU time) but I may be doing other stuff, away from the keyboard and screen, during the afternoon. Starting this year I have also changed my calendar. Rather than working on various projects and for various customers in increments of half days, I'm lumping things together in bigger units of days and weeks. Anybody who knows anything about process scheduling can see that this will increase throughput at the cost of latency. The major latency impact is that one of the middle weeks of each month I will not be doing Varnish. On the other hand, all the weeks I do work on Varnish will now be full weeks. And with those small adjustments, the Varnish project and I are ready to tackle the next ten years. Let me conclude with a big THANK YOU! to all Contributors and Users of Varnish, for making the first 10 years more amazing than I ever thought FOSS development could be. Much Appreciated! *phk* .. rubric:: Footnotes .. [#f1] I've always wondered about that expression. Is the assumption that if *both* hell *and* high water arrives at the same time they will cancel out ? .. [#f2] I've seriously considered if I should start a porn-site, just to test Varnish, but the WAF of that idea was well below zero. varnish-7.5.0/doc/sphinx/phk/503aroundtheworld.rst000066400000000000000000000101001457605730600220650ustar00rootroot00000000000000.. Copyright (c) 2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_503aroundtheworld: ================================ The 503's heard around the world ================================ Yesterday, june 8th 2021, for about an hour, the world saw things like: .. image:: fastly_503.png (The alert reader will notice that it says "Mediation" instead of "Meditation". I dont know why, but I know who to ask, and I will.) People have, somewhat justifiably jumped on this error message, complaining about how terse it is, for instance `Max Rozen `_ who makes some good and sensible suggestions about good and sensible error messages. But all his suggestions require some amount of ambient information, information which we in the Varnish Cache project do not have and never can have. When Varnish gets to the bottom of the `builtin.vcl` and errors out, we dont know where that Varnish instance runs, who it runs for, what information it serves, who it serves that information to and most crucially, what information can safely be shared with the clients. This is why the default "503 guru meditation" only mentions the `XID`: The XID is a nonce, for all practical purposes a random number, but one which allows the persons responsible for that varnish instance to dig through their logs and find out what happened to this specific request. 
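To illustrate how the XID is used in practice (the number below is made
up), the people responsible for the Varnish instance can pick the matching
transaction out of the log with ``varnishlog``, provided the records are
still in the shared memory log::

    # -d starts at the head of the log, so older records are included
    varnishlog -d -q 'vxid == 98765'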
The other thing I want to note, is that the one thing you absolutely do not want to happen, is a "priority inversion", where it takes more CPU, database accesses or bandwidth to produce the error message, than it would have to produce the intended result. This error message is intended to be in all respects minimal. I'm pretty sure I have seen normal 503s from Fastly's CDN, they dont look like this, so something must have gone badly wrong for Fastly yesterday. Even though I'm sure the XID did not in any way help them debug this specific problem, I am happy to see that our "error message of last resort" did what it was supposed to do, as it was supposed to do it. The first 503 stress test ------------------------- When we release the first version of Varnish the text "Varnish cache server" was link to the project homepage. That link acted as a "viral marketing campaign" for us: When web-people saw the 503 on another webpage, often a competitors webpage, they clicked on the link to learn what software they were running. And since everybody usually makes a couple of mistakes trying to get Varnish running for the first time, more and more web-people learned about Varnish Cache. So word about Varnish Cache spread around the globe. All the way to China. Where the Chinese State Railroads rolled out Varnish Cache. And made some kind of beginners mistake. A lot of chinese web-people learned about Varnish Cache that day. As did a lot of other chinese people, who instead of a train schedules and train tickets, got a few lines of english, which the vast majority of them did not read or understand. But there were also a link. They did what any sensible web-user would do. They clicked on the link. And dDoS'ed the Varnish Cache project server. ... and the Redpill-Linprox Data Center, where that server lived. ... and the Norvegian ISPs that announced the prefix of that data center. In fact, they effectively dDoS'ed Norway. (Update: I'm now being told by people from Norway that this did not dDoS that much. As I live in Denmark I obviously only have this saga on second hand. (The only effect I experienced myself was that the Trac server was unreachable). Somebody has exagerated the story along the way. However, this did prompt me to track down the precise timing: Varnish release 2.1.2, May 2010 has the link, release 2.1.3 from July same year does not.) And that, my friends, is why the error message of last resort in Varnish Cache does not even contain a link any more. */phk* PS: The title of this post is of course an homage to `John R. Garman's famous article `_. varnish-7.5.0/doc/sphinx/phk/VSV00001.rst000066400000000000000000000152361457605730600176520ustar00rootroot00000000000000.. Copyright (c) 2017 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_vsv00001: Yah! A security issue - finally! ================================= As embarrased as I am to find out that after 35 years of writing C-programs I *still* dont understand signed/unsigned variables, I am also incredibly proud that it took eleven years before Varnish had a major security incident. One side-effect of this episode is that the ink is still wet on most of the policies for handling security issues in the Varnish Project. We are, in the inimitable words of Amanda F. 
Palmer, *"guilty of making shit up as you go along."* So here is what we made up: VWV - Vulnerability Workaround in VCL ------------------------------------- Rolling a new Varnish release, even with an trivial one-line patch is not a fast operation, getting from patch to packages rolled and pushed into operating system respositories takes at least several days, provided you can get everybodys attention. Being able to offer users a way to mitigate security issues in VCL bypasses all that red tape: The moment you have the advisory in hand, you can defend your Varnish instance. VCL mitigation will always be our priority number one. WLIC - We Love Inline C ----------------------- Brian W. Kernighan famously said that `he didn't like the programming language PASCAL `_ because *"There is no escape."* That quote is why Varnish got inline-C code from day one, to make sure there would always be an escape. However, recently we have been wondering if we should discontinue inline-C code, now that we have the much nicer and more structured VMOD facility. Well, that's all settled now: Inline-C stays, because it enabled us to craft the VCL-snippets to mitigate this "deep-code" security issue. VSV - Varnish Security Vulnerability ------------------------------------ To my utter surprise, I could not get an embargoed CVE number, without having to reveal what was going on. There are good and sane reasons for that, and I harbour no ill feelings against the people who refused. Their policies need to target the really buggy software, and history has shown that to be anything but easy. But being unable to get a CVE number when we needed one, left us without a handle, and there being no signs that this was a fluke, we decided to start our own numbering of Varnish Security Vulnerabilities. We are starting from VSV number 1, as this one is the first real flag day for us, and we have reserved VTC testnames `f%05d` to go with it. Five digits should be enough for everybody. As soon as possible, if possible, we will of course try to get a unique CVE number attached to each VSV, but the VSV will be our own primary handle. VIVU - Very Important Varnish Users ----------------------------------- I have been struggling with this problem for a long time, because sooner or later we would hit this event. Clearly some people deserve to get an early heads-up on security advisories, but who ? Any list big enough to be fair would also be too easy to sneak into for people who should not be on it. Such a list would also be a major head-ache for us to maintain, not to mention setting and judging the qualification criteria, and dealing with appeals from the ones we rejected. Instead I have decided that we are simply going to follow the money. People and companies who paid at least €1000 for a `Varnish Moral License `_ in the 12 months before the publication date, get advance warning of our security advisories. Those €1000 per year goes almost [#f1]_ 100% to maintain, develop and improve Varnish, so even if there are no security advisories, the money will be well spent beneficially for a Varnish user. Of course nothing prevents Wile E. Hacker from also paying €1000 every year to receive advance notice, but I suspect it would sting a bit to know that your own money is being used to prevent the event you are gambling on - and that it might take eleven years for the ball to stop. 
But we will *also* maintain a VIVU-list, for people and companies who contribute materially to the Varnish Project in some way, (documentation, code, machines, testing, meeting-rooms...) or people we judge should be there for some other good reason. If you feel you should be on the VIVU list, drop me an email, don't forget to include your PGP key. VQS - Varnish Quality Software ------------------------------ From the very first line of code, Varnish has also been a software quality experiment, I wanted to show the world that software does not have to suck. Strange as it sounds, now that we finally had a major security issue, I feel kind of vindicated, in a strange kind of *"The house burned down but I'm happy to know that the fire-alarm did work"* kind of way. We have had some CVE's before, but none of them were major issues. The closest was probably `CVE-2013-4484 `_ four years ago, which in many ways was like VSV00001, but it only affected users with a not so common VCL construct. VSV00001 on the other hand exposes all contemporary Varnishes, it doesn't get any more major than that. But it took eleven years [#f2]_ to get to that point, primarily because Varnish is a software quality experiment. How unusual our level of software quality is, was brought home by the response to my request for a CVE number under embargo: *"I can do an embargoed CVE, but I'm not comfortable assigning one blindly to someone who doesn't have a history of requesting them."* - that sounds like praise by faint damnation to me. Anyway, I will not promise that we will have no major security issues until the year 2028, but I'll do my damnest to make it so. VDR - Varnish Developers Rock ----------------------------- When this issue surfaced I had contractors and moving boxes all around me and barely had workable internet connectivity in the new house. The usual gang of Varnish developers did a smashing job, largely on their own, and I am incredibly thankful for that. Much appreciated guys! And also a big round of thanks to the sponsors and VML contributors who have had patience with me trying to do the right thing, even if it took a while longer: I hope you agree that it has been worth it. *phk* PS: This was written before the public announcement. .. rubric:: Footnotes .. [#f1] There is practically no overhead on the VML, only the banking fees to receive the payments. .. [#f2] It can be argued that we should only count until the bug was introduced in the codebase, rather than until it was discovered. In that case it is not eleven years but "only" eight years. varnish-7.5.0/doc/sphinx/phk/VSV00003.rst000066400000000000000000000136341457605730600176540ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_vsv00003: Here we are again - VSV00003 in perspective =========================================== So it probably helps if you first re-read what I wrote two years ago about our first :ref:`first major security hole. ` Statistically, it is incredibly hard to derive any information from a single binary datapoint. If some event happens after X years, and that is all you know, there is no way to meaningfully tell if that was a once per millenium event arriving embarrassingly early or a biannual event arriving fashionably late. We now have two datapoints: `VSV00001 `_ happened after 11 year and `VSV00003 `_ after 13 years [#f1]_. 
That allows us to cabin the expectations for the discovery of major security problems in Varnish Cache to "probably about 3 per decade" [#f2]_. Given that one of my goals with Varnish Cache was to see how well systems-programming in C can be done in the FOSS world [#f3]_ and even though we are doing a lot better than most of the FOSS world, that is a bit of a disappointment [#f4]_. The nature of the beast ----------------------- VSV00003 is a buffer overflow, a kind of bug which could have manifested itself in many different ways, but here it runs directly into the maw of an `assert` statement before it can do any harm, so it is "merely" a Denial-Of-Service vulnerability. A DoS is of course bad enough, but not nearly as bad as a remote code execution or information disclosure vulnerability would have been. That, again, validates our strategy of littering our source code with asserts, about one in ten source lines contain an assert, and even more so that we leaving the asserts in the production code. I really wish more FOSS projects would pick up this practice. How did we find it ------------------ This is a bit embarrasing for me. For ages I have been muttering about wanting to "fuzz"[#f5]_ Varnish, to see what would happen, but between all the many other items on the TODO list, it never really bubbled to the top. A new employee at Varnish-Software needed a way to get to know the source code, so he did, and struck this nugget of gold far too fast. Hat-tip to Alf-André Walla. Dealing with it --------------- Martin Grydeland from Varnish Software has been the Senior Wrangler of this security issue, while I deliberatly have taken a hands-off stance, a decision I have no reason to regret. Thanks a lot Martin! As I explained at length in context of VSV00001, we really like to be able to offer a VCL-based mitigation, so that people who for one reason or another cannot update right away, still can protect themselves. Initially we did not think that would even be possible, but tell that to a German Engineer... Nils Goroll from UPLEX didn't quite say *"Halten Sie Mein Bier…"*, but he did produce a VCL workaround right away, once again using the inline-C capability, to frob things which are normally "No User Serviceable Parts Behind This Door". Bravo Nils! Are we barking up the wrong tree ? ---------------------------------- An event like this is a good chance to "recalculate the route" so to speak, and the first question we need to answer is if we are barking up the wrong tree? Does it matter in the real world, that Varnish does not spit out a handful of CVE's per year ? Would the significant amount of time we spend on trying to prevent that be better used to extend Varnish ? There is no doubt that part of Varnish Cache's success is that it is largely "fire & forget". Every so often I get an email from "the new guy" who just found a Varnish instance which has been running for years, unbeknownst to everybody still in the company. There are still Varnish 2.x and 3.x out there, running serious workloads without making a fuzz about it. But is that actually a good thing ? Dan Geer thinks not, he has argued that all software should have a firm expiry date, to prevent cyberspace ending as a "Cybersecurity SuperFund Site". So far our two big security issues have both been DoS vulnerabilities, and Varnish recovers as soon as the attack ends, but what if the next one is a data-disclosure issue ? 
When Varnish users are not used to patch their Varnish instance, would they even notice the security advisory, or would they obliviously keep running the vulnerable code for years on end ? Of course, updating a software package has never been easier, in a well-run installation it should be a non-event which happens automatically. And in a world where August 2019 saw a grand total of 2004 CVEs, how much should we (still) cater to people who "fire & forget" ? And finally we must ask ourselves if all the effort we spend on code quality is worth it, if we still face a major security issue as often as every other year ? We will be discussing these and many other issues at our next VDD. User input would be very welcome. *phk* .. rubric:: Footnotes .. [#f1] I'm not counting `VSV00002 `_, it only affected a very small fraction of our users. .. [#f2] Sandia has a really fascinating introduction to this obscure corner of statistics: `Sensitivity in Risk Analyses with Uncertain Numbers `_ .. [#f3] As distinct from for instance Aerospace or Automotive organizations who must set aside serious resources for Quality Assurance, meet legal requirements. .. [#f4] A disappointment not in any way reduced by the fact that this is a bug of my own creation. .. [#f5] "Fuzzing" is a testing method to where you send random garbage into your program, to see what happens. In practice it is a lot more involved like that if you want it to be an efficient process. John Regehr writes a lot about it on `his blog "Embedded in Academia" `_ and I fully agree with most of what he writes. varnish-7.5.0/doc/sphinx/phk/apispaces.rst000066400000000000000000000133411457605730600205560ustar00rootroot00000000000000.. Copyright (c) 2017 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_api_spaces: API spaces ========== The reason you cant remember hearing about "API spaces" at university is that I just made that concept up, as a title for this piece. We need a name for the collision where APIs meet namespaces. At some point in their career, most C programmers learn that ``j0``, ``j1``, ``y0`` and ``y1`` are names to avoid whereas ``j2`` and ``y2`` up to, but not including ``jn`` and ``yn`` are OK. The reason is that somebody back when I was a child thought it would be really neat if the math library supported Bessel functions, without thinking about ```` as an API which had to coexist in the flat namespace of the C language along many other APIs. One of the big attractions of Object Oriented programming is that it solves exactly that problem: Nobody is confused about ``car->push()`` and ``stack->push()``. But Varnish is written in C which has a flat namespace and we must live with it. From the very start, we defined cadastral boundaries in the flat namespace by assigning VTLA prefixes to various chunks of code. ``VSB_something`` has to do with the sbufs we adopted from FreeBSD, ``VGC_something`` is Vcc Generated C-source and so on. Mostly we have stuck with the 'V' prefix, which for some reason is almost unused everywhere else, but we also have prominent exceptions. ``WS_something`` for workspaces for instance. As long as all the C-code was in-project, inconsistencies and the precise location of function prototypes didn't matter much, it was "just something you had to know". 
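To see one of these cadastral boundaries for yourself, a crude but
effective trick, assuming a source checkout of varnish-cache, is to grep
for the prefix::

    # Which header "owns" the VSB_ prefix ?
    grep -l 'VSB_new' include/*.h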
Now that we have VMODs, and even more so, now that we want to provide some semblance of API stability for VMODs, we have a lot of sorting and some renaming to do, in order to clearly delineate APIs within our flat namespace and our include files. Frederick P. Brooks points out in his classic "The Mythical Man-Month", that is the difference between a program-product and a programming-product, and he makes the case that the effort required tripples, going from the former to the latter. Having spent some weeks on what I thought would be a three day task I suspect that his was an underestimate. I will now try to lay out what I think will be our policy on APIs and name-space sharing going forward, but please understand that this is mostly just an aspirational goal at this point. General namespace rules ----------------------- 1. Each API or otherwise isolated section of code gets a unique prefix, ending with an underscore, (``VSB_``, ``V1F_`` etc.) 2. Public symbols has upper case prefix. 3. Private symbols use prefix in lower case, both as ``static`` symbols in source files, and when exposed to other source files in the same section of code. 4. Friends-With-Benefit symbols have an additional underscore after the prefix: ``FOO__bar()`` and are only to be used with explicit permission, which should be clearly documented in the relevant #include file. VMOD API/ABI levels ------------------- Vmods can be written against one of three API/ABI levels, called respectively ``VRT``, ``PACKAGE`` and ``SOURCE``, defined in detail below. A VMOD which restricts itself to the ``VRT`` API/ABI gets maximum stability and will, we hope, work without recompilation across many major and minor releases of Varnish. A VMOD which uses the ``PACKAGE`` API, will likely keep working across minor releases of varnish releases, but will usually need to be recompiled for new major releases of varnish. A VMOD which uses the ``SOURCE`` API is compiled against one specific version of Varnish, and will not work with another version until recompiled. The VMOD VRT API/ABI -------------------- This API space could also have been called 'inline', because it is basically what you see in the C-source generated by VCC: | Include files allowed: | | ``#include "vdef.h"`` | ``#include "vrt.h"`` | ``#include "vrt_obj.h"`` | ``#include "vcl.h"`` Any private and Friends-With-Benefits symbols are off-limits to VMODs, (it is usually stuff VCC needs for the compiled code, and likely as not, you will crash if you mess with it.) The ``"vrt.h"`` contains two #defines which together defines the level of this API: | ``#define VRT_MAJOR_VERSION 6U`` | ``#define VRT_MINOR_VERSION 2U`` A snapshot of these will be automatically compiled into the VMOD shared library, and they will be checked for compatibility when the VMOD is imported by the VCL compiler. The VMOD PACKAGE API/ABI ------------------------ This API space provides access to everything in the ``VRT`` API space plus the other exposed and supported APIs in varnishd. | Include files allowed: | | ``#include "cache.h" // NB: includes vdef.h and vrt.h`` | ``#include "cache_backend.h"`` | ``#include "cache_director.h"`` | ``#include "cache_filter.h"`` | ``#include "waiter/waiter.h"`` Any private and Friends-With-Benefits symbols are off-limits to VMODs. In addition to the two-part VRT version, ``"cache.h"`` will contain two #defines for levels of this API. 
| ``#define PACKAGE_MAJOR_VERSION 1U`` | ``#define PACKAGE MINOR_VERSION 3U`` Compile-time snapshots of these will be checked, along with their VRT cousins be checked for compatibility on VMOD import. The VMOD SOURCE API/ABI ----------------------- This API space provides access to private parts of varnishd and its use is highly discouraged, unless you absolutely have to, You can #include any file from the varnish source tree and use anything you find in them - but don't come crying to us if it all ends in tears: No refunds at this window. A hash value of all the .h files in the source tree will be compiled into the VMOD and will be checked to match exactly on VMOD import. *phk* varnish-7.5.0/doc/sphinx/phk/autocrap.rst000066400000000000000000000061771457605730600204350ustar00rootroot00000000000000.. Copyright (c) 2010-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_autocrap: ==================================== Did you call them *autocrap* tools ? ==================================== Yes, in fact I did, because they are the worst possible non-solution to a self-inflicted problem. Back in the 1980'ies, the numerous mini- and micro-computer companies all jumped on the UNIX band-wagon, because it gave them an operating system for their hardware, but they also tried to "distinguish" themselves from the competitors, by "adding value". That "value" was incompatibility. You never knew where they put stuff, what arguments the compiler needed to behave sensibly, or for that matter, if there were a compiler to begin with. So some deranged imagination, came up with the idea of the ``configure`` script, which sniffed at your system and set up a ``Makefile`` that would work. Writing configure scripts was hard work, for one thing you needed a ton of different systems to test them on, so copy&paste became the order of the day. Then some even more deranged imagination, came up with the idea of writing a script for writing configure scripts, and in an amazing and daring attempt at the "all time most deranged" crown, used an obscure and insufferable macro-processor called ``m4`` for the implementation. Now, as it transpires, writing the specification for the configure producing macros was tedious, so somebody wrote a tool to... ...do you detect the pattern here ? Now, if the result of all this crap, was that I could write my source-code and tell a tool where the files were, and not only assume, but actually *trust* that things would just work out, then I could live with it. But as it transpires, that is not the case. For one thing, all the autocrap tools add another layer of version-madness you need to get right before you can even think about compiling the source code. Second, it doesn't actually work, you still have to do the hard work and figure out the right way to explain to the autocrap tools what you are trying to do and how to do it, only you have to do so in a language which is used to produce M4 macro invocations etc. etc. In the meantime, the UNIX diversity has shrunk from 50+ significantly different dialects to just a handful: Linux, \*BSD, Solaris and AIX and the autocrap tools have become part of the portability problem, rather than part of the solution. Amongst the silly activities of the autocrap generated configure script in Varnish are: * Looks for ANSI-C header files (show me a system later than 1995 without them ?) * Existence and support for POSIX mandated symlinks, (which are not used by Varnish btw.) 
* Tests, 19 different ways, that the compiler is not a relic from SYS III days. (Find me just one SYS III running computer with an ethernet interface ?) * Checks if the ISO-C and POSIX mandated ``cos()`` function exists in ``libm`` (No, I have no idea either...) &c. &c. &c. Some day when I have the time, I will rip out all the autocrap stuff and replace it with a 5 line shellscript that calls ``uname -s``. Poul-Henning, 2010-04-20 varnish-7.5.0/doc/sphinx/phk/backends.rst000066400000000000000000000104161457605730600203600ustar00rootroot00000000000000.. Copyright (c) 2010-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_backends: =============================== What do you mean by 'backend' ? =============================== Given that we are approaching Varnish 3.0, you would think I had this question answered conclusively long time ago, but once you try to be efficient, things get hairy fast. One of the features of Varnish we are very fundamental about, is the ability to have multiple VCLs loaded at the same time, and to switch between them instantly and seamlessly. So imagine you have 1000 backends in your VCL, not an unreasonable number, each configured with health-polling. Now you fiddle your vcl_recv{} a bit and load the VCL again, but since you are not sure which is the best way to do it, you keep both VCL's loaded so you can switch forth and back seamlessly. To switch seamlessly, the health status of each backend needs to be up to date the instant we switch to the other VCL. This basically means that either all VCLs poll all their backends, or they must share, somehow. We can dismiss the all VCLs poll all their backends scenario, because it scales truly horribly, and would pummel backends with probes if people forget to vcl.discard their old dusty VCLs. Share And Enjoy =============== In addition to health-status (including the saint-list), we also want to share cached open connections and stats counters. It would be truly stupid to close 100 ready and usable connections to a backend, and open 100 other, just because we switch to a different VCL that has an identical backend definition. But what is an identical backend definition in this context? It is important to remember that we are not talking physical backends: For instance, there is nothing preventing a VCL for having the same physical backend declared as 4 different VCL backends. The most obvious thing to do, is to use the VCL name of the backend as identifier, but that is not enough. We can have two different VCLs where backend "b1" points at two different physical machines, for instance when we migrate or upgrade the backend. The identity of the state than can be shared is therefore the triplet: {VCL-name, IPv4+port, IPv6+port} No Information without Representation ===================================== Since the health-status will be for each of these triplets, we will need to find a way to represent them in CLI and statistics contexts. As long as we just print them out, that is not a big deal, but what if you just want the health status for one of your 1000 backends, how do you tell which one ? The syntax-nazi way of doing that, is forcing people to type it all every time:: backend.health b1(127.0.0.1:8080,[::1]:8080) That will surely not be a hit with people who have just one backend. 
I think, but until I implement I will not commit to, that the solution is a wildcard-ish scheme, where you can write things like:: b1 # The one and only backend b1 or error b1() # All backends named b1 b1(127.0.0.1) # All b1s on IPv4 lookback b1(:8080) # All b1s on port 8080, (IPv4 or IPv6) b1(192.168.60.1,192.168.60.2) # All b1s on one of those addresses. (Input very much welcome) The final question is if we use shortcut notation for output from :ref:`varnishd(1)`, and the answer is no, because we do not want the stats-counters to change name because we load another VCL and suddenly need disabiguation. Sharing Health Status ===================== To avoid the over-polling, we define that maximum one VCL polls at backend at any time, and the active VCL gets preference. It is not important which particular VCL polls the backends not in the active VCL, as long as one of them do. Implementation ============== The poll-policy can be implemented by updating a back-pointer to the poll-specification for all backends on vcl.use execution. On vcl.discard, if this vcl was the active poller, it needs to walk the list of vcls and substitute another. If the list is empty the backend gets retired anyway. We should either park a thread on each backend, or have a poller thread which throws jobs into the work-pool as the backends needs polled. The pattern matching is confined to CLI and possibly libvarnishapi I think this will work, Until next time, Poul-Henning, 2010-08-09 varnish-7.5.0/doc/sphinx/phk/barriers.rst000066400000000000000000000136751457605730600204310ustar00rootroot00000000000000.. Copyright (c) 2010-2017 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_barriers: ============================ Security barriers in Varnish ============================ Security is a very important design driver in Varnish, more likely than not, if you find yourself thinking "Why did he do _that_ ? the answer has to do with security. The Varnish security model is based on some very crude but easy to understand barriers between the various components: .. code-block:: text .-->- provides ->---------------------------------------. | | | (ADMIN)--+-->- runs ----->---. | | | | | | |-->- cli_req -->---| v v '--<- cli_resp -<---| VCL MODULE | | | (OPER) | |reads | | | | | |runs | | | | .-<- create -<-. | .->- fork ->-. v | v |->- check -->-|-- MGT --| |-- VCC <- loads -| VSM |-<- write --<-' | '-<- wait -<-' | | TOOLS | | | | ^ | .-------------' | | | | | |writes | |reads | |->- fork ----->-. | | | | |->- cli_req -->-| | | VSM ----' |-<- cli_resp -<-| v | | '-<- wait -----<-| VCL.SO | | | | | | | | | |---->----- inherit --->------|--<-- loads -------' | |---->----- reads ---->------| | '----<----- writes ----<------|--<-- loads --------------------' | | | .--->-- http_req --->--. | .-->-- http_req --->--. (ANON) --| |-- CLD --| |-- (BACKEND) '---<-- http_resp --<--' '--<-- http_resp --<--' (ASCII-ART rules!) The really Important Barrier ============================ The central actor in Varnish is the Manager process, "MGT", which is the process the administrator "(ADMIN)" starts to get web-cache service. Having been there myself, I do not subscribe to the "I feel cool and important when I get woken up at 3AM to restart a dead process" school of thought, in fact, I think that is a clear sign of mindless stupidity: If we cannot get a computer to restart a dead process, why do we even have them ? 
The task of the Manager process is therefore not to cache web content, but to make sure there always is a process which does that, the Child "CLD" process. That is the major barrier in Varnish: All management happens in one process, all actual movement of traffic happens in another, and the Manager process does not trust the Child process at all. The Child process is in the totally unprotected domain: Any computer on the InterNet "(ANON)" can connect to the Child process and ask for some web-object. If John D. Criminal manages to exploit a security hole in Varnish, it is the Child process he subverts. If he carries out a DoS attack, it is the Child process he tries to fell. Therefore the Manager starts the Child with as low privilege as practically possible, and we close all file descriptors it should not have access to and so on. There are only three channels of communication back to the Manager process: An exit code, a CLI response or writing stuff into the shared memory file "VSM" used for statistics and logging, and all of these are well defended by the Manager process. The Admin/Oper Barrier ====================== If you look at the top left corner of the diagram, you will see that Varnish operates with separate Administrator "(ADMIN)" and Operator "(OPER)" roles. The Administrator does things, changes stuff etc. The Operator keeps an eye on things to make sure they are as they should be. These days Operators are often scripts and data collection tools, and there is no reason to assume they are bugfree, so Varnish does not trust the Operator role; that is a pure one-way relationship. (Trick: If the Child process is run under user "nobody", you can allow marginally trusted operations personnel access to the "nobody" account (for instance using .ssh/authorized_keys2), and they will be able to kill the Child process, prompting the Manager process to restart it again with the same parameters and settings.) The Administrator has the final say, and of course, the administrator can decide under which circumstances that authority will be shared. Needless to say, if the system on which Varnish runs is not properly secured, the Administrator's monopoly of control will be compromised. All the other barriers ====================== There are more barriers, you can spot them by following the arrows in the diagram, but they are more sort of "technical" than "political" and generally try to guard against programming flaws as much as security compromise. For instance the VCC compiler runs in a separate child process, to make sure that a memory leak or other flaw in the compiler does not accumulate trouble for the Manager process. Hope this explanation helps you understand why Varnish is not just a single process like all other server programs. Poul-Henning, 2010-06-28 varnish-7.5.0/doc/sphinx/phk/bjarne.jpeg [binary JPEG image omitted]
[The header and opening of the next document were lost in the omitted binary data; its surviving text, on Brinch Hansen's locking-order arrows, continues here.] Along the way he came up with a trivial and practical way to guarantee that a given multiprogramming system was free of deadlocks: Draw the locking order and make sure all the arrows point to the right. When we started working with multi-core systems in FreeBSD, we were sure to have deadlocks in our future, and we adopted and expanded a facility called "WITNESS" originally written for BSDI, which keeps an eye on Brinch-Hansen's arrows in real time. Historically I have been pretty good at avoiding deadlocks, it seems to come naturally to me to think about locking order, but not everybody feels that way about them, and WITNESS has caught a lot of "Ohh, didn't think about *that*" situations over the years. It is no accident that Varnish has a very simple locking structure, but as we add more and more flexibility and extensibility to Varnish it grows a little here and there, and I managed to introduce a lock-order reversal the other day - my first in about five years, I think. Since I'm obviously getting old and slipping up here, I thought it was about time I carried out the Brinch-Hansen check on Varnish. I briefly pondered porting WITNESS into Varnish, but it's 3k lines and we have extremely good code coverage in our regression tests, so I decided to KISS and do it as post-processing. I have added default-off debug code to emit VSL "Witness" records, taught varnishtest how to enable that code, and added a small python script to process the records into a nice plot: .. image:: brinch_hansens_arrows_1.svg And lo and behold: All the arrows point to the right.
*phk* varnish-7.5.0/doc/sphinx/phk/brinch_hansens_arrows_1.svg [Graphviz-generated SVG omitted: the lock-order graph plotted from the Witness records, with nodes ROOT, lck_cli, lck_vbe, lck_vcl, lck_exp, lck_backend, lck_backend_tcp, lck_objhdr, lck_ban, lck_lru, lck_smp, lck_wq, lck_waiter, lck_wstat and lck_hcb, and every edge pointing the same way] varnish-7.5.0/doc/sphinx/phk/cheri1.rst000066400000000000000000000141611457605730600177620ustar00rootroot00000000000000.. _phk_cheri_1: How Varnish met CHERI 1/N ========================= I should warn you up front that Robert Watson has been my friend for a couple of decades, so I am by no means a neutral observer. But Robert is one of the smartest people I know, and when he first explained his ideas for hardware capabilities, I sold all my shares in sliced bread the very next morning. Robert's ideas grew to become CHERI, and if you have not heard about it yet, you should read up on it, now: https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/ The core idea in CHERI is that pointers are not integers, which means that you cannot randomly make up or modify pointers to point at random things any more, whatever your intentions might be. From a security point of view, this circumscribes several large classes of commonly used attack-vectors, and Microsoft Research found that CHERI stopped a full 43% of all the vulnerabilities they saw in 2019: https://msrc-blog.microsoft.com/2022/01/20/an_armful_of_cheris/ (Yes, we can pause while you sell all your shares in sliced bread.)
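To make the "pointers are not integers" point concrete, here is a small illustrative snippet. It is not from the Varnish sources; it assumes CheriBSD's ``cheriintrin.h`` header (for ``cheri_tag_get()``) and only does something interesting when built for the CheriABI target::

	#include <stdio.h>
	#include <stdint.h>
	#include <cheriintrin.h>	/* assumption: CheriBSD's CHERI helpers */

	int
	main(void)
	{
		char buf[16];
		char *good = buf;			/* derived from a valid capability */
		char *bad = (char *)(uintptr_t)0x1234;	/* conjured from an integer */

		/* The conjured pointer carries no valid capability tag */
		printf("good tag %d, bad tag %d\n",
		    (int)cheri_tag_get(good), (int)cheri_tag_get(bad));

		good[0] = 'x';	/* fine: within buf's bounds */
		bad[0] = 'x';	/* traps: untagged capability */
		return (0);
	}

On a conventional machine the last store is merely undefined behaviour; under CHERI the hardware guarantees it cannot happen.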
I have been eagerly waiting to see how my own Varnish HTTP Cache Software would fare under CHERI, because one of my goals with the Varnish software, was to turn the quality dial up to 11 to see if it made any real-life difference. Robert has graciously lent me a shell-account on one of his shiny new MORELLO machines, which rock an ARM64 prototype CPU with CHERI capabilites. In this sequence of rants I will sing the saga of "How Varnish meets CHERI" - as it happens. My hope is that it will inspire and help other software projects to take the CHERI-plunge, and to help convince ARM that "MORELLO" should become a permanent part of the ARM architecture. A very thin layer of Varnish ---------------------------- For those of you not familiar with Varnish, you will need to know: * Varnish is an afterburner cache to HTTP webservers, written in C and it clocks in around 100KLOC. * Around 20% of all webtraffic in the world pass through a Varnish instance. * You configure Varnish with a domain-specific programming language called VCL, which is translated to C source code, compiled to a shared library, dlopen(3)'ed and executed. * Varnish runs as two processes, a "manager" and a "child". The child process does not ``exec(2)`` after the ``fork(2)``. * The source code is written in a very paranoid style, around 10% of all lines are asserts, and there are other layers of paranoia on top of that, for instance we always check that ``close(2)`` returns zero. * We have 900+ test cases, which exercise 90%+ of our source lines. * In 16 years, we have had just a single "Oh Shit!" security issue. I still hate Autocrap tools --------------------------- Autocrap is a hack on a hack on a hack which really ruins software portability, but it is the "industry standard" so we use it also for Varnish, no matter how much I hate it. See: https://dl.acm.org/doi/abs/10.1145/2346916.2349257 Because a lot of software does not work in CHERI mode, there are two kinds of packages for CheriBSD: Regular and Cheri. See: https://ctsrd-cheri.github.io/cheribsd-getting-started/packages/index.html Autocrap does not grok that some packages end up in ``/usr/local`` and some in ``/usr/local64``, so the first thing I had to do, was to explain this:: export LIBTOOLIZE=/usr/local64/bin/libtoolize export PCRE2_LIBS=`/usr/local/bin/pcre2-config --libs8` export PCRE2_CFLAGS=`/usr/local/bin/pcre2-config --cflags` ${SRCDIR}/configure \ [the usual options] Things you just can't do with CHERI ----------------------------------- A long long time ago, I wrote a "persistent storage" module for Varnish, and in order to not rewrite too much of the rest of the source code, the architecture ended up files ``mmap(2)``'ed to a consistent address, containing data structures with pointers. The advent of memory space layout randomization as a bandaid for buffer-overflows (dont get me started!), made that architecture unworkable, and for close to a decade this "stevedore" has been named ``deprecated_persistent``. We still keep it around, because it helps us test the internal APIs, but it is obviously not going to work with CHERI so:: ${SRCDIR}/configure \ --without-persistent-storage \ [the usual options] Dont panic (quite as detailed) ------------------------------ Varnish has a built in panic-handler which dumps a lot of valuable information, so people dont need to send us 1TB coredumps. 
Part of the information dumped is a backtrace produced with ``libunwind``, which is not available in a CHERI version (yet), so:: ${SRCDIR}/configure \ --without-unwind \ [the usual options] [u]intptr_t is or isn't ? ------------------------- The two typedefs ``uintptr_t`` and ``intptr_t`` are big enough to hold a pointer so that you can write "portable" code which does the kind of integer-pointer-mis-math which CHERI prevents you from doing. In theory we should not have any ``[u]intptr_t`` in Varnish, since one of our quality policies is to never convert an integer to a pointer. But there are a couple of places where we have used them for "private" struct members, instead of unions. Those become the first stumbling block:: vsm.c:601:15: error: shift count >= width of type vd->serial = VSM_PRIV_LOW(vd->serial + 1); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~ The confusing message is because In CHERI, ``[u]intptr_t``, like pointers, are 16 bytes wide but in this case the integer-view is used as a bit-map. For now, I change them to ``uint64_t``, and put them on the TODO list. One of them is printed as part of the panic output:: VSB_printf(vsb, "priv2 = %zd,\n", vfe->priv2); But that doesn't work with the wider type, so:: VSB_printf(vsb, "priv2 = %jd,\n", (intmax_t)vfe->priv2); And with that Varnish compiles under CHERI, which we can check with:: % file bin/varnishd/varnishd bin/varnishd/varnishd: […] CheriABI […] First test-run -------------- Just to see how bad it is, we run the main test-scripts:: % cd bin/varnishtest % ./varnishtest -i -k -q tests/*.vtc […] 38 tests failed, 33 tests skipped, 754 tests passed That's not half bad… */phk* varnish-7.5.0/doc/sphinx/phk/cheri2.rst000066400000000000000000000113771457605730600177710ustar00rootroot00000000000000.. _phk_cheri_2: How Varnish met CHERI 2/N ========================= CHERI capabilities are twice the size of pointers, and Varnish not only uses a lot of pointers per request, it is also stingy with RAM, because it is not uncommon to use 100K worker threads. A number of test-cases fail because they are stingy with memory allocations, and other test-cases run out of cache space. The job of these test-cases is to push varnish into obscure code-paths, using finely tuned sizes of objects, lengths of headers and parameter settings to varnish, and the bigger pointers in Varnish trows that tuning off. These test-failures have nothing to do with CHERI. There was enough margin that we could find magic numbers which work on both 32 bit and 64 bit CPUs, that is with both 4 and 8 byte pointers, but it is doubtful there is enough margin to make them also work with 16 byte pointers, so I will merely list these tests here as part of the accounting:: Workspace sizes ===================== TEST tests/c00108.vtc TEST tests/r01038.vtc TEST tests/r01120.vtc TEST tests/r02219.vtc TEST tests/o00005.vtc Cache sizes ===================== TEST tests/r03502.vtc TEST tests/r01140.vtc TEST tests/r02831.vtc TEST tests/v00064.vtc Things you cannot do under CHERI: Pointers in Pipes --------------------------------------------------- Varnish has a central "waiter" service, whose job it is to monitor file descriptors to idle network connections, and do the right thing if data arrives on them, or if they are, or should be closed after a timeout. For reasons of performance, we have multiple implementations: ``kqueue(2)`` (BSD), ``epoll(2)`` (Linux), ``ports(2)`` (Solaris) and ``poll(2)`` which should work everywhere POSIX has been read. 
We only have the ``poll(2)`` based waiter for portability, one less issue to deal with during bring-up on new platforms, its performance degrades to uselessness with contemporary loads of open network connections. The way they all work is that have a single thread sitting in the relevant system-call, monitoring tens of thousands of file descriptors. Some of those system calls allows other threads to add fds to the list, but ``poll(2)`` does not, so when we start the poll-waiter we create a ``pipe(2)``, and have the waiter-thread listen to that too. When another thread wants to add a file descriptor to the inventory, it uses ``write(2)`` to send a pointer into that pipe. The kernel provide all the locking and buffering for us, wakes up the waiter-thread which reads the pointer, adds the new fd to its inventory and dives back into ``poll(2)``. This is 100% safe, because nobody else can get to a pipe created with ``pipe(2)``, but there is no way CHERI could spot that to make an execption, so reading pointers out of a filedescriptor, cause fully justified core-dumps. If the poll-waiter was actaully relevant, the proper fix would be to let the sending thread stick things on a locked list and just write a nonce-byte into the pipe to the waiter-thread, but that goes at the bottom of the TODO list, and for now I just remove the -Wpoll argument from five tests, which then pass:: -Wpoll ===================== TEST tests/b00009.vtc TEST tests/b00048.vtc TEST tests/b00054.vtc TEST tests/b00059.vtc TEST tests/c00080.vtc But why five tests ? It looks like one to test the poll-waiter and four cases of copy&paste. Never write your own Red-Black Trees ------------------------------------ In general there are few pieces of code I dare not wade into, but there are a LOT of code I dont want to touch, if there is any way to avoid it. Red-Black trees are one of them. In Varnish we stol^H^H^H^H imported both ```` and ```` from FreeBSD, but as a safety measure we stuck a ``V`` prefix on everything in them. Every so often I will run a small shell-script which does the v-thing and compare the result to ``vtree.h`` and ``vqueue.h``, to keep up with FreeBSD. Today that paid off handsomely: Some poor person on the CHERI team had to wade into ``tree.h`` and stick ``__no_subobject_bounds`` directives to pointers to make that monster work under CHERI. I just ran my script and 20 more tests pass:: Red-Black Trees ===================== TEST tests/b00068.vtc TEST tests/c00005.vtc TEST tests/e00003.vtc TEST tests/e00008.vtc TEST tests/e00019.vtc TEST tests/l00002.vtc TEST tests/l00003.vtc TEST tests/l00005.vtc TEST tests/m00053.vtc TEST tests/r01312.vtc TEST tests/r01441.vtc TEST tests/r02451.vtc TEST tests/s00012.vtc TEST tests/u00004.vtc TEST tests/u00010.vtc TEST tests/v00009.vtc TEST tests/v00011.vtc TEST tests/v00017.vtc TEST tests/v00041.vtc TEST tests/v00043.vtc Four failures left… */phk* varnish-7.5.0/doc/sphinx/phk/cheri3.rst000066400000000000000000000044341457605730600177660ustar00rootroot00000000000000.. _phk_cheri_3: How Varnish met CHERI 3/N ========================= Of the four tests which still fail, three use the "file stevedore". Things you cannot do under CHERI: Pointers in shared files ---------------------------------------------------------- In varnish a "stevedore" is responsible for orchestrating storage for the cached objects, and it is an API extension-point. The "malloc stevedore" is the default, it does precisely what you think it does. 
The "file" stevedore, ``mmap(2)``'s a file in the filesystem and partitions that out as needed, pretty much like an implementation of ``malloc(3)`` would do. The "file" stevedore exists mainly because back in 2006 mapping a file with ``MAP_NOCORE``, was the only way to avoid the entire object cache being included in core-dumps. But CHERI will not allow you to put a pointer into a regular file ``mmmap(2)``'ed ``MAP_SHARED``, because that would allow another process, maybe even on a different computer, to ``mmap(2)`` the file ``MAP_SHARED`` later and, by implication, resurrect the pointers. The "persistent" stevedore mentioned in part 1, does the same thing, and does not work under CHERI for the same reason. If instead we map the file ``MAP_PRIVATE``, nobody else will ever be able to see the pointers, and those three test cases pass:: MAP_SHARED pointers ===================== TEST tests/b00005.vtc TEST tests/r02429.vtc TEST tests/s00003.vtc (We cannot do the same for the "persistent" stevedore, because the only reason it exists is precisely to to resurrect those pointers later.) The final and really nasty test-case ------------------------------------ As previously mentioned, when you have 100K threads, you have to be stingy with memory, in particular with the thread stacks. But if you tune things too hard, the threads will step out of their stack and coredumps result. We try to be a little bit helpful about the diagnostics in such cases, and we have a test-case which tries to exercise that. That test-case is a pretty nasty piece of work, and for all intents and purposes, it is just a cheap erzats of what CHERI is supposed to do for us, so I am going to punt on it, and get to the more interesting parts of this project:: SigSegv handler test ===================== TEST tests/c00057.vtc */phk* varnish-7.5.0/doc/sphinx/phk/cheri4.rst000066400000000000000000000130741457605730600177670ustar00rootroot00000000000000.. _phk_cheri_4: How Varnish met CHERI 4/N ========================= So what can CHERI do ? ---------------------- CHERI can restrict pointers, but it does not restrict the memory they point to:: #include #include #include int main() { char buf[20]; char *ptr1 = cheri_perms_and(buf, CHERI_PERM_LOAD); char *ptr2 = buf; strcpy(buf, "Hello World\n"); ptr1[5] = '_'; // Will core dump ptr2[5] = '_'; // Works fine. puts(buf); return (0); } I suspect most programmers will find this counter-intuitive, because normally it is the memory itself which write-protected in which case no pointers can ever write to it. This is where the word "capability" comes from: The pointer is what gives you the "capability" to access the memory, and therefore they can be restricted separately from the memory they provide access to. If you could just create your own "capabilities" out of integers, that would be no big improvement, but you can not: Under CHERI you can only make a new capability from another capability, and the new one can never be more potent than the one it is derived from. In addition to 'read' and 'write' permissions, capabilities contain the start and the length of the piece of memory they allow access to. Under CHERI the printf(3) pattern "%#p" tells the full story:: #include #include #include int main() { char buf[20]; char *ptr1 = cheri_perms_and(buf, CHERI_PERM_LOAD); char *ptr2 = buf; char *ptr3; char *ptr4; strcpy(buf, "Hello World\n"); //ptr1[5] = '_'; // Will core dump ptr2[5] = '_'; // Works fine. 
puts(buf); printf("buf:\t%#p\n", buf); printf("ptr1:\t%#p\n", ptr1); printf("ptr2:\t%#p\n", ptr2); ptr3 = ptr2 + 1; printf("ptr3:\t%#p\n", ptr3); ptr4 = cheri_bounds_set(ptr3, 4); printf("ptr4:\t%#p\n", ptr4); return (0); } And we get:: buf: 0xfffffff7ff68 [rwRW,0xfffffff7ff68-0xfffffff7ff7c] ptr1: 0xfffffff7ff68 [r,0xfffffff7ff68-0xfffffff7ff7c] ptr2: 0xfffffff7ff68 [rwRW,0xfffffff7ff68-0xfffffff7ff7c] ptr3: 0xfffffff7ff69 [rwRW,0xfffffff7ff68-0xfffffff7ff7c] ptr4: 0xfffffff7ff69 [rwRW,0xfffffff7ff69-0xfffffff7ff6d] (Ignore the upper-case 'RW' for now, I'll get back to those later.) Because of the odd things people do with pointers in C, ``ptr3`` keeps the same range as ``ptr2`` and that's a good excuse to address the question I imagine my readers have at this point: What about ``const`` ? ---------------------- The C language is a mess. Bjarne Stroustrup introduced ``const`` in "C-with-classes", which later became C++, and in C++ it does the right thing. The morons on the ISO-C committee did a half-assed import in C89, and therefore it does the wrong thing:: char *strchr(const char *s, int c); You pass a read-only pointer in, and you get a read-write pointer into that string back ?! There were at least three different ways they could have done right: * Add a ``cstrchr`` function, having ``const`` on both sides. * Enable prototypes to explain this, with some gruesome hack:: (const) char *strchr((const) char *s, int c); * Allow multiple prototypes for the same function, varying only in const-ness:: char *strchr(char *s, int c); const char *strchr(const char *s, int c); But instead they just ignored that issue, and several others like it. The result is that we have developed the concept of "const-poisoning", to describe the fact that if you use "const" to any extent in your C sources, you almost always end up needing a macro like:: #define TRUST_ME(ptr) ((void*)(uintptr_t)(ptr)) To remove const-ness where it cannot go. (If you think that is ISO-C's opus magnum, ask yourself why we still cannot specify struct packing and endianess explicitly ? It's hardly like anybody ever have to apart data-structures explicitly specified in hardware or protocol documents, is it ?) Read/Write markup with CHERI ---------------------------- Because ``const`` is such a mess in C, the CHERI compiler does not automatically remove the write-capability from ``const`` arguments to functions, something I suspect (but have not checked) that they can do in C++. Instead we will have to do it ourselves, so I have added two macros to our ```` file:: #define RO(x) cheri_perms_and((x), CHERI_PERM_LOAD) #define ROP(x) cheri_perms_and((x), CHERI_PERM_LOAD|CHERI_PERM_LOAD_CAP) Passing a pointer through the ``RO()`` macro makes it read-only, so we can do stuff like:: @@ -285,7 +286,7 @@ VRT_GetHdr(VRT_CTX, VCL_HEADER hs) […] - return (p); + return (RO(p)); } To explicitly give ``const`` some bite. The difference between ``RO`` and ``ROP`` is where the upper- and lower-case "rw" comes into the picture: Capabilities have two levels of read/write protection: * Can you read or write normal data with this pointer (``CHERI_PERM_LOAD``) * Can you read or write pointers with this pointer (``CHERI_PERM_LOAD_CAP``) Rule of thumb: Pure data: Use only the first, structs with pointers in them, use both. One can also make write-only pointers with CHERI, but there are only few places where they can be gainfully employed, outside strict security in handling of (cryptographic) secrets. 
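To illustrate the difference between the two permission levels, here is a hypothetical example, not from the Varnish sources, written against the same ``cheriintrin.h`` helpers the two macros wrap. The CHERI behaviour it relies on is that loading a capability through a pointer which lacks ``CHERI_PERM_LOAD_CAP`` clears the loaded capability's tag:

.. code-block:: none

    #include <stdio.h>
    #include <cheriintrin.h>    /* assumption: CheriBSD's CHERI helpers */

    struct node {               /* hypothetical struct, not Varnish code */
            int             val;
            struct node     *next;
    };

    int
    main(void)
    {
            struct node n2 = { 42, NULL };
            struct node n1 = { 1, &n2 };

            /* Data-only view, like the RO() macro above */
            const struct node *ro = cheri_perms_and(&n1, CHERI_PERM_LOAD);
            /* Data-and-pointer view, like ROP() */
            const struct node *rop = cheri_perms_and(&n1,
                CHERI_PERM_LOAD | CHERI_PERM_LOAD_CAP);

            printf("%d\n", ro->val);          /* fine: plain data load */
            printf("%d\n", rop->next->val);   /* fine: the tag survives the load */
            printf("%d\n", (int)cheri_tag_get(ro->next)); /* 0: tag was stripped */
            printf("%d\n", ro->next->val);    /* traps: untagged capability */
            return (0);
    }

Neither ``ro`` nor ``rop`` can be used to write anything, but only ``rop`` can be used to follow the pointers inside the structure.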
Right now I'm plunking ``RO()`` and ``ROP()`` into the varnish code, and one by one relearning what atrocity the 37 uses of ``TRUST_ME()`` hide. Still no bugs found. */phk* varnish-7.5.0/doc/sphinx/phk/cheri5.rst000066400000000000000000000135431457605730600177710ustar00rootroot00000000000000.. _phk_cheri_5: How Varnish met CHERI 5/N ========================= Varnish Workspaces ------------------ To process a HTTP request or response, varnish must allocate bits of memory which will only be used for the duration of that processing, and all of it can be released back at the same time. To avoid calling ``malloc(3)`` a lot, which comes with a locking overhead in a heavily multithreaded process, but even more to avoid having to keep track of all these allocations in order to be able to ``free(3)`` them all, varnish has "workspaces": .. code-block:: none struct ws { […] char *s; /* (S)tart of buffer */ char *f; /* (F)ree/front pointer */ char *r; /* (R)eserved length */ char *e; /* (E)nd of buffer */ }; The ``s`` pointer points at the start of a slab of memory, owned exclusively by the current thread and ``e`` points to the end. Initially ``f`` is the same as ``s``, but as allocations are made from the workspace, it moves towards ``e``. The ``r`` pointer is used to make "reservations", we will ignore that for now. Workspaces look easy to create: .. code-block:: none ws->s = space; ws->e = ws->s + len; ws->f = ws->s; ws->r = NULL; … only, given the foot-shooting-abetting nature of the C language, we have bolted on a lot of seat-belts: .. code-block:: none #define WS_ID_SIZE 4 struct ws { unsigned magic; #define WS_MAGIC 0x35fac554 char id[WS_ID_SIZE]; /* identity */ char *s; /* (S)tart of buffer */ char *f; /* (F)ree/front pointer */ char *r; /* (R)eserved length */ char *e; /* (E)nd of buffer */ }; void WS_Init(struct ws *ws, const char *id, void *space, unsigned len) { unsigned l; DSLb(DBG_WORKSPACE, "WS_Init(%s, %p, %p, %u)", id, ws, space, len); assert(space != NULL); assert(PAOK(space)); INIT_OBJ(ws, WS_MAGIC); ws->s = space; l = PRNDDN(len - 1); ws->e = ws->s + l; memset(ws->e, WS_REDZONE_END, len - l); ws->f = ws->s; assert(id[0] & 0x20); // cheesy islower() bstrcpy(ws->id, id); WS_Assert(ws); } Let me walk you through that: The ``DSLb()`` call can be used to trace all operations on the workspace, so we can see what actually goes on. (Hint: Your ``malloc(3)`` may have something similar, look for ``utrace`` in the manual page.) Next we check the provided space pointer is not NULL, and that it is properly aligned, these are both following a varnish style-pattern, to sprinkle asserts liberally, both as code documentation, but also because it allows the compiler to optimize things better. The ``INIT_OBJ() and ``magic`` field is a style-pattern we use throughout varnish: Each structure is tagged with a unique magic, which can be used to ensure that pointers are what we are told, when they get passed through a ``void*``. We set the ``s`` pointer. We calculate a length at least one byte shorter than what we were provided, align it, and point ``e`` at that. We fill that extraspace at and past ``e``, with a "canary" to stochastically detect overruns. It catches most but not all overruns. We set the name of the workspace, ensuring it is not already marked as overflowed. And finally check that the resulting workspace complies with the defined invariants, as captured in the ``WS_Assert()`` function. With CHERI, it looks like this: .. 
code-block:: none void WS_Init(struct ws *ws, const char *id, void *space, unsigned len) { unsigned l; DSLb(DBG_WORKSPACE, "WS_Init(%s, %p, %p, %u)", id, ws, space, len); assert(space != NULL); INIT_OBJ(ws, WS_MAGIC); assert(PAOK(space)); ws->s = cheri_bounds_set(space, len); ws->e = ws->s + len ws->f = ws->s; assert(id[0] & 0x20); // cheesy islower() bstrcpy(ws->id, id); WS_Assert(ws); } All the gunk to implement a canary to detect overruns went away, because with CHERI we can restrict the ``s`` pointer so writing outside the workspace is *by definition* impossible, as long as your pointer is derived from ``s``. Less memory wasted, much stronger check and more readable source-code, what's not to like ? When an allocation is made from the workspace, CHERI makes it possible to restrict the returned pointer to just the allocated space: .. code-block:: none void * WS_Alloc(struct ws *ws, unsigned bytes) { char *r; […] r = ws->f; ws->f += bytes; return(cheri_bounds_set(r, bytes)); } Varnish String Buffers ---------------------- Back in the mists of time, Dag-Erling Smørgrav and I designed a safe string API called ``sbuf`` for the FreeBSD kernel. The basic idea is you set up your buffer, you call functions to stuff text into it, and those functions do all the hard work to ensure you do not overrun the buffer. When the string is complete, you call a function to "finish" the buffer, and if returns a flag which tells you if overrun (or other problems) happened, and then you can get a pointer to the resulting string from another function. Varnish has adopted sbuf's under the name ``vsb``. This should really not surprise anybody: Dag-Erling was also involved in the birth of varnish. It should be obvious that internally ``vsb`` almost always operate on a bigger buffer than the result, so this is another obvious place to have CHERI cut a pointer down to size: .. code-block:: none char * VSB_data(const struct vsb *s) { assert_VSB_integrity(s); assert_VSB_state(s, VSB_FINISHED); return (cheri_bounds_set(s->s_buf, s->s_len + 1)); } Still no bugs though. */phk* varnish-7.5.0/doc/sphinx/phk/cheri6.rst000066400000000000000000000122231457605730600177640ustar00rootroot00000000000000.. _phk_cheri_6: How Varnish met CHERI 6/N ========================= Varnish Socket Addresses ------------------------ Socket addresses are a bit of a mess, in particular because nobody dared shake up all the IPv4 legacy code when IPv6 was introduced. In varnish we encapsulate all that uglyness in a ``struct suckaddr``, so named because it sucks that we have to spend time and code on this. In a case like this, it makes sense to make the internals strictly read-only, to ensure nobody gets sneaky ideas: .. code-block:: none struct suckaddr * VSA_Build(void *d, const void *s, unsigned sal) { struct suckaddr *sua; [… lots of ugly stuff …] return (RO(sua)); } It would then seem logical to use C's ``const`` to signal this fact, but since the current VSA api is currently defined such that users call ``free(3)`` on the suckaddrs when they are done with them, that does not work, because the prototype for ``free(3)`` is: .. code-block:: none void free(void *ptr); So you cannot call it with a ``const`` pointer. All hail the ISO-C standards committee! This brings me to a soft point with CHERI: Allocators. How to free things with CHERI ----------------------------- A very common design-pattern in encapsulating classes look something like this: .. 
code-block:: none struct real_foo { struct foo foo; [some metadata about foo] }; const struct foo * Make_Foo([arguments]) { struct real_foo *rf; rf = calloc(1, sizeof *rf); if (rf == NULL) return (rf); [fill in rf] return (&rf->foo); } void Free_Foo(const struct foo **ptr) { const struct foo *fp; struct real_foo *rfp; assert(ptr != NULL); fp = *ptr; assert(fp != NULL); *ptr = NULL; rfp = (struct real_foo*)((uintptr_t)fp); [clean stuff up] } We pass a ``**ptr`` to ``Free_Foo()``, another varnish style-pattern, so we can NULL out the holding variable in the calling function, to avoid a dangling pointer to the now destroyed object from causing any kind of trouble later. In the calling function this looks like: .. code-block:: none const struct foo *foo_ptr; […] Free_Foo(&foo_ptr); If we use CHERI to make the foo truly ``const`` for the users of the API, we cannot, as above, wash the ``const`` away with a trip through ``uintptr_t`` and then write to the metadata. The CHERI C/C++ manual, a terse academic tome, laconically mention that: *»Therefore, some additional work may be required to derive a pointer to the allocation’s metadata via another global capability, rather than the one that has been passed to free().«* Despite the understatement, I am very much in favour of this, because this is *precisely* why my own `phkmalloc `_ became a big hit twenty years ago: By firmly separating the metadata from the allocated space, several new classes of mistakes using the ``malloc(3)`` API could, and were, detected. But this *is* going to be an embuggerance for CHERI users, because with CHERI getting from one pointer to different one is actual work. The only "proper" solution is to build some kind of datastructure: List, tree, hash, DB2 database, pick any poison you prefer, and search out the metadata pointer using the impotent pointer as key. Given that CHERI pointers are huge, it may be a better idea to embed a numeric index in the object and use that as the key. An important benefit of this »additional work« is that if your free-function gets passed a pointer to something else, you will find out, because it is not in your data-structure. It would be a good idea if CHERI came with a quality implementation of "Find my other pointer", so that nobody is forced to crack The Art of Computer Programming open for this. When the API is "fire-and-forget" like VSA, in the sense that there is no metadata to clean up, we can leave the hard work to the ``malloc(3)`` implementation. Ever since ``phkmalloc`` no relevant implementation of ``malloc(3)`` has dereferenced the freed pointer, in order to find the metadata for the allocation. Despite its non-const C prototype ``free(3)``, will therefore happily handle a ``const`` or even CHERIed read-only pointer. But you *will* have to scrub the ``const`` off with a ``uintptr_t`` to satisfy the C-compiler: .. code-block:: none void VSA_free(const struct suckaddr **vsap) { const struct suckaddr *vsa; AN(vsap); vsa = *vsap; *vsap = NULL; free((char *)(uintptr_t)vsa); } Or in varnish style: .. code-block:: none void VSA_free(const struct suckaddr **vsap) { const struct suckaddr *vsa; TAKE_OBJ_NOTNULL(vsa, vsap, SUCKADDR_MAGIC); free(TRUST_ME(vsa)); } Having been all over this code now, I have decided to constify ``struct suckaddr`` in varnish, even without CHERI, it is sounder that way. It is not bug, but CHERI made it a lot simpler and faster for me to explore the consequences of this change, so I will forfeit a score of "half a bug" to CHERI at this point. 
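To picture the »additional work« in code, here is a toy version of
``Free_Foo()`` which never writes through the pointer it was handed,
but finds the metadata through a separate global table, keyed by a
small numeric id stored in the object -- the "embed a numeric index"
idea from above. This is only a sketch, not how varnish does it, and
all the names are invented:

.. code-block:: none

    #define NFOO 1024

    struct foo {
        unsigned    id;         /* public, read-only access is enough */
        /* ...public fields... */
    };

    struct real_foo {
        struct foo  foo;        /* must be first */
        /* ...private metadata... */
    };

    /* Make_Foo() calloc's a real_foo, picks a free slot and fills in id. */
    static struct real_foo *foo_tab[NFOO];  /* the "other" capability */

    void
    Free_Foo(const struct foo **ptr)
    {
        const struct foo *fp;
        struct real_foo *rfp;

        assert(ptr != NULL);
        fp = *ptr;
        assert(fp != NULL);
        *ptr = NULL;

        /* Find the metadata by id, instead of washing the
         * read-only capability off the pointer. */
        assert(fp->id < NFOO);
        rfp = foo_tab[fp->id];
        assert(rfp != NULL);
        assert(&rfp->foo == fp);
        foo_tab[fp->id] = NULL;
        free(rfp);
    }

The last assert is what turns "a pointer to something else" into a
detected error instead of silent corruption.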
*/phk* varnish-7.5.0/doc/sphinx/phk/cheri7.rst000066400000000000000000000017441457605730600177730ustar00rootroot00000000000000.. _phk_cheri_7: How Varnish met CHERI 7/N ========================= Not much news ------------- I have been occupied with other activities, so there isn't much news about CHERI to report, except that I have still not been able to find any actual bugs in Varnish with it. Having spent almost two decades with the quality knob stuck at "11" that is how it should be, and there's no denying that I am a little bit proud of that. But it also means I do not have a bug killed, classified and on display for the rest of the world to see just how amazing CHERI is, and why it should become standard in all computers: Embedded, handheld, servers, development, test and production. Revisiting obscure corners of Varnish has caused me to commit a few of "spit&polish" changes, including an API which should have returned ``const`` but did not. But I have not given up yet, and I will find a good example of what CHERI can do, sooner or later, but it may not be in the Varnish source code. */phk* varnish-7.5.0/doc/sphinx/phk/dough.rst000066400000000000000000000256611457605730600177240ustar00rootroot00000000000000.. Copyright (c) 2014-2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_dough: ==================================================== Raking in the dough on Free and Open Source Software ==================================================== I'm writing this on the third day after the "Heartbleed" bug in OpenSSL devasted internet security, and while I have been very critical of the OpenSSL source code since I first saw it, I have nothing but admiration for the OpenSSL crew and their effort. In particular considering what they're paid for it. Inspired by an article in `Wall Street Journal`_ which tangentially touches on the lack of funding for OpenSSL development, I have decided to write up my own experiences with funding Open Source Software development in some detail. I've been in the software industry for 30 years now, and I have made a living more or less directly from Open Source Software for the most recent 15 years. Sometimes the money came from helping a customer use Open Source Software, some times I wrote the Open Source Software for their needs and sometimes, as with the `Varnish Moral License`_ I get paid to develop and maintain Open Source Software for the greater common good. FreeBSD community funding ========================= My first crowd-funding of Free and Open Source Software, was in 2004, where I `solicited the FreeBSD community`_ for money, so that I could devote three to six months of my time to the FreeBSD disk-I/O subsystem. At that time I had spent 10 years as one of the central and key FreeBSD developers, so there were no question about my ability or suitability for the task at hand. But in 2004 crowd-funding was not yet "in", and I had to figure out how to do it myself. My parents brought me up to think that finances is a private matter but I concluded that the only way you could ask strangers to throw money at you, would be to run an open book, where they could see what happened to them, so I did open books. My next dilemma was about my rate, again I had always perceived my rate to be a private matter between me and my customers. My rate is about half of what most people expect -- because I wont work for most people: I only work on things I really *care* about. 
One of my worries therefore were that publishing my rate would undercut friends and colleagues in the FreeBSD project who made a living consulting. But again, there were no way around it, so I published my rate but made every attempt to distinguish it from a consulting rate, and I never heard any complaints. And so, having agonized over the exact text and sounded it off on a couple of close friends in the FreeBSD project, I threw the proposal out there -- and wondered what would happen next. I had a perfectly safe fall-back plan, you have to when you have two kids and a mortgage to feed, but I really had no idea what would happen. Worst case, I'd cause the mother of all `bikesheds`_ get thrown out of the FreeBSD project, and be denounced for my "ideological impurity" with respect to Free and Open Source Software. Best case, I expected to get maybe one or two months funded. The FreeBSD community responded overwhelmingly, my company has never sent as many invoices as it did in 2004, and my accountant nearly blew a fuse. And suddenly I found myself in a situation I had never even considered how to handle: How to stop people from sending me money. I had simply set up a PayPal account, (more on that in a bit), and at least at that time, there were no way to prevent people from dropping money into it, no matter how much you wanted to stop them. In the end I managed to yell loud enough and only got overfunded a few percent, and I believe that my attempt to deflect the surplus to the FreeBSD Foundation gave them a little boost that year. So about PayPal: The first thing they did was to shut my account, and demand all kinds of papers to be faxed to them, including a copy of my passport, despite the fact that Danish law was quite clear on that being illegal. Then, as now, their dispute resolution process was less than user-friendly, and in the end it took an appeal to a high-ranking officer in PayPal and quite a bit of time to actually get the money people had donated. I swore to myself that next time, if there ever came a next time, PayPal would not be involved. Besides, I found their fees quite excessive. In total I made EUR27K, and it kept my kids fed and my bank happy for the six months I worked on it. And work I did. I've never had a harsher boss than those six months, and it surprised me how much it stressed me, because I felt like I was working on a stage, with the entire FreeBSD project in audience, wondering if I were going to deliver the goods or not. As a result, the 187 donors certainly got their moneys worth, most of that half year I worked 80 hour weeks, which made me decide not to continue, despite many donors indicating that they were perfectly willing to fund several more months. Varnish community funding ========================= Five years later, having developed Varnish 1.0 for Norways "Verdens Gang" newspaper, I decided to give community funding a go again. Wiser from experience, I structured the `Varnish Moral License`_ to tackle the issues which had caused me grief the first time around: Contact first, then send money, not the other way around, and also a focus on fewer larger sponsors, rather than people sending me EUR10 or USD15 or even, in one case, the EUR1 which happened to linger in his PayPal Account. I ran even more open books this time, on the VML webpages you can see how many hours and a one-line description of what I did in them, for every single day I've been working under the VML since 2010. 
I also decided to be honest with myself and my donors, one hour of work was one hour of work -- nobody would benefit from me dying from stress. In practice it doesn't quite work like that, there are plenty of thinking in the shower, emails and IRC answers at all hours of the day and a lot of "just checking a detail" that happens off the clock, because I like my job, and nothing could stop me anyway. In each of 2010, 2011 and 2013 I worked around 950 hours work on Varnish, funded by the community. In 2012 I only worked 589 hours, because I was building a prototype computer cluster to do adaptive optics real-time calculations for the ESO `Extremely Large Telescope`_ ("ELT") -- There was no way I could say no to that contract :-) In 2014 I actually have hours available do even more Varnish work, and I have done so in the ramp up to the 4.0.0 release, but despite my not so subtle hints, the current outlook is still only for 800 hours to be funded, but I'm crossing my fingers that more sponsors will appear now that V4 is released. (Nudge, nudge, wink, wink, he said knowingly! :-) Why Free and Open Source costs money ==================================== Varnish is about 90.000 lines of code, the VML brings in about EUR90K a year, and that means that Varnish has me working and caring about issues big and small. Not that I am satisfied with our level of effort, we should have much better documentation, our wish-list of features is far too long and we take too long to close tickets. But I'm not going to complain, because the Heartbleed vulnerability revealed that even though OpenSSL is about three to five times larger in terms of code, the OpenSSL Foundation Inc. took in only about EUR700K last year. And most of that EUR700K was for consulting and certification, not for "free-range" development and maintenance of the OpenSSL source code base so badly needs. I really hope that the Heartbleed vulnerability helps bring home the message to other communities, that Free and Open Source Software does not materialize out of empty space, it is written by people. People who love what we do, which is why I'm sitting here, way past midnight on a Friday evening, writing this pamphlet. But software *is* written by people, real people with kids, cars, mortgages, leaky roofs, sick pets, infirm parents and all other kinds of perfectly normal worries of an adult human being. The best way to improve the quality of Free and Open Source Software, is to make it possible for these people to spend time on it. They need time to review submissions carefully, time to write and run test-cases, time to respond and fix to bug-reports, time to code and most of all, time to think about the code. But it would not even be close to morally defensible to ask these people to forego time to play with their kids, so that they instead develop and maintain the software that drives other peoples companies. The right way to go -- the moral way to go -- and by far the most productive way to go, is to pay the developers so they can make the software they love their living. How to fund Free and Open Source Software ========================================= One way is to hire them, with the understanding that they spend some company time on the software. Experience has shown that these people almost invariably have highly desirable brains which employers love to throw at all sorts of interesting problems, which tends to erode the "donated" company time. 
But a lot of Free and Open Source Software has been, and still is developed and maintained this way, with or without written agreements or even knowledge of this being the case. Another way is for software projects to set up foundations to collect money and hire developers. This is a relatively complex thing to do, and it will only be available for larger projects. The Apache Foundation "adopts" smaller projects inside their field of interest, and I believe that works OK, but I'm not sure if it can easily be transplanted to different topics. The final way is to simply throw money a the developers, the way the FreeBSD and Varnish communities have done with me. It is a far more flexible solution with respect to level of engagement, national boundaries etc. etc, but in many ways it demands more from both sides of the deal, in particular with respect to paperwork, taxes and so on. Conclusion ========== I am obviously biased, I derive a large fraction of my relatively modest income from community funding, for which I am the Varnish community deeply grateful. But biased as I may be, I believe that the Varnish community and I has shown that a tiny investment goes a long way in Free and Open Source Software. I hope to see that mutual benefit spread to other communities and projects, not just to OpenSSL and not just because they found a really bad bug the other day, but to any community around any piece of software which does serious work for serious companies. Thanks in advance, Poul-Henning, 2014-04-11 .. _Wall Street Journal: http://online.wsj.com/news/articles/SB10001424052702303873604579491350251315132 .. _Varnish Moral License: http://phk.freebsd.dk/VML .. _solicited the FreeBSD community: https://people.freebsd.org/~phk/funding.html .. _Extremely Large Telescope: http://www.eso.org/public/teles-instr/e-elt/ .. _bikesheds: http://bikeshed.org/ varnish-7.5.0/doc/sphinx/phk/farfaraway.rst000066400000000000000000000063141457605730600207330ustar00rootroot00000000000000.. Copyright (c) 2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_farfaraway: ============= Far, far away ============= I realize I'm showing my age when I admit that Slades 1974 hit `"Far Far Away" `_ was one of the first rock-ballads I truly loved. (In case you have never heard of Slade or the 1970'ies British glam-rock, you may want to protect your innocence and *not* click on that link.) Some years back I got invited to a conference in New Zealand, and that is "far far away" from Denmark. So far away in fact, that I downloaded the entire `Bell Systems Technical Journal `_ to my Kobo eReader in order to have something to do during the 24 hour air-traffic "experience". BSTJ is good reading, for instance you learn that they invented `Agile Programming `_ back in 1983, but failed to come up with a hip name. Anyway, Internet Access in New Zealand is like time-travel back to around Y2K or so, and when one of my time-nuts friends launched a `Kickstarter project `_ it didn't take much before his residential connection folded. As it happens, I am in the process of setting up the new Varnish-Cache.org project server just now, generously sponsored/donated by `RootBSD.com `_, so it was natural for me to offer to help him out. 
I don't need to explain varnishhist to this audience:: | | || || || || || || || || || || || ## ||| ## ||| # ## # ||||| # ##### +-------+-------+-------+-------+-------+-------+-------+-------+------- |1e-6 |1e-5 |1e-4 |1e-3 |1e-2 |1e-1 |1e0 |1e1 |1e2 Most of us who live in civilized places, tend to forget that the InterNet is very unevenly distributed. My ISP enabled IPv6 on the VDSL2+ line to my beach-house today, some people have fiber, but in terms head-count, the majority of the world has really horrible internet connections. In some cases it is the last mile, for instance if you live out at some remote fjord in Norway. In other cases it is a mid-net bottle-neck, in the case of New Zealand a shortage of transoceanic fiber cables [#f1]_ . Caching is not a cure-all, it is far from a miracle cure, even thought it might seem that way sometimes. But as prophylactic for bandwidth troubles, it is second to none. One of the goals of Varnish was that it should be easy to roll out in a crisis situation, start it, repoint your DNS, suffer less, tune it a little bit (usually: ignore cookies) and suffer a lot less. Today was a good sanity-check for me, trying exactly that. All in all it worked out pretty well, as the varnishhist above shows. *phk* .. [#f1] These `BSTJ articles about the first Atlantic phone cable `_ will give you an appreciation of why that is not a trivial problem to solve. varnish-7.5.0/doc/sphinx/phk/fastly_503.png000066400000000000000000001200331457605730600204500ustar00rootroot00000000000000‰PNG  IHDR M^\gAMA± üa=tEXtSoftwareXV version 3.10a-jumboFix+Enh of 20081216 (interim!)°à| IDATxÚìu`TÇÖÀÏÜ»îqw!!F "‚»S¤-Z(-j¯”öÑòêî´¥Ô P\‚HpBB’ îî²Ù¬ïïÕÐ$„–×÷Þ×ûû+Ù½w|Ι9çÌ, Íß‚nZÐÐÐÐÐÐ €†††††V44444´ ¡¡¡¡¡ ­hhhhhh@CCCCC+ZÐÐÐÐÐÐ €†††††V44444ÿ¥ àÝ÷7½nßO½-žÅƒ·Ø@ßÝ;·?Ð%Ýÿµ¾î·ÿé’ßoþú®¹ç[ã¾=ôÿ{ÆÝ÷4ÒÿCý5èCyû¯é54XFZ™öÎþÙ/ï—Ãb‘¦ÇÒÊ{åJ…šdL ˆ ™l›Íb`-[h5ﵟç0ùŒ¿£ü@ +É>qèýŸ.k¸¬~¢´2ǹo®Ÿî@º!@ÐStãÄá}Ç®\ªéeÚùŒ;oÎü‰ñ<†~ô˜ãÿúÔåwÎg]ÉÌÌ.­ièÐqùNa#GOŸ=-Üš{דCá¦G;ïœI9îâµÂŠÊ¶ œƒ¼übâGÇÆDÚ2’–ïÿ±nÜÓQvL>ñß/&!PÔe\Ù½gïµzVÌ‚µ- µGL06ë¬d2yEÊû«?»ª¡4\¾€Ëf3iv"E¯BÙ#ëõö¾âS}õË7ªìê·?nßžR* x.Qê^¹´«Ó>>î¡uŸ<ÈþÛ,¸!×_Ùµó@úmvìô•«–Žp1ä•·Ïî^ñ^²ÄږσZ­ôñ ^ôò'cºhæ˜0;£ÚFÈ0ð`U}Þ‘ßöœHÍÌ)¼]ZÛ®Õ§Or/]¼œváü´%+Œ‰tæ#Œ‡2àÍei¾šôÛÁ—J¥G·Ð‘Îjƒ¾X¨©ÏüíÝ3¿‰%.n"yW]Vóøˆõávÿ=ƒ@ûµ³{·¾ùᾜ‘YZªµ{wíèQŽô.Iòücc±VQ™“|&¥¬MÝçk–Ïèĸ¨1¶^¶.væ9…xÖ^ÃGÄu°Ú ¯:VÐ `9.!,>2$ÈYð7²ð"ÐV|fÏÖ·>ú9·CÆÈÌ/ÔðßY;'ÖQÈà‰|bBœn<}[7ññÍo¾úp„;Ë`šÄ@0]bæoüiÛ7o¿4Ý “Åô¿ÓCJuEYŒËFI©ÿÓ Xda’f…ÅÇùúëóDBT€'Ëh@FB¦I va³yÂC´âá£‚ÄæÇü/uÍŸB¥‘b¬²èW­V¡ÕeZ8<*žÅæ<°þ2šç\¢­ï»·1Œ¸nÃ….žÿ±6#îYP„6—Áæfï6[¤ìF%¬~nSE" þ†ÆŒn]»ÐP™¤Pð{ø<®@låÿü¬x/1×(rAÓÓ–·óÝon#€«—C€“½I²<"y\+•\=öÖ·G[cƒ0oº’V׆½_ù×£ÃyF÷Û9aÃó›6nŒ¶ð74¶×¶´±²Æ’s?¾{¸ÂÇú{ò¬Àdmd/vñkÖ=ÿü:ª<ý¯Gb7ÒÎÑϼæçê`Ïù¯0TêÅ:hÕjVÛç;ŽQIÜíÅA†(µmh‚›«€`ö÷ØÿD6~6ö&û Œw¶µòÀ8`)Zó ]«Æö%)5ÂÔëkÐj)î?ÖfCöϹå`–ÈzÄÄGcŽ‘¤eþPpyW¥dؼ]Ç/u•+µý6ƒÍašUp¯¢íò™ooꔓÃ"ÍÏ:¸$°9·š £¹ûÌÇ×Í_é¥@ñsí(€ß¹œxbÏP€LS b[áPK/«­ÏÊú¹rËš›äRI?Eà»àÉ—Û/?Ùü?$m¡‹V>EQMïv¾’;ví–÷VMöµúï÷^ß{;A'`³HDþ=§+lôʧޠšÞøúBwìâ×Þy|ª¯'ù—tÌÿ,CR÷šÛ!  
+õ—ënã"¼<X,¾Ä34tÄß-T¿úGŸyŒíæ:aÚ:#ÈÅø6@ñ„¶þ& ôµ™Ò=ú`Б1POßC`4  ÔgE°EÖî"ë>ùô5ˆÞ£X¿ûz°çïÕÿ¦齪0@ õ÷Ú=ˆ16¬~†h:ÎCTw]εë7K‹°]ô¤É±þbFÿá”ù¯,?º¸«¼©júíöy‹½Pâàåçãçæå5:!&ØW„^÷`Cz{š*K+-{u`ëYî¼RR¥ÕØ‘\¦…86h4YwMaæéý»<~¥I±ÆÎ?ÂI?½†6Vb[[!@è¤ÕyI?¶Ö––d&ŽN˜8>6ØC`‘l‰KàÂ¥…¿®µüFV^I]MY]g¯Z+±±rò ô ‹ t㘻{vgp„\Öçg^Ï,¬nêQZ9{úEŒŠf…Ì£þ¾U’´µ ?÷Fv~EU£œËtôígǾÏ$¼Ðõ¤Ÿº¨ˆ‹ònmeúëäÍUŽï*,®4¹ê /ÛõK'HÒNÞ³÷dÊ¥K%ݘ óŒÖó­…þª·!ÿúÑ=»2n’'¬¦„x8Ô–Ø{àð©³ç³ º1ø?"ûjóšIÃì ⤾àê¹ó©gRrÚ:ÚTLR£å2ø¢üÔ§ÆÇÇ%Œ‰1ÌÛ†mØ+ P÷Tܾ¶û»ßZÕS>Ã^d{{GØÕgìÛ³÷À‘Ô´ür)pž|ëç×Íð¸þצøbr~[A~cY}³¬¾¹ñÎÍ À³Šš½x颅SF r‡“VÞÙ\eNŠËfq¸ª¨n–kulÓ2!„uê®Òƒ©—J󲮜IN¿Ñ  Ú«2“vìpF 'Çû ¸ßs €ÈÅcø¨v~i5 Xª³>;é§ì¤ƒ£]9Òø‘¡Ã‡ó´ç2žP2b²Ä¼ö6ÍÞ†ü ig’“2kê›{YZyVéN_8—5}Ú¼™³b<„ Ó¹JZuëžßNöZ”ä1ÿÕAvºÆ¬ýû9‘r9»FEñ¢òñ“™¯:˜Ú}WÁù’Ä5k|â %ÖvÖÝÊ8}òxZ;0l¢¦ ÷žG97ÝÊ8vìD]X¸eçÄæ)b[ÈhÜ[[~ãRêÉ‹ -ÙÕRßUš[Žmœý'N™5sÆøø7[›Å:%ë,θ–œrü\N%fÈ{ZTy&5%ùÆôGÍ™ê#‚»%ë_±ÐAO]NzRFÊÍË%Û«aóEumi‡v>rêrY“€aë3váÒUë–Ï ö3Æ`Tq!Ðt4ç¤ß(,¨é’K›;ï§ç´Ê‘}ðıã&Ì™8sÒ„ +¾~òIëËÒöÿr¤±õ®È*ÖðØ +ãTXvTÙQS–vh÷™º¦^¾›—«uÄÃq:0†ÈtÝÈÈÌ,nmí赕×UÞ¹™W]ج±3fÚ„™³§$Z1‘aø(Ú ósNŸ8R×)7O`˜Å÷Š0î&ÔÝMw®g\Ï-é”wwv5•–dÝhh“1}Â"Æ$Μ2gö„QÞ¶< Úiù¥ä —Oî:”têf€´q‹=iúìy³§%„8‰™C˜„} Ú¦ük—ÏœLÊ(jhîå€\J)t§/œ½ž8mîÜÙñážBŽåÑ ûÝ_ ‰–ÔOŸ×çÍèM^nÔG-wT¦ïX.„éÿ’×z¯È×úë9.&g¤m¨ç³û³ó¼÷PTˆ»‹0,x’Ñïå`¹cŒ{®~ðÒT‹ŒŸùää…Sû^Zà*òMóÒžsucLõT–%½2Ç€óâ»Úõ+Kvo ç2À}Ò£_ž/’ZÆScŒq㥽g‘OÌÆÍO§îzue”³„g\̰Æ=³3µ½ÿ:bŒ±N… >ØÅ†7X« 'nÚq«[ª5½×U}úÃ'-žÿü¶s­QûÒÓoÇ 3Åš‘<ñØíeM²>AÌZewþ×á¾6ƒù;×—]ôS;’JÔx('(Œ1V·]~‚ãncÒk¢-¿žÈªhëQiúRWw4gí|mœ„€‘/¾}±cŒUÉOyòÀwá3¿åÔ(ûÆD7ûbA¤­E.OîÌ=wäës£Ýx<¾É"5åÛÃåµÇ_\ЧH’¹nÏUÞü^º{Î<¾[»+Íõ«=¸fåËWŸûì|m·eQÔ]·~{nQ œ÷åy)ÆX^ýÃ?k}!¸¶ ÏzCaÑúêÆ¬-ô³ñÜÒeZŒ±îæñW&{¿êûÂúÅ‹{¡ÉÚ³nrß0Ðuo­èäT]±sñâ_.·éîŽ6/<´4$Ä´ç aÝ»Ÿ­Ÿ3#ØßÃÉÁÁŠoöPIþ3¹¨Ír`Œ±ªýòÖ×'ØŠ¬Û†c³WÄšÄç µÒÈ[n$¿‘èÏ€ˆYÿ:W‰1ÆTç©§F À3î©]gªMmG=¸s÷„ÒhTJ ÈënÛûÞ/íì9b’hÀí(`ЩEÎ^ˆiÖÉmmG>ûÇ)ɦ³ù¿|0+T/éœ:Ó M ±’XK,’*Izÿµ¤æˆGwݬ=óÖ‘áS9…Õ*Î}÷ôÒ÷“ZÀcÉܹ3GYëí¢,¯Ï¿:ÇÛ™5)»>~÷•Ÿ®·ƒ) WÀ±wµÈG›•ôÉ÷gJ‡­9‘W{ô¥xB¿]RD7ètÏQZUuÚ‰4u|°¶ëi¸ðõʵ_ž«ïU-:@*Ë­ÂŒ­ :ŠºTS/×ê,m°$[òTeVÚÅäÏŸ|(ÜCÐçÖ¬­|·=)GeØ©æ›G0­}F®xë£%vý'ÕUžÛ¿eÙìùËÖ|uþV·Ùlb°Ý‚¬­èèÇ+—p¡‹‚ÈÕ/X0Z@ºM~ì©8—Qvð«¯~Øz®Þ´fÁ`m'ዬ,r)Û·ñõTNð¿ÎÜÌÝñîT£˜RtwÊl]g®^¾r±¿¥]¬·£±³ó.7)µžS'N`Œ¼×ÀQ¾Ã}û·™€F¦Èúâµg¶.¢ qéÔ§' H÷øÉáB1@Ñ–yúËOU!ÊY~ãû/ߨüƒ¥bg¯—6®Šà@„FN÷p€.óçÇ7üœ×Õ¦½Çç¯2B‡Üƒ¸V&%ïm¯n—NúXAUCZÒkóÌ|º^¾s«¾¯»¹bÇ›>þè\›ÌÊÝÇÙAk X?Å3ÆÇ°æí8¹a{rCcÑ+ö‰[ûêæh$4™”u£´ÌáË*9a̽ñu1ÎLÀ€z‹JN}°ê…ƒ9J°uòðëóŠ£§. 
ý—‹Á[WÇX?8š¦ìÃ׸œH×?«4½¥çú%©º¨™Ãäóã׊­BÆGÙ…'Oì½’rGËÀ#¯v÷×*Yé…“µ(J8~²‡PUWšúëO¼Êù4ôIï~BˆÒöVÜ>¼ýƒg>¬};òkw³Ðk,­ÎÙûΆclVüÓsh@CCóGŒ&£ÓÊ®oÇ…o}z½cŒUÕ··NâqXÏ©¯SàEQ˜¢,ÿ¥þDR†wÕÉËíØ›Þ<Þ¨6~NaLaŒ±6óÃݧ~»ZßOÖÿ6( ÷©UoWùþ·˜ .Àòï3‹zzMÚöíËL€1ŸfÝi¿ûkåù§‡ðÀuÎì­¹LCCCs0ÕjLݸò9±ò£ ã†óXîAè ÿúc@­à<3Bƒüû‡’Ò)™v£GÙ+íGIŸ4{z„l [ôgóº¿r€EV<±÷œWzR¸VÚÔZõ€¯ âVn®³¹.È °öw_³Ç}|ò;þûºêdz)CCCs¿ n6`"äâ:™Ú‘z1½\i”:b"êÙ'£nÀé:$/îÀ^PŒH@ÿé _þ6ÈÜÁÅ1ºGá°¥ÓÖâêáwY}뎂 O_V’@€dß• aRbý»ìñóÖÓÐÐüB„ˬÇ×QÙÛ7LþÐëßiÐë¡M0½—©Z3=é7",,ÐoØ£ÛÓ05›— IDATs¤!¤ë¾“úö„ñ/{kÒCq¾+žxèå®?qLè†#Y-j„JÏòô·Qãg|r® !ÔviÅü¾:~£!¤_˜W?90Ð7 pÖó¬5,ׂ¢Ÿ—¯›äëåí?zÎÊ +y“CƒKÔ>2Q •û§M|s_iÇÍ“›_êê5öÑ?)„!/þdÓ þ~~!Á¡acbÆÊîÑ!¢-çÝ OùEÅM?î»­íçŸÜô³ëÌ'jE@5%™=.$00$00xɳo]íD†*÷ò§?½w¦JoÀ§(ÊBu ªëfö·ÃÃCÃBCB‡‡½¼ít±(¬£0ÏV[¸}å Ó==ý¦?ôøÞ[`Ða˜Â”¹æ‚üÏŸY2ÆÛË+`Šß•é?D´  ¡¡ùä–-[“ƒéàÐÓœœ|ærÚÍ’ÖÒ<ÖˆD®Á~Ò]–w'õJ{à˜àáÑqcœTÇRÔ”„ä4\ýì±ßdd¥3†Ç…yŒ1"ÚßS¤«®MO}~ÎäPgk&µ-JŠÏž4z\êÆsÞøéh_bøèH/¥–Þ¼üÄú+¢ñÞcFGX•ßN¿}ö¶UâÌáv¤²âà¯eLqÐ˜ØØá>‚žæ_β¦9 A`óºk¡)ãã_ÔsFÄÛôµT)+oÖ€ÐÚÛ—ßvåç•ϼsüâ¹ÌôÂÚÙ7ĉÙ\–“œí5s¡z3ŸÞœTx[4nõÒ¸ _Vã{Zb§ “XcVO[f][zS㫦Æxˆ ·ÖuYö±Sb=šN.\uM8nò„sÃ99õõ¹—©¹s#øšœí7½ûÉU–kXôÜP;¥«½þÁNùäÕ!‘ndmփϼSºò¡ññýå»®4²¥’Äñ>,¬4ãÓ}ÇËI›@_¿èkEEÅùäóU1 ÓÅ€PíÅCùJ©$tE¼ ¡£4e»÷VØð‚ãF9)µÍ“¸c§¹ð á{ìzhhhþv êÐË ïÌUÿpFîG9}ë×÷ÎQוžÖ.˜ä+äõ9§w}ùýEÉc/·V³øV?ئ±.O‹ 3€E¾3_]ê¯ßgPã=ýlkv'̈på€{âCk:gw+ÍÍ ¹09,& 7O}p¯8³e©?€"nÄ)Ç«t* @gÆ—ï~P»$22$À–P–¤çœ:š~aãÁ‡\œx÷%ߊÒiåZD‹’ë좖­œd;Ãí•M ¾Ì­‡Én ‘òܯ›?¡¸àôg_f@îÂöÖ¯c¦®A~G·mÓ­Np'€á=ÆÚ%3®“u,—´ýÙÛŸÆúÊn§v²ÏµåÉÁ‰£U)u$°X\&ÙOq4½„Ð/pîßlN½6Nˆf…ìxiã‹_}¶q·£ ƒƒÌV-ª2ûã·>êœ7)&0ÖËŽl¿œõЯ™K^<1wAKš¡+„ÆXÓÞ©‹œ"V>±ò‘å'¿Ÿõ⟮úGÃÏ[¦Å± nU—-iþÝæ4L²=ežž’Àñó&|Û]Ÿ¸*ü¡G @o-"˜.ž צ~|²Ñ{º“—°·\Ö%U0¢˜a3v~Ó1ò–B¥a4””f>óìöC^:sGÎxfä ÐvCñ¹Ú1÷ØÎ¢ó ­$ÉJœò&в¸+”S@ŒIzM¦·‚ ‚À:•ÄN‰kOmUò¢3×|±yÕD[ºå ‘‹Ï&Ò‚ÍÄ϶4Õ×—Ú{ãڱϒ2¬!K2õºÌ*Ü%`ç“/n.Þ,\Q$£l G¡ |ñÞí9·r³ŽœþøÈÑ­Qî#ä³ß¼^Uú¯Wù;o®€5ÏïçñÙi/TcGJO+ÛNÈ%M•Q×O"°O÷ÈÒ»ß(*Þ¥rŽÁ†‡ Ôª¢óÛ;´ÎûÞf'étˆ$Irr¿·°€EvšûØ…qëéc¥§G8Ú‹0ØùÎx­pâü¤u‘ ¶L{xT\¦ìc|ŸÝxëÃø»œ ]¥25€\©ÖQeÀäÛ{…½¼çí £ó¢®Ë{DR^”>#€î^­N‡€@€Õ]ím7€Ó%×b>éD`°!úÕmï/œïËÿÅ ÷‰é!H6ôR:­ŒÇÝ ¡8j5$Ôr•TޱZÑ#“H0Eé òO½ôô–$ŠõÞÖ³õ+“öÛÍÂZƒÙÞÞʆ“È~wWÆK¯çÕ•vc'¾·@-ý?öÎ;¾Šâëÿgvïݽ%·ä¦÷N =BゥŠ&Ò”"‚A;ˆ¨ Ò‘Þ{ )$Hï½ÞÞvç÷ǽ)@(~ýþžçõzœ÷?’»»³3³³ç3sΙµäÒ]§~ÖyÊO»ÖïœÖïϽËËíߢ5š¬â6òŽ@CÖÁÝkŸ›ÿçó_§'¼êÅ_tj}KnU§7ÙÿÐb¥ÈZ´Íþ/–¢º/úsëøÞÎŽOé@xZ00§îj´uÍ›†)¯‘³VHi%`„e7èNægµåaiV!ˆˆZyòͤ%‡®ˆ¥:¶{ø:žäàéã92‹j8þ‘B1Ê+(©×<é þC™okïgäh<¤N¸j@ùÈEòda¤ WV…¬<óÓÇêÛwðv¶ïAÔ`É;ƒ?¸©·_Rp;-;9Ý*SÈ$B"©\$´O@1°Bq—7úýòû[ÓëÕ‚0¥Ý„QBº¬ÑT£pöókÑcçôŽVWÙ–†;UÉ|›!‡ØÉ›†µþ›m›StM"“z`_¥VË=0ÁE4¸w>’?¾fçÖmE°'!îìû]~54HbûøE¬T„•JD¬ÈA$Ä: ˆ2oÖj²0€°<áð°‚Lévãêä0tÒ›û|÷vY•€qñ±¯™8Îz¸^Ã* &ûæí„›ŒƒBB…ˆMjÔg´B+•P(¯Î¤·0œ¹X­;£Õ h ãÊ©’‚‰B @ѼÕp¸¬V'Ð'øs×—‡ÍãMI@.eèŒ) õ€XIüÔMý·¾¹ê‡¿òÀžj”ö×Þr‹ÅJ¦ÿáQž¸MÌÊY‹w|±v÷®ÛfÍšÛ9.®GÏ9Ÿüš^¦1ócÞ˜zeó+À-:>¾s—¸¨°…ïoM¹–¸sóóÎ*Pø}tâz¹Ñ¶cŒ9 ÎÙ8È­çKœ¼eßk¶nîÀDÏX·+‹Ãê{I?Ì×9²c|çØè“¦®?œ®Ç[tß, Œs‰éÑ©SÜ~ƒ~8\ 7p­·ÝÚ+ߘcçҞûyuîÞµK|||×Naæ¯<|=WcÂË } ÆiêGG/ûsý2'VàÖcîü´mÏîé®í{ ÛsÁ²×#ÀÁÍÏ}éŸûr´özk²ŽfÊwfªí·6WUŸ'œCÃÃãgΙ}6]¬¨2!„ª½ôËœî}÷Z}êN=½@ €1BˆË¾wïΉDB!Œáîö¹óFDŒycÝŸ÷à¼G ¿ÁÓ¦§6« ©{Î\8¸ûh^zrMÀ3㔈EØÊS*YeÊ·/Ÿ*1e8½Ò“ó“þï4ƒvî7\äpæTVÆ©k%œ¶R}R¦Dû+ä„F€Àšy¤R[y!·H–@1½_]Ùß52È‘<ûÍA‘.ßÞy!‚Ëôc80T•¥¾~± ®v»3¥s;•ˆyÊÌÙúV}uû…d\ùÞPû!kuú‘“ÇŠsÝOû̘ÚÞ‰ Sð?.Û§ŸÕÿúyù‡_çBÔà…_oørlXó9õ×Îm¿/¥îRæŒI&JÐæ4µaûþÙ¼¶¥ÛòD8ì¥Àa£ÂÏ7)s“vïPÔœKÈ~µwP°¯aÀp€ï%úôíq¢nî:N¾Á—¿;?¸ÙÕòh­l7z°Âªð¸IóV™+v^ȱ€²rî3zéSmí(gñcË´ÕÙþ».篟¿þî8 ê=€µ­1d¡Ã/¦^—öˆö{J—=æ=À<|æ3\K þ¥`3}Xzó›wB‡¨ onüjlX‹Má‘c÷þKúôÛ>¢X@óÀ\[TVQmb„´»G{Ñú̼’zVH3¬Äѧ½ŸR@!cýýܲµ†¡F¬ô ThJ2 +i¹Ÿ›“”SßÍ«1ÏK½Bý]„ÖêÒ‚‚+@€Ü‚£Üä Cµødšk\UKËÝ»öðr:½cO=@ù¹ÃiãÃ{øŠF€0Ç[îœnìùjX¾«þ&`Ž34‚‰–¶ÙWÎÚPž{· Q#ïöqžN,ÓbýMué™E:­«|E5EZÜÊ/`®*/¯/wþÖRc…«ŠÚ+‡±®*+5§Ðȼ|}ܽäBŒ±º&ýÖOg­ß[[©¼y%•‹½Û«2ë܇ô}eÔ‰T¬T´4ÓTU\XZ\­Ç4#v ‰ R"„XÔÕe÷ŠêŬóØ3?§'€Ê't]6Z›®×¦ÿfvÃ2ðÜgG¯W4—­××ýT*tˆ>láÚ5¯Úê3w~±ã¼Ý_Ïl×TÇåÛîÖjm×érÒö,´„"ð›øâ¶T5Æ›-úó߸)ÜlWPPzMÙ•¾k¶k 
ëÖ)(RiŸãÂòÆ Z`Ñ5düæÛ"‰X<@´7Öe]ÊA(¨%ƒ¦j£æÔ??ÍÈ­uÙ7jŸ[2øå±>TKêËÓcõI—Ò5u‘Êêì3÷îÓ5éÇÊ$µ³fÍâÄ>èNʾÔ(SËÇ¿A5Ë•®*éöù_ß9ž}I7bˆoÙÅ•ê±ÏϖϺ« õžƒ†zˆL5&+¶WHü_š·XW½ñÇf¯”Í­’v\Ó«½+´<¨×ÔUož{iÎÒuëªÇ†h\Ç kßÁSúØo™=4Å€L©{ ƒ§Æ;7=ÿg™¾ÙVØÕdí½zhÇì˲¡aF„´¹cµ›³xiÝ’ö};çú…áNÊnŸw qÁà Y–‰ª/<™tì××ÏZÜ\ÆŽh§G9z檵­óöÁ^w·²¡$/xp84Ƽ¬Ý„1í&Œ™Á™oï÷Ö{W[‡¬_1¦«§Ðv7ÔÔQÅ—r2³wç fq%öÿ @32‰ŸH"5Yrô •^ø|øá¸o&¿ñY¬öþþ#û›»ÐqÈÑÙ³õ-ì÷=4~Ëóý¬\îÞ—#—ýõù¨ºµá£OñõÙû+C;CEÊúg*,ÂÞª‡ç~H—s'eßí¾œ˜î^µküD¨.gFŠÀA2öýScJlA7[öUAè˯ßZÜ ô ÇVôœñnùÈþò”Ä»¾†è#W·t–ÖÞ?õʼn±‰7ôìçV’rdë¿jëÀtuÕk+.ܾp£fø`syëçÞ?î[óõúe½þláŒ>?ÖJ¯ù8Í©sôܯ)kª®|3}Sx—·ÝºÆ)Ÿa¸ YÉí —L¡Ak¾¹4qæ—Þa§®Ü.Pr÷À¹³×ó” dôÜqêˆüÁ´T΂óÏßPô|ÕÅÃÓ6¸mª Ï8²öÛ³…Åw®%^½] ýíâû]0@潿(J–ë8uÁàqŸÏ]i1œ™4hîZ§vò9º¨€qŒì3{Ñë¿,ÿrÉôœå?¯ÔÁó±Sÿgr¢R¡EkNßùéLJ(ªi „€³5ZŠØ2 ЂF0þ郫,WWZé1æÝ>Û:8Ôfú[ÍÀŸsNúmµä¾CÇI^‘³Îœ¨Úӿˤ ß ”«æÅ Q+óš’‘ª¸,ágÏ8D€ 1ïÀ–Ã;Ò ¸“½ joíZ(§swŒY§-¾žî5áW± Q4B´š]͔ߌ‘Óuùy‹û -eɉíƒ9-@‰ýÛÝ0ÿòØÏ¾™·rôú£†³­:=1kÏ>¯:™”8}à'½ƒdðxëüx€’›k?Ø_åV|üʥ镟íß:Ee»/üÆÄÙ å«?þ}õíÔ·ïó"[æÓ yUÚ7kþJBWkÒ'G—ë®ü9ÛCüOM.]VvCu™ÇÀ(¶eFh{h&òå£'ºîˆ:qa¤k°gogÔºën&ƒª“[Çbÿÿn àÉCÐÖ›¥•WëÊT²QC¿mvûsîüÁ]Ü\úÌž¿1Óí^JZ…•­ÁÊ™­&Cè )ßå¬&(;0~Ö‚¥ÊW9Ih:Œ=¹oÍøÎµ†6lŽ\¬+ Ï-ÜtZSÎŒÙsñ»Ÿ–õ BùÙÑCoáà÷ÛÍc_yµ‰røWwò¶-îå,t¼äëÅǶDh³Ïü4³ý„uC£ÔÁA ú„ÔûU߸ÿøi˜íñÁ/—Îÿ¾n¨3˜­<hì?ÿá ãô ¥kCS€ ëÈOé·Ö}¾ü•(/Ÿá_ûA,Ú—_Ö °ÔjµF­ÑhÔjSÛd£¦nÔ’×?Y÷s/Ö®CD)§£‚G~ñÁ'ï/ï€Í!‡&“ˆlë/Þj®¼y†éÝÞÑÃÙ>¸1ÆHˆ†p-%¥ôò¡%Ãd…•ßïÄ·X,&(4±"JÀJlˆÅ¶Œ-ž·blµY>­õæñ³½^Ù|fÃò žBF$}àë€)„èÖ•Rˆû÷Ÿ´þnÚ­¤«ë¸[:sÙgk’ž¼ðuóèÆÔ‰ô hÚñl›CÃÝ­ë–Žüùâ¥w†yK]ЦxžGg™ÕUw¦¦ñÑmñ¸•z<<-ÆK:c#ØÞt‰”çð×W¯OLLNJIJNHNž72H!°M…ýt· ~à?;V£²n¤÷‹u’º0M«[Ô4õíÙt÷u‰Œ2zªT1ñ¯Ðçè‚3¹ö¬PLëÊ •®~ŽBÈ>›~§%2Ò  ´›µb¶õångnçëÚ÷ÕĤK))ÉIII‰?lZÙÙì¹^Bªi{¦= È|æ·ã÷‹­ÍŠ)¤¨‡O-E4Àû¾µà9VQ¾eɶ|{îcÛ V€(„LSFuꮣ$g’²›j¬¿Ø Ðš<ëÚ3‹N©FÇF mçЪK› €H¦ŒŠî×Û=PD3­»ÙXÓ3Q¹uù]Ê­Ää䔤[·³Om}eÀKþqËw^¿ùÉ@1Åß8~ÊÝRçÛ„ŽÒvS>þ|F`)®¨ÌÊ,¾qr`X˜1 ë÷ÊÂçŽöº™e¶/|šH=úê!¨IϹôÅíØ cü嬠©mô“튢Ûr±·´×Å'¨[XßÎJw°Ü8z"PØÿ*‘VFIDATáB§ðç?Y? 
MÖ‹K¤ÃžÿîÄŲ&ƒÉÒVŠCÏ©´³¿]–aæœóº+ik¥G7¯4ÜTb[3?÷‘zêú •Î6Èn·[ âׯ}‘rÑ»«Rt¼zçûó?›s±e=7$ë.'1ŽhÉá&«s.\>6÷Ý©ÁoîÓqÎ/,÷ #‰ÔU­R¹¨Ô*6ÈKýéŽì†Àn)<X¡kù¬¾©¤¬jýÌn"2L6«C°[ì­»†1" ìúpª¹ßæ­§ˆ^µ§ÜÑÙÂÔDV‹Õ7Òÿ•%:ƒÞ`0uz“ýàWkb´†ÉFd±Ù¼m!­îm`;'"êЭo߀Ú%+Žå$^óë«VjåDTzêè®/×$䯞Ù_é%e‚HLÿÅœ"ÆD""b"qûq_f&ä7%¬ÚðÃüçù÷9ç'ggÝ×bÏÀnÂI]}µÅL7*ÝúÄìªÂN«Çï˜l”HĘ 8T¾\*]ugk¨u(Îïo#"Án>\gÐYÎ.®]ÿG$¥^¯œŽÏªÍÛ·ì iÑþÙãîÿ1MJV»-¿ÁbrÜi—ßú‚_ï1¢òðm½&Î>gîJJ" Ž|ƒáDcÎððgãDÄB££‡W²iöÄ]<ØÃÅSNDd×éò*ÊOi|C|\‰Š’N—”¨´ZF$U»»Ê)]5.ŠÖ˜R­Uˈ\Ôî. ""¥Jí"c µ;ɹº*d:“£ÊLDFGæ–Ó<\<5‘ÏZa‚çÙ"#ùÆ>9ëâ7/}·åôõ;©Í'Ž\æDnj•T.‘+ÕRj»KŽq"ò3Ä}¬íßë–$4öl'WHˆˆ¼¢F¹Œ‘Ï™°,£¹u1ÕE…©éµÖ]-#Rj´.rÂP¸ÙÝGÝœsWPUcâ¶-ªgæÌÔM+eD$15e§ß±1.ûâÁÝ×jê¯$ggž?QY“S¯Ï\°xC^ÑåôK¢€ŽÃBeÅé{>›µpgnÍ•ô$upoIÝ…íË?Z¹·,íêµLuÏn•_/\»aß¹W ôbm‡ðu»ï¼´gÓ†#ûöíÏ´Ù»Œœ<<¤nóóó×§e$efp?7»kÑžâúìŒ$÷Îý„âƒ?,_¸þçâË9…®Ý'DÆôæÙ«OŸØ¶ùH⩽Û÷î?d Ñ=4À]v}Ò .8jÎíXqrëŽÍ{îúùP¡MñØŒÑC}<Ûww+>xìø‰Í;Ïž;¸}ßÁÙšÑîYß}ºl厣…¹Õ&¯ÐAa¾Rb¬e»äž¢Â†¢¸…éƒÿgVd{%)¼Ú»x‹³¿µöhFÜþ}?íØ”W%içï““ºòl/¯ÎÊH”‡Dt l§–àˆ€?­÷ˆ%R—.vÛˆ—g„»ËÅŒ1™ÊÅE᪫¬k¶~‘£††…xº4+¤‚k°oqµfP¹(¤OxX—p—ºŠÜô3•~ƒ£ý•ÖÎ}ºóÆšŠ ‡ÏÀP­Ý:l(;s¹)0Ø£gp¨¯§w˜~>d7ÊJJjʬ^Q=FÌš9ÀQxöX®:*Â_ãÖ©G{?U¹0®Â'¦»»Ðܱ×ð°Nþ*iKZqF¤tÞ\ÚTZ’m³8l."ÕϾî-óð·é%¼¹Òb»xˆíîV§0«CºGÕ7Â×#)1Ñî°YMkÔˆ‰1ÃÂõ%éWâëýcúK¡QÃ4Í%•µ±od°wtŠ×µC@TX³½¹± Èb·Y¸H=õåo±ô·ûˆHÔ†æ¬â²ú¦ŠfÁ¿Wô#o “Iµ½"l eeUV«•®Ac§RKªÑ´èÕ9ØKq_éõv“Hä¬óï?=ÚOI¬eÖ ‘"0°cçö¹i™V»ÙjÒë»GŽˆÐSWv1¥ÜgH´¿ØÚ1j`—öAÜTµ¿×Ëœ Œ‰nø/gDÎæìqNø > p.j½Œˆn.ïõÚ81ÑŸZDçK¼>})¿ÃIBÛÝj¿»ûx[ ì¶9’Úf+½ÇY˜~Û=(àïÓ[ @€ @€ @€@€ @€ @€€ßó¿=¹V€n¹FtIMEå 3;1 ùNIEND®B`‚varnish-7.5.0/doc/sphinx/phk/firstdesign.rst000066400000000000000000001513431457605730600211340ustar00rootroot00000000000000.. Copyright (c) 2016-2018 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_firstdesign: =========================== The first design of Varnish =========================== I have been working on a "bit-storage" facility for datamuseum.dk, and as part of my "eat your own dog-food" policy, I converting my own personal archive (41 DVD's worth) as a test. Along the way I passed through 2006 and found some files from the birth of Varnish 10 years ago. The first Varnish Design notes ------------------------------ This file are notes taken during a meeting in Oslo on 2nd feb 2006, which in essence consisted of Anders Berg cursing Squid for a couple of hours. (Originally the meeting was scheduled for jan 24th but a SAS pilot strike put an end to that.) To be honest I knew very little about web-traffic, my own homepage was written in HTML in vi(1), so I had a bit of catching up to do on that front, but the overall job was pretty simple: A program to move bytes ... fast. It is quite interesting to see how many things we got right and where we kept thinking in the old frame of reference (ie: Squid):: Notes on Varnish ---------------- Collected 2006-02-02 to 2006-02-.. Poul-Henning Kamp Philosophy ---------- It is not enough to deliver a technically superior piece of software, if it is not possible for people to deploy it usefully in a sensible way and timely fashion. Deployment scenarios -------------------- There are two fundamental usage scenarios for Varnish: when the first machine is brought up to offload a struggling backend and when a subsequent machine is brought online to help handle the load. The first (layer of) Varnish ---------------------------- Somebodys webserver is struggling and they decide to try Varnish. Often this will be a skunkworks operation with some random PC purloined from wherever it wasn't being used and the Varnish "HOWTO" in one hand. 
If they do it in an orderly fashion before things reach panic proportions, a sensible model is to setup the Varnish box, test it out from your own browser, see that it answers correctly. Test it some more and then add the IP# to the DNS records so that it takes 50% of the load off the backend. If it happens as firefighting at 3AM the backend will be moved to another IP, the Varnish box given the main IP and things had better work real well, really fast. In both cases, it would be ideal if all that is necessary to tell Varnish are two pieces of information: Storage location Alternatively we can offer an "auto" setting that makes Varnish discover what is available and use what it find. DNS or IP# of backend. IP# is useful when the DNS settings are not quite certain or when split DNS horizon setups are used. Ideally this can be done on the commandline so that there is no configuration file to edit to get going, just varnish -d /home/varnish -s backend.example.dom and you're off running. A text, curses or HTML based based facility to give some instant feedback and stats is necessary. If circumstances are not conductive to strucured approach, it should be possible to repeat this process and set up N independent Varnish boxes and get some sort of relief without having to read any further documentation. The subsequent (layers of) Varnish ---------------------------------- This is what happens once everybody has caught their breath, and where we start to talk about Varnish clusters. We can assume that at this point, the already installed Varnish machines have been configured more precisely and that people have studied Varnish configuration to some level of detail. When Varnish machines are put in a cluster, the administrator should be able to consider the cluster as a unit and not have to think and interact with the individual nodes. Some sort of central management node or facility must exist and it would be preferable if this was not a physical but a logical entity so that it can follow the admin to the beach. Ideally it would give basic functionality in any browser, even mobile phones. The focus here is scaleability, we want to avoid per-machine configuration if at all possible. Ideally, preconfigured hardware can be plugged into power and net, find an address with DHCP, contact preconfigured management node, get a configuration and start working. But we also need to think about how we avoid a site of Varnish machines from acting like a stampeeding horde when the power or connectivity is brought back after a disruption. Some sort of slow starting ("warm-up" ?) must be implemented to prevent them from hitting all the backend with the full force. An important aspect of cluster operations is giving a statistically meaninful judgement of the cluster size, in particular answering the question "would adding another machine help ?" precisely. We should have a facility that allows the administrator to type in a REGEXP/URL and have all the nodes answer with a checksum, age and expiry timer for any documents they have which match. The results should be grouped by URL and checksum. Technical concepts ------------------ We want the central Varnish process to be that, just one process, and we want to keep it small and efficient at all cost. Code that will not be used for the central functionality should not be part of the central process. For instance code to parse, validate and interpret the (possibly) complex configuration file should be a separate program. 
Depending on the situation, the Varnish process can either invoke this program via a pipe or receive the ready to use data structures via a network connection. Exported data from the Varnish process should be made as cheap as possible, likely shared memory. That will allow us to deploy separate processes for log-grabbing, statistics monitoring and similar "off-duty" tasks and let the central process get on with the important job. Backend interaction ------------------- We need a way to tune the backend interaction further than what the HTTP protocol offers out of the box. We can assume that all documents we get from the backend has an expiry timer, if not we will set a default timer (configurable of course). But we need further policy than that. Amongst the questions we have to ask are: How long time after the expiry can we serve a cached copy of this document while we have reason to believe the backend can supply us with an update ? How long time after the expiry can we serve a cached copy of this document if the backend does not reply or is unreachable. If we cannot serve this document out of cache and the backend cannot inform us, what do we serve instead (404 ? A default document of some sort ?) Should we just not serve this page at all if we are in a bandwidth crush (DoS/stampede) situation ? It may also make sense to have a "emergency detector" which triggers when the backend is overloaded and offer a scaling factor for all timeouts for when in such an emergency state. Something like "If the average response time of the backend rises above 10 seconds, multiply all expiry timers by two". It probably also makes sense to have a bandwidth/request traffic shaper for backend traffic to prevent any one Varnish machine from pummeling the backend in case of attacks or misconfigured expiry headers. Startup/consistency ------------------- We need to decide what to do about the cache when the Varnish process starts. There may be a difference between it starting first time after the machine booted and when it is subsequently (re)started. By far the easiest thing to do is to disregard the cache, that saves a lot of code for locating and validating the contents, but this carries a penalty in backend or cluster fetches whenever a node comes up. Lets call this the "transient cache model" The alternative is to allow persistently cached contents to be used according to configured criteria: Can expired contents be served if we can't contact the backend ? (dangerous...) Can unexpired contents be served if we can't contact the backend ? If so, how much past the expiry ? It is a very good question how big a fraction of the persistent cache would be usable after typical downtimes: After a Varnish process restart: Nearly all. After a power-failure ? Probably at least half, but probably not the half that contains the most busy pages. And we need to take into consideration if validating the format and contents of the cache might take more resources and time than getting the content from the backend. Off the top of my head, I would prefer the transient model any day because of the simplicity and lack of potential consistency problems, but if the load on the back end is intolerable this may not be practically feasible. The best way to decide is to carefully analyze a number of cold starts and cache content replacement traces. 
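    One way to keep the two models apart in code is to make "what
    happens at startup" a property of the storage module itself.
    A sketch with invented names (none of this is the actual varnish
    storage API):

        enum warmup {
            WARMUP_EMPTY,       /* transient: start from scratch     */
            WARMUP_REVALIDATE,  /* persistent: walk, validate, adopt */
        };

        struct storage_method {
            const char      *ident;
            enum warmup     warmup;
            int             (*open)(struct storage_method *sm);
            void            *(*alloc)(struct storage_method *sm, size_t size);
            void            (*free)(struct storage_method *sm, void *ptr);
        };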
The choice we make does affect the storage management part of Varnish, but I see that is being modular in any instance, so it may merely be that some storage modules come up clean on any start while other will come up with existing objects cached. Clustering ---------- I'm somewhat torn on clustering for traffic purposes. For admin and management: Yes, certainly, but starting to pass objects from one machine in a cluster to another is likely to be just be a waste of time and code. Today one can trivially fit 1TB into a 1U machine so the partitioning argument for cache clusters doesn't sound particularly urgent to me. If all machines in the cluster have sufficient cache capacity, the other remaining argument is backend offloading, that would likely be better mitigated by implementing a 1:10 style two-layer cluster with the second level node possibly having twice the storage of the front row nodes. The coordination necessary for keeping track of, or discovering in real-time, who has a given object can easily turn into a traffic and cpu load nightmare. And from a performance point of view, it only reduces quality: First we send out a discovery multicast, then we wait some amount of time to see if a response arrives only then should we start to ask the backend for the object. With a two-level cluster we can ask the layer-two node right away and if it doesn't have the object it can ask the back-end right away, no timeout is involved in that. Finally Consider the impact on a cluster of a "must get" object like an IMG tag with a misspelled URL. Every hit on the front page results in one get of the wrong URL. One machine in the cluster ask everybody else in the cluster "do you have this URL" every time somebody gets the frontpage. If we implement a negative feedback protocol ("No I don't"), then each hit on the wrong URL will result in N+1 packets (assuming multicast). If we use a silent negative protocol the result is less severe for the machine that got the request, but still everybody wakes up to to find out that no, we didn't have that URL. Negative caching can mitigate this to some extent. Privacy ------- Configuration data and instructions passed forth and back should be encrypted and signed if so configured. Using PGP keys is a very tempting and simple solution which would pave the way for administrators typing a short ascii encoded pgp signed message into a SMS from their Bahamas beach vacation... Implementation ideas -------------------- The simplest storage method mmap(2)'s a disk or file and puts objects into the virtual memory on page aligned boundaries, using a small struct for metadata. Data is not persistant across reboots. Object free is incredibly cheap. Object allocation should reuse recently freed space if at all possible. "First free hole" is probably a good allocation strategy. Sendfile can be used if filebacked. If nothing else disks can be used by making a 1-file filesystem on them. More complex storage methods are object per file and object in database models. They are relatively trival and well understood. May offer persistence. Read-Only storage methods may make sense for getting hold of static emergency contents from CD-ROM etc. Treat each disk arm as a separate storage unit and keep track of service time (if possible) to decide storage scheduling. Avoid regular expressions at runtime. If config file contains regexps, compile them into executable code and dlopen() it into the Varnish process. Use versioning and refcounts to do memory management on such segments. 
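    The "compile the configuration and dlopen() it" idea fits in a few
    lines. A sketch of the loading side only, with invented names, an
    invented symbol name and no error handling to speak of:

        #include <dlfcn.h>

        struct config {
            unsigned    refcount;
            const char  *ident;
            int         (*match_url)(const char *url);
        };

        struct config *
        load_config(const char *sofile)
        {
            void *dlh;
            struct config *cfg;

            dlh = dlopen(sofile, RTLD_NOW | RTLD_LOCAL);
            if (dlh == NULL)
                return (NULL);
            cfg = dlsym(dlh, "config_entry");
            if (cfg != NULL)
                cfg->refcount++;    /* unload only when it drops to 0 */
            return (cfg);
        }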
Avoid committing transmit buffer space until we have bandwidth estimate for client. One possible way: Send HTTP header and time ACKs getting back, then calculate transmit buffer size and send object. This makes DoS attacks more harmless and mitigates traffic stampedes. Kill all TCP connections after N seconds, nobody waits an hour for a web-page to load. Abuse mitigation interface to firewall/traffic shaping: Allow the central node to put an IP/Net into traffic shaping or take it out of traffic shaping firewall rules. Monitor/interface process (not main Varnish process) calls script to config firewalling. "Warm-up" instructions can take a number of forms and we don't know what is the most efficient or most usable. Here are some ideas: Start at these URL's then... ... follow all links down to N levels. ... follow all links that match REGEXP no deeper than N levels down. ... follow N random links no deeper than M levels down. ... load N objects by following random links no deeper than M levels down. But... ... never follow any links that match REGEXP ... never pick up objects larger than N bytes ... never pick up objects older than T seconds It makes a lot of sense to not actually implement this in the main Varnish process, but rather supply a template perl or python script that primes the cache by requesting the objects through Varnish. (That would require us to listen separately on 127.0.0.1 so the perlscript can get in touch with Varnish while in warm-up.) One interesting but quite likely overengineered option in the cluster case is if the central monitor tracks a fraction of the requests through the logs of the running machines in the cluster, spots the hot objects and tell the warming up varnish what objects to get and from where. In the cluster configuration, it is probably best to run the cluster interaction in a separate process rather than the main Varnish process. From Varnish to cluster info would go through the shared memory, but we don't want to implement locking in the shmem so some sort of back-channel (UNIX domain or UDP socket ?) is necessary. If we have such an "supervisor" process, it could also be tasked with restarting the varnish process if vitals signs fail: A time stamp in the shmem or kill -0 $pid. It may even make sense to run the "supervisor" process in stand alone mode as well, there it can offer a HTML based interface to the Varnish process (via shmem). For cluster use the user would probably just pass an extra argument when he starts up Varnish: varnish -c $cluster_args $other_args vs varnish $other_args and a "varnish" shell script will Do The Right Thing. Shared memory ------------- The shared memory layout needs to be thought about somewhat. On one hand we want it to be stable enough to allow people to write programs or scripts that inspect it, on the other hand doing it entirely in ascii is both slow and prone to race conditions. The various different data types in the shared memory can either be put into one single segment(= 1 file) or into individual segments (= multiple files). I don't think the number of small data types to be big enough to make the latter impractical. Storing the "big overview" data in shmem in ASCII or HTML would allow one to point cat(1) or a browser directly at the mmaped file with no interpretation necessary, a big plus in my book. Similarly, if we don't update them too often, statistics could be stored in shared memory in perl/awk friendly ascii format. 
But the logfile will have to be (one or more) FIFO logs, probably at least three in fact: Good requests, Bad requests, and exception messages. If we decide to make logentries fixed length, we could make them ascii so that a simple "sort -n /tmp/shmem.log" would put them in order after a leading numeric timestamp, but it is probably better to provide a utility to cat/tail-f the log and keep the log in a bytestring FIFO format. Overruns should be marked in the output. *END* The second Varnish Design notes ------------------------------- You will notice above that there is no mention of VCL, it took a couple of weeks for that particular lightning to strike. Interestingly I know exactly where the lightning came from, and what it hit. The timeframe was around GCC 4.0.0 which was not their best release, and I had for some time been pondering a pre-processor for the C language to make up for the ISO-C stagnation and braindamage. I've read most of the "classic" compiler books, and probably read more compilers many people (Still to go: `GIER Algol 4 `_) but to be honest I found them far too theoretical and not very helpful from a *practical* compiler construction point of view. But there is one compiler-book which takes an entirely different take: `Hanson and Fraser's LCC book. `_ which throws LEX and YACC under the truck and concentrates on compiling. Taking their low-down approach to parsing, and emitting C code, there really isn't much compiler left to write, and I had done several interesting hacks towards my 'K' language. The lightning rod was all the ideas Anders had for how Varnish should be able to manipulate the traffic passing through, how to decide what to cache, how long time to cache it, where to cache it and ... it sounded like a lot of very detailed code which had to be incredibly configurable. Soon those two inspiratons collided:: Notes on Varnish ---------------- Collected 2006-02-24 to 2006-02-.. Poul-Henning Kamp ----------------------------------------------------------------------- Policy Configuration Policy is configured in a simple unidirectional (no loops, no goto) programming language which is compiled into 'C' and from there binary modules which are dlopen'ed by the main Varnish process. The dl object contains one exported symbol, a pointer to a structure which contains a reference count, a number of function pointers, a couple of string variables with identifying information. All access into the config is protected by the reference counts. Multiple policy configurations can be loaded at the same time but only one is the "active configuration". Loading, switching and unloading of policy configurations happen via the managment process. A global config sequence number is incremented on each switch and policy modified object attributes (ttl, cache/nocache) are all qualified by the config-sequence under which they were calculated and invalid if a different policy is now in effect. ----------------------------------------------------------------------- Configuration Language XXX: include lines. 
BNF: program: function | program function function: "sub" function_name compound_statement compound_statement: "{" statements "}" statements: /* empty */ | statement | statements statement statement: if_statement | call_statement | "finish" | assignment_statement | action_statement if_statement: "if" condition compound_statement elif_parts else_part elif_parts: /* empty */ | elif_part | elif_parts elif_part elif_part: "elseif" condition compound_statement | "elsif" condition compound_statement | "else if" condition compound_statement else_part: /* empty */ | "else" compound_statement call_statement: "call" function_name assign_statement: field "=" value field: object field "." variable action_statement: action arguments arguments: /* empty */ arguments | argument ----------------------------------------------------------------------- Sample request policy program sub request_policy { if (client.ip in 10.0.0.0/8) { no-cache finish } if (req.url.host ~ "cnn.no$") { rewrite s/cnn.no$/vg.no/ } if (req.url.path ~ "cgi-bin") { no-cache } if (req.useragent ~ "spider") { no-new-cache } if (backend.response_time > 0.8s) { set req.ttlfactor = 1.5 } elseif (backend.response_time > 1.5s) { set req.ttlfactor = 2.0 } elseif (backend.response_time > 2.5s) { set req.ttlfactor = 5.0 } /* * the program contains no references to * maxage, s-maxage and expires, so the * default handling (RFC2616) applies */ } ----------------------------------------------------------------------- Sample fetch policy program sub backends { set backend.vg.ip = {...} set backend.ads.ip = {...} set backend.chat.ip = {...} set backend.chat.timeout = 10s set backend.chat.bandwidth = 2000 MB/s set backend.other.ip = {...} } sub vg_backend { set backend.ip = {10.0.0.1-5} set backend.timeout = 4s set backend.bandwidth = 2000Mb/s } sub fetch_policy { if (req.url.host ~ "/vg.no$/") { set req.backend = vg call vg_backend } else { /* XXX: specify 404 page url ? */ error 404 } if (backend.response_time > 2.0s) { if (req.url.path ~ "/landbrugspriser/") { error 504 } } fetch if (backend.down) { if (obj.exist) { set obj.ttl += 10m finish } switch_config ohhshit } if (obj.result == 404) { error 300 "http://www.vg.no" } if (obj.result != 200) { finish } if (obj.size > 256k) { no-cache } else if (obj.size > 32k && obj.ttl < 2m) { obj.tll = 5m } if (backend.response_time > 2.0s) { set ttl *= 2.0 } } sub prefetch_policy { if (obj.usage < 10 && obj.ttl < 5m) { fetch } } ----------------------------------------------------------------------- Purging When a purge request comes in, the regexp is tagged with the next generation number and added to the tail of the list of purge regexps. Before a sender transmits an object, it is checked against any purge-regexps which have higher generation number than the object and if it matches the request is sent to a fetcher and the object purged. If there were purge regexps with higher generation to match, but they didn't match, the object is tagged with the current generation number and moved to the tail of the list. Otherwise, the object does not change generation number and is not moved on the generation list. New Objects are tagged with the current generation number and put at the tail of the list. Objects are removed from the generation list when deleted. When a purge object has a lower generation number than the first object on the generation list, the purge object has been completed and will be removed. A log entry is written with number of compares and number of hits. 
----------------------------------------------------------------------- Random notes swap backed storage slowstart by config-flipping start-config has peer servers as backend once hitrate goes above limit, management process flips config to 'real' config. stat-object always URL, not regexp management + varnish process in one binary, comms via pipe Change from config with long expiry to short expiry, how does the ttl drop ? (config sequence number invalidates all calculated/modified attributes.) Mgt process holds copy of acceptor socket -> Restart without lost client requests. BW limit per client IP: create shortlived object (<4sec) to hold status. Enforce limits by delaying responses. ----------------------------------------------------------------------- Source structure libvarnish library with interface facilities, for instance functions to open&read shmem log varnish varnish sources in three classes ----------------------------------------------------------------------- protocol cluster/mgt/varnish object_query url -> TTL, size, checksum {purge,invalidate} regexp object_status url -> object metadata load_config filename switch_config configname list_configs unload_config freeze # stop the clock, freezes the object store thaw suspend # stop acceptor accepting new requests resume stop # forced stop (exits) varnish process start restart = "stop;start" ping $utc_time -> pong $utc_time # cluster only config_contents filename $inline -> compilation messages stats [-mr] -> $data zero stats help ----------------------------------------------------------------------- CLI (local) import protocol from above telnet localhost someport authentication: password $secret secret stored in {/usr/local}/etc/varnish.secret (400 root:wheel) ----------------------------------------------------------------------- HTML (local) php/cgi-bin thttpd ? (alternatively direct from C-code.) Everything the CLI can do + stats popen("rrdtool"); log view ----------------------------------------------------------------------- CLI (cluster) import protocol from above, prefix machine/all compound stats accept / deny machine (?) curses if you set termtype ----------------------------------------------------------------------- HTML (cluster) ditto ditto http://clustercontrol/purge?regexp=fslkdjfslkfdj POST with list of regexp authentication ? (IP access list) ----------------------------------------------------------------------- Mail (cluster) pgp signed emails with CLI commands ----------------------------------------------------------------------- connection varnish -> cluster controller Encryption SSL Authentication (?) IP number checks. varnish -c clusterid -C mycluster_ctrl.vg.no ----------------------------------------------------------------------- Filer /usr/local/sbin/varnish contains mgt + varnish process. if -C argument, open SSL to cluster controller. Arguments: -p portnumber -c clusterid@cluster_controller -f config_file -m memory_limit -s kind[,storage-options] -l logfile,logsize -b backend ip... -d debug -u uid -a CLI_port KILL SIGTERM -> suspend, stop /usr/local/sbin/varnish_cluster Cluster controller. Use syslog Arguments: -f config file -d debug -u uid (?) /usr/local/sbin/varnish_logger Logfile processor -i shmemfile -e regexp -o "/var/log/varnish.%Y%m%d.traffic" -e regexp2 -n "/var/log/varnish.%Y%m%d.exception" (NCSA format) -e regexp3 -s syslog_level,syslogfacility -r host:port send via TCP, prefix hostname SIGHUP: reopen all files. /usr/local/bin/varnish_cli Command line tool. 
/usr/local/share/varnish/etc/varnish.conf default request + fetch + backend scripts /usr/local/share/varnish/etc/rfc2616.conf RFC2616 compliant handling function /usr/local/etc/varnish.conf (optional) request + fetch + backend scripts /usr/local/share/varnish/etc/varnish.startup default startup sequence /usr/local/etc/varnish.startup (optional) startup sequence /usr/local/etc/varnish_cluster.conf XXX {/usr/local}/etc/varnish.secret CLI password file. ----------------------------------------------------------------------- varnish.startup load config /foo/bar startup_conf switch config startup_conf !mypreloadscript load config /foo/real real_conf switch config real_conf resume The third Varnish Design notes ------------------------------- A couple of days later the ideas had gel'ed:: Notes on Varnish ---------------- Collected 2006-02-26 to 2006-03-.. Poul-Henning Kamp ----------------------------------------------------------------------- Objects available to functions in VCL client # The client req # The request obj # The object from which we satisfy it backend # The chosen supplier ----------------------------------------------------------------------- Configuration Language XXX: declare IP lists ? BNF: program: part | program part part: "sub" function_name compound | "backend" backend_name compound compound: "{" statements "}" statements: /* empty */ | statement | statements statement statement: conditional | functioncall | "set" field value | field "=" value | "no_cache" | "finish" | "no_new_cache" | call function_name | fetch | error status_code | error status_code string(message) | switch_config config_id | rewrite field string(match) string(replace) conditional: "if" condition compound elif_parts else_part elif_parts: /* empty */ | elif_part | elif_parts elif_part elif_part: "elseif" condition compound | "elsif" condition compound | "else if" condition compound else_part: /* empty */ | "else" compound functioncal: "call" function_name field: object field "." variable condition: '(' cond_or ')' cond_or: cond_and | cond_or '||' cond_and cond_and: cond_part | cond_and '&&' cond_part cond_part: '!' 
cond_part2 | cond_part2 cond_part2: condition | field(int) '<' number | field(int) '<=' number | field(int) '>' number | field(int) '>=' number | field(int) '=' number | field(int) '!=' number | field(IP) ~ ip_list | field(string) ~ string(regexp) ----------------------------------------------------------------------- Sample request policy program sub request_policy { if (client.ip in 10.0.0.0/8) { no-cache finish } if (req.url.host ~ "cnn.no$") { rewrite s/cnn.no$/vg.no/ } if (req.url.path ~ "cgi-bin") { no-cache } if (req.useragent ~ "spider") { no-new-cache } if (backend.response_time > 0.8s) { set req.ttlfactor = 1.5 } elseif (backend.response_time > 1.5s) { set req.ttlfactor = 2.0 } elseif (backend.response_time > 2.5s) { set req.ttlfactor = 5.0 } /* * the program contains no references to * maxage, s-maxage and expires, so the * default handling (RFC2616) applies */ } ----------------------------------------------------------------------- Sample fetch policy program sub backends { set backend.vg.ip = {...} set backend.ads.ip = {...} set backend.chat.ip = {...} set backend.chat.timeout = 10s set backend.chat.bandwidth = 2000 MB/s set backend.other.ip = {...} } sub vg_backend { set backend.ip = {10.0.0.1-5} set backend.timeout = 4s set backend.bandwidth = 2000Mb/s } sub fetch_policy { if (req.url.host ~ "/vg.no$/") { set req.backend = vg call vg_backend } else { /* XXX: specify 404 page url ? */ error 404 } if (backend.response_time > 2.0s) { if (req.url.path ~ "/landbrugspriser/") { error 504 } } fetch if (backend.down) { if (obj.exist) { set obj.ttl += 10m finish } switch_config ohhshit } if (obj.result == 404) { error 300 "http://www.vg.no" } if (obj.result != 200) { finish } if (obj.size > 256k) { no-cache } else if (obj.size > 32k && obj.ttl < 2m) { obj.tll = 5m } if (backend.response_time > 2.0s) { set ttl *= 2.0 } } sub prefetch_policy { if (obj.usage < 10 && obj.ttl < 5m) { fetch } } ----------------------------------------------------------------------- Purging When a purge request comes in, the regexp is tagged with the next generation number and added to the tail of the list of purge regexps. Before a sender transmits an object, it is checked against any purge-regexps which have higher generation number than the object and if it matches the request is sent to a fetcher and the object purged. If there were purge regexps with higher generation to match, but they didn't match, the object is tagged with the current generation number and moved to the tail of the list. Otherwise, the object does not change generation number and is not moved on the generation list. New Objects are tagged with the current generation number and put at the tail of the list. Objects are removed from the generation list when deleted. When a purge object has a lower generation number than the first object on the generation list, the purge object has been completed and will be removed. A log entry is written with number of compares and number of hits. ----------------------------------------------------------------------- Random notes swap backed storage slowstart by config-flipping start-config has peer servers as backend once hitrate goes above limit, management process flips config to 'real' config. stat-object always URL, not regexp management + varnish process in one binary, comms via pipe Change from config with long expiry to short expiry, how does the ttl drop ? (config sequence number invalidates all calculated/modified attributes.) 
Mgt process holds copy of acceptor socket -> Restart without lost client requests. BW limit per client IP: create shortlived object (<4sec) to hold status. Enforce limits by delaying responses. ----------------------------------------------------------------------- Source structure libvarnish library with interface facilities, for instance functions to open&read shmem log varnish varnish sources in three classes ----------------------------------------------------------------------- protocol cluster/mgt/varnish object_query url -> TTL, size, checksum {purge,invalidate} regexp object_status url -> object metadata load_config filename switch_config configname list_configs unload_config freeze # stop the clock, freezes the object store thaw suspend # stop acceptor accepting new requests resume stop # forced stop (exits) varnish process start restart = "stop;start" ping $utc_time -> pong $utc_time # cluster only config_contents filename $inline -> compilation messages stats [-mr] -> $data zero stats help ----------------------------------------------------------------------- CLI (local) import protocol from above telnet localhost someport authentication: password $secret secret stored in {/usr/local}/etc/varnish.secret (400 root:wheel) ----------------------------------------------------------------------- HTML (local) php/cgi-bin thttpd ? (alternatively direct from C-code.) Everything the CLI can do + stats popen("rrdtool"); log view ----------------------------------------------------------------------- CLI (cluster) import protocol from above, prefix machine/all compound stats accept / deny machine (?) curses if you set termtype ----------------------------------------------------------------------- HTML (cluster) ditto ditto http://clustercontrol/purge?regexp=fslkdjfslkfdj POST with list of regexp authentication ? (IP access list) ----------------------------------------------------------------------- Mail (cluster) pgp signed emails with CLI commands ----------------------------------------------------------------------- connection varnish -> cluster controller Encryption SSL Authentication (?) IP number checks. varnish -c clusterid -C mycluster_ctrl.vg.no ----------------------------------------------------------------------- Filer /usr/local/sbin/varnish contains mgt + varnish process. if -C argument, open SSL to cluster controller. Arguments: -p portnumber -c clusterid@cluster_controller -f config_file -m memory_limit -s kind[,storage-options] -l logfile,logsize -b backend ip... -d debug -u uid -a CLI_port KILL SIGTERM -> suspend, stop /usr/local/sbin/varnish_cluster Cluster controller. Use syslog Arguments: -f config file -d debug -u uid (?) /usr/local/sbin/varnish_logger Logfile processor -i shmemfile -e regexp -o "/var/log/varnish.%Y%m%d.traffic" -e regexp2 -n "/var/log/varnish.%Y%m%d.exception" (NCSA format) -e regexp3 -s syslog_level,syslogfacility -r host:port send via TCP, prefix hostname SIGHUP: reopen all files. /usr/local/bin/varnish_cli Command line tool. /usr/local/share/varnish/etc/varnish.conf default request + fetch + backend scripts /usr/local/share/varnish/etc/rfc2616.conf RFC2616 compliant handling function /usr/local/etc/varnish.conf (optional) request + fetch + backend scripts /usr/local/share/varnish/etc/varnish.startup default startup sequence /usr/local/etc/varnish.startup (optional) startup sequence /usr/local/etc/varnish_cluster.conf XXX {/usr/local}/etc/varnish.secret CLI password file. 
----------------------------------------------------------------------- varnish.startup load config /foo/bar startup_conf switch config startup_conf !mypreloadscript load config /foo/real real_conf switch config real_conf resume Fourth Varnish Design Note -------------------------- You'd think we'd be cookin' with gas now, and indeed we were, but now all the difficult details started to raise ugly questions, and it has never stopped since:: Questions: * Which "Host:" do we put in the request to the backend ? The one we got from the client ? The ip/dns-name of the backend ? Configurable in VCL backend declaration ? (test with www.ing.dk) * Construction of headers for queries to backend ? How much do we take from client headers, how much do we make up ? Some sites discriminate contents based on User-Agent header. (test with www.krak.dk/www.rs-components.dk) Cookies * Mapping of headers from backend reply to the reply to client Which fields come from the backend ? Which fields are made up on the spot ? (expiry time ?) (Static header fields can be prepended to contents in storage) * 3xx replies from the backend Does varnish follow a redirection or do we pass it to the client ? Do we cache 3xx replies ? The first live traffic ---------------------- The final bit of history I want to share is the IRC log from the first time tried to put real live traffic through Varnish. The language is interscandinavian, but I think non-vikings can get still get the drift:: **** BEGIN LOGGING AT Thu Jul 6 12:36:48 2006 Jul 06 12:36:48 * Now talking on #varnish Jul 06 12:36:48 * EvilDES gives channel operator status to andersb Jul 06 12:36:53 * EvilDES gives channel operator status to phk Jul 06 12:36:53 hehe Jul 06 12:36:56 sÃ¥nn Jul 06 12:37:00 Jepps, er dere klare? Jul 06 12:37:08 Jeg har varnish oppe og køre med leonora som backend. Jul 06 12:37:12 * EvilDES has changed the topic to: Live testing in progress! Jul 06 12:37:16 * EvilDES sets mode +t #varnish Jul 06 12:37:19 Da setter jeg pÃ¥ trafikk Jul 06 12:37:36 andersb: kan du starte med bare at give us trafiik i 10 sekunder eller sÃ¥ ? Jul 06 12:37:49 * edward (edward@f95.linpro.no) has joined #varnish Jul 06 12:38:32 hmm, først mÃ¥ jeg fÃ¥ trafikk dit. Jul 06 12:38:55 Har noe kommet? Eller har det blitt suprt etter /systemmeldinger/h.html som er helsefilen? Jul 06 12:39:10 s/suprt/spurt/ Jul 06 12:39:41 ser ingenting Jul 06 12:39:45 jeg har ikke set noget endnu... Jul 06 12:40:35 den prøver pÃ¥ port 80 Jul 06 12:41:24 okay.. Jul 06 12:41:31 kan vi ikke bare kjøre varnishd pÃ¥ port 80? Jul 06 12:41:46 ok, jeg ville bare helst ikke køre som root. Jul 06 12:41:47 Prøver den noe annet nÃ¥? Jul 06 12:41:59 nej stadig 80. Jul 06 12:42:03 Jeg starter varnishd som root Jul 06 12:42:08 nei, vent Jul 06 12:42:08 Topp Jul 06 12:42:11 okay Jul 06 12:42:15 kom det 8080 nÃ¥? Jul 06 12:42:18 sysctl reserved_port Jul 06 12:43:04 okay? FÃ¥r dere 8080 trafikk nÃ¥? Jul 06 12:43:08 sysctl net.inet.ip.portrange.reservedhigh=79 Jul 06 12:44:41 Okay, avventer om vi skal kjøre 8080 eller 80. Jul 06 12:45:56 starter den pÃ¥ port 80 som root Jul 06 12:46:01 den kører nu Jul 06 12:46:01 Okay, vi har funnet ut at mÃ¥ten jeg satte 8080 pÃ¥ i lastbalanserern var feil. Jul 06 12:46:07 okay pÃ¥ 80? Jul 06 12:46:12 vi kører Jul 06 12:46:14 ja, masse trafikk Jul 06 12:46:29 omtrent 100 req/sec Jul 06 12:46:37 and we're dead... Jul 06 12:46:40 stopp! Jul 06 12:46:58 den stopper automatisk. Jul 06 12:47:04 Vi kan bare kjøre det slik. 
Jul 06 12:47:06 tok noen sekunder Jul 06 12:47:20 Npr den begynner svar pÃ¥ 80 sÃ¥ vil lastbalanserern finne den fort og sende trafikk. Jul 06 12:47:41 ca 1500 connection requests kom inn før den sluttet Ã¥ sende oss trafikk Jul 06 12:47:49 altsÃ¥, 1500 etter at varnishd døde Jul 06 12:48:02 tror det er en god nok mÃ¥te Ã¥ gjøre det pÃ¥. SÃ¥ slipper vi Ã¥ configge hele tiden. Jul 06 12:48:07 greit Jul 06 12:48:11 det er dine lesere :) Jul 06 12:48:19 ja :) Jul 06 12:48:35 kan sette ned retry raten litt. Jul 06 12:49:15 >> AS3408-2 VG Nett - Real server 21 # retry Jul 06 12:49:16 Current number of failure retries: 4 Jul 06 12:49:16 Enter new number of failure retries [1-63]: 1 Jul 06 12:49:33 ^^ before de decalres dead Jul 06 12:49:41 he declairs :) Jul 06 12:51:45 I've saved the core, lets try again for another shot. Jul 06 12:52:09 sure :) Jul 06 12:52:34 When you start port 80 loadbalancer will send 8 req's for h.html then start gicing traficc Jul 06 12:53:00 ^^ Microsoft keyboard Jul 06 12:53:09 ok, jeg starter Jul 06 12:53:10 you need to get a Linux keyboard Jul 06 12:53:16 Yeah :) Jul 06 12:53:18 woo! Jul 06 12:53:21 boom. Jul 06 12:53:25 oops Jul 06 12:53:35 18 connections, 77 requests Jul 06 12:53:40 that didn't last long... Jul 06 12:54:41 longer than me :) *rude joke Jul 06 12:55:04 bewm Jul 06 12:55:22 can I follow a log? Jul 06 12:55:39 with: lt-varnishlog ? Jul 06 12:56:27 samme fejl Jul 06 12:56:38 andersb: jeg gemmer logfilerne Jul 06 12:57:00 bewm Jul 06 12:57:13 phk: Jepp, men for min egen del for Ã¥ se nÃ¥r dere skrur pÃ¥ etc. Da lærer jeg loadbalancer ting. Jul 06 12:57:51 ok, samme fejl igen. Jul 06 12:58:02 jeg foreslÃ¥r vi holder en lille pause mens jeg debugger. Jul 06 12:58:09 sure. Jul 06 12:58:16 andersb: cd ~varnish/varnish/trunk/varnish-cache/bin/varnishlog Jul 06 12:58:21 andersb: ./varnishlog -o Jul 06 12:58:37 andersb: cd ~varnish/varnish/trunk/varnish-cache/bin/varnishstat Jul 06 12:58:43 andersb: ./varnishstat -c Jul 06 12:58:44 eller ./varnislog -r _vlog3 -o | less Jul 06 13:00:02 Jeg gÃ¥r meg en kort tur. Straks tilbake. Jul 06 13:01:27 vi kører igen Jul 06 13:02:31 2k requests Jul 06 13:02:57 3k Jul 06 13:03:39 5k Jul 06 13:03:55 ser veldig bra ut Jul 06 13:04:06 hit rate > 93% Jul 06 13:04:13 95% Jul 06 13:05:14 800 objects Jul 06 13:05:32 load 0.28 Jul 06 13:05:37 0.22 Jul 06 13:05:52 CPU 98.9% idle :) Jul 06 13:06:12 4-5 Mbit/sec Jul 06 13:06:42 nice :) Jul 06 13:06:49 vi kjører til det krasjer? Jul 06 13:06:58 jep Jul 06 13:07:05 du mÃ¥ gerne Ã¥bne lidt mere Jul 06 13:07:20 okay Jul 06 13:07:41 3 ganger mer... Jul 06 13:08:04 si fra nÃ¥r dere vil ha mer. Jul 06 13:08:24 vi gir den lige et par minutter pÃ¥ det her niveau Jul 06 13:09:17 bewm Jul 06 13:09:31 3351 0.00 Client connections accepted Jul 06 13:09:31 23159 0.00 Client requests received Jul 06 13:09:31 21505 0.00 Cache hits Jul 06 13:09:31 1652 0.00 Cache misses Jul 06 13:10:17 kører igen Jul 06 13:10:19 here we go again Jul 06 13:11:06 20mbit/sec Jul 06 13:11:09 100 req/sec Jul 06 13:12:30 nice :) Jul 06 13:12:46 det er gode tall, og jeg skal fortelle dere hvorfor senere Jul 06 13:12:49 steady 6-8 mbit/sec Jul 06 13:12:52 okay. Jul 06 13:13:00 ca 50 req/sec Jul 06 13:13:04 skal vi øke? Jul 06 13:13:14 ja, giv den det dobbelte hvis du kan Jul 06 13:13:19 vi startet med 1 -> 3 -> ? Jul 06 13:13:22 6 Jul 06 13:13:23 6 Jul 06 13:13:34 done Jul 06 13:13:42 den hopper opp graceful. 
Jul 06 13:13:54 boom Jul 06 13:14:06 :) Jul 06 13:14:11 men ingen ytelsesproblemer Jul 06 13:14:19 bare bugs i requestparsering Jul 06 13:14:20 kører igen Jul 06 13:14:26 bewm Jul 06 13:14:31 ok, vi pauser lige... Jul 06 13:17:40 jeg har et problem med "pass" requests, det skal jeg lige have fundet inden vi gÃ¥r videre. Jul 06 13:18:51 Sure. Jul 06 13:28:50 ok, vi prøver igen Jul 06 13:29:09 bewm Jul 06 13:29:35 more debugging Jul 06 13:33:56 OK, found the/one pass-mode bug Jul 06 13:33:58 trying again Jul 06 13:35:23 150 req/s 24mbit/s, still alive Jul 06 13:37:02 andersb: tror du du klarer Ã¥ komme deg hit til foredraget, eller er du helt ødelagt? Jul 06 13:37:06 andersb: giv den 50% mere trafik Jul 06 13:39:46 mer trafikk Jul 06 13:39:56 EvilDES: Nei :(( Men Stein fra VG Nett kommer. Jul 06 13:41:25 btw, har du noen data om hva load balanceren synes om varnish? Jul 06 13:41:50 jeg regner med at den følger med litt pÃ¥ hvor god jobb vi gjør Jul 06 13:43:10 Jeg genstarter lige med flere workerthreads... Jul 06 13:43:43 jeg tror 20 workerthreads var for lidt nu... Jul 06 13:43:47 nu har den 220 Jul 06 13:44:40 2976 107.89 Client connections accepted Jul 06 13:44:41 10748 409.57 Client requests received Jul 06 13:44:41 9915 389.59 Cache hits Jul 06 13:45:13 det var altsÃ¥ 400 i sekundet :) Jul 06 13:45:45 og ingen indlysende fejl pÃ¥ www.vg.no siden :-) Jul 06 13:45:54 bewm Jul 06 13:47:16 andersb: hvor stor andel av trafikken hadde vi nÃ¥? Jul 06 13:48:06 altsÃ¥, vekt i load balanceren i forhold til totalen Jul 06 13:49:20 ok, kun 120 threads sÃ¥... Jul 06 13:50:48 9 Jul 06 13:52:45 andersb: 9 -> 12 ? Jul 06 13:52:48 andersb: 9 til varnish, men hvor mye er den totale vekten? Jul 06 13:52:58 har vi 1%? 5%? 10%? Jul 06 13:54:37 nÃ¥ passerte vi nettopp 50000 requests uten kræsj Jul 06 13:55:36 maskinen laver ingenting... 98.5% idle Jul 06 13:56:21 12 maskiner med weight 20 Jul 06 13:56:26 1 med weight 40 Jul 06 13:56:29 varnish med 9 Jul 06 13:57:01 si fra nÃ¥r dere vil ha mer trafikk. Jul 06 13:57:02 9/289 = 3.1% Jul 06 13:57:12 andersb: giv den 15 Jul 06 13:57:44 gjort Jul 06 13:59:43 dette er morro. Jeg mÃ¥ si det. Jul 06 14:00:27 20-23 Mbit/sec steady, 200 req/sec, 92.9% idle Jul 06 14:00:30 bewm Jul 06 14:00:46 OK Jul 06 14:00:57 jeg tror vi kan slÃ¥ fast at ytelsen er som den skal være Jul 06 14:01:33 det er en del bugs, men de bør det gÃ¥ an Ã¥ fikse. Jul 06 14:01:34 Jepp :) Det sÃ¥ pent ut... Jul 06 14:01:53 jeg tror ikke vi har set skyggen af hvad Varnish kan yde endnu... Jul 06 14:01:53 andersb: hvordan ligger vi an i forhold til Squid? Jul 06 14:01:58 pent :) Jul 06 14:02:13 Jeg har ikke fÃ¥tt SNMP opp pÃ¥ dene boksen, jeg burde grafe det... Jul 06 14:02:23 snmp kjører pÃ¥ c21 Jul 06 14:02:33 tror agero satte det opp Jul 06 14:02:36 aagero Jul 06 14:02:38 Ja, men jeg har ikke mal i cacti for bsnmpd Jul 06 14:02:43 ah, ok Jul 06 14:03:03 men den burde støtte standard v2 mib? Jul 06 14:03:26 det er ikke protocoll feil :) Jul 06 14:03:42 Hva er byte hitratio forresetn? Jul 06 14:03:52 det tror jeg ikke vi mÃ¥ler Jul 06 14:03:55 enda Jul 06 14:03:59 andersb: den har jeg ikke stats pÃ¥ endnu. Jul 06 14:04:22 ok, forrige crash ligner en 4k+ HTTP header... Jul 06 14:04:27 (eller en kodefejl) Jul 06 14:06:03 andersb: prøv at øge vores andel til 20 Jul 06 14:06:26 hvilken vekt har hver av de andre cachene? 
Jul 06 14:06:49 20 og en med 40 Jul 06 14:07:50 gjort Jul 06 14:08:59 440 req/s 43mbit/s Jul 06 14:09:17 bewm Jul 06 14:09:18 bewm Jul 06 14:10:30 oj Jul 06 14:10:39 vi var oppe over 800 req/s et øyeblikk Jul 06 14:10:46 60mbit/sec Jul 06 14:10:52 og 90% idle :-) Jul 06 14:10:59 ingen swapping Jul 06 14:11:58 og vi bruker nesten ikke noe minne - 3 GB ledig fysisk RAM Jul 06 14:13:02 ca 60 syscall / req Jul 06 14:14:31 nice :) Jul 06 14:14:58 andersb: prøv at give os 40 Jul 06 14:17:26 gjort Jul 06 14:18:17 det ligner at trafikken falder her sidst pÃ¥ eftermiddagen... Jul 06 14:19:07 ja :) Jul 06 14:19:43 andersb: sÃ¥ skal vi nok ikke øge mere, nu nærmer vi os hvad 100Mbit ethernet kan klare. Jul 06 14:19:58 bra :) Jul 06 14:20:36 42mbit/s steady Jul 06 14:20:59 40 av 320? Jul 06 14:21:06 12,5% Jul 06 14:21:43 * nicholas (nicholas@nfsd.linpro.no) has joined #varnish Jul 06 14:22:00 det der cluster-noget bliver der da ikke brug for nÃ¥r vi har 87% idle Jul 06 14:23:05 hehe :) Jul 06 14:24:38 skal stille de andre ned litt for 48 er max Jul 06 14:24:57 jeg tror ikke vi skal gÃ¥ højere før vi har gigE Jul 06 14:25:14 4-5MB/s Jul 06 14:25:32 lastbalanserer backer off pÃ¥ 100 Mbit Jul 06 14:25:35 :) Jul 06 14:25:42 SÃ¥ vi kan kjøre nesten til taket. Jul 06 14:26:01 hvis det har noe poeng. Jul 06 14:26:09 crash :) Jul 06 14:27:33 bewm Jul 06 14:29:08 Stilt inn alle pÃ¥ weight 5 Jul 06 14:29:17 bortsett fra 1 som er 10 Jul 06 14:29:20 varnish er 5 Jul 06 14:29:24 sÃ¥ giv os 20 Jul 06 14:29:51 gjort Jul 06 14:30:58 vi fÃ¥r kun 300 req/s Jul 06 14:31:04 Ahh der skete noget. Jul 06 14:32:41 ok, ved denne last bliver backend connections et problem, jeg har set dns fejl og connection refused Jul 06 14:33:10 dns fejl Jul 06 14:33:21 okay, pek den mot 10.0.2.5 Jul 06 14:33:28 det er layer 2 squid cache Jul 06 14:33:35 morro Ã¥ teste det og. Jul 06 14:33:54 det gør jeg næste gang den falder Jul 06 14:34:48 jeg kunne jo ogsÃ¥ bare give leonors IP# istedet... men nu kører vi imod squid Jul 06 14:36:05 ja, gi leonora IP det er sikkert bedre. Eller det kan jo være fint Ã¥ teste mot squid og :) Jul 06 14:39:04 nu kører vi med leonora's IP# Jul 06 14:39:33 nu kører vi med leonora's *rigtige* IP# Jul 06 14:41:20 Nu er vi færdige med det her 100Mbit/s ethernet, kan vi fÃ¥ et til ? :-) Jul 06 14:41:42 lol :) Jul 06 14:42:00 For Ã¥ si det slik. Det tar ikke mange dagene før Gig switch er bestilt :) Jul 06 14:43:05 bewm Jul 06 14:43:13 ok, jeg synes vi skal stoppe her. Jul 06 14:43:41 jepp, foredrag om 15 min Jul 06 14:43:57 jepp Jul 06 14:44:23 disabled server Jul 06 14:45:29 dette har vært en veldig bra dag. Jul 06 14:45:49 hva skal vi finne pÃ¥ i morgen? skifte ut hele Squid-riggen med en enkelt Varnish-boks? ;) Jul 06 14:45:53 lol Jul 06 14:46:15 * EvilDES mÃ¥ begynne Ã¥ sette i stand til foredraget Jul 06 14:46:17 da mÃ¥ jeg har Gig switch. Eller sÃ¥ kan være bære med en HP maskin Ã¥ koble rett pÃ¥ lastbal :) Jul 06 14:46:22 kan vi ikke nøjes med en halv varnish box ? Jul 06 14:46:41 vi mÃ¥ ha begge halvdeler for failover Jul 06 14:47:01 :) Jul 06 14:47:14 kan faile tilbake til de andre. Jul 06 14:47:25 Jeg klarer ikke holde meg hjemme. Jul 06 14:47:33 Jeg kommer oppover om litt :) Jul 06 14:47:39 Ringer pÃ¥. 
Jul 06 14:48:19 mÃ¥ gÃ¥ en tur nÃ¥ :) Jul 06 14:48:29 * andersb has quit (BitchX: no additives or preservatives) Jul 06 14:49:44 http://www.des.no/varnish/ **** ENDING LOGGING AT Thu Jul 6 14:52:04 2006 *phk* varnish-7.5.0/doc/sphinx/phk/gzip.rst000066400000000000000000000134121457605730600175560ustar00rootroot00000000000000.. Copyright (c) 2011-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_gzip: ======================================= How GZIP, and GZIP+ESI works in Varnish ======================================= First of all, everything you read about GZIP here, is controlled by the parameter: http_gzip_support Which defaults to "on" if you do not want Varnish to try to be smart about compression, set it to "off" instead. What does http_gzip_support do ------------------------------ A request which is sent into 'pipe' or 'pass' mode from vcl_recv{} will not experience any difference, this processing only affects cache hit/miss requests. Unless vcl_recv{} results in "pipe" or "pass", we determine if the client is capable of receiving gzip'ed content. The test amounts to: Is there a Accept-Encoding header that mentions gzip, and if is has a q=# number, is it larger than zero. Clients which can do gzip, gets their header rewritten to: Accept-Encoding: gzip And clients which do not support gzip gets their Accept-Encoding header removed. This ensures conformity with respect to creating Vary: strings during object creation. During lookup, we ignore any "Accept-encoding" in objects Vary: strings, to avoid having a gzip and gunzip'ed version of the object, varnish can gunzip on demand. (We implement this bit of magic at lookup time, so that any objects stored in persistent storage can be used with or without gzip support enabled.) Varnish will not do any other types of compressions than gzip, in particular we will not do deflate, as there are browser bugs in that case. Before vcl_miss{} is called, the backend requests Accept-Encoding is always set to: Accept-Encoding: gzip Even if this particular client does not support To always entice the backend into sending us gzip'ed content. Varnish will not gzip any content on its own (but see below), we trust the backend to know what content can be sensibly gzip'ed (html) and what can not (jpeg) If in vcl_backend_response{} we find out that we are trying to deliver a gzip'ed object to a client that has not indicated willingness to receive gzip, we will ungzip the object during deliver. Tuning, tweaking and frobbing ----------------------------- In vcl_recv{} you have a chance to modify the client's Accept-Encoding: header before anything else happens. In vcl_pass{} the clients Accept-Encoding header is copied to the backend request unchanged. Even if the client does not support gzip, you can force the A-C header to "gzip" to save bandwidth between the backend and varnish, varnish will gunzip the object before delivering to the client. In vcl_miss{} you can remove the "Accept-Encoding: gzip" header, if you do not want the backend to gzip this object. In vcl_backend_response{} two new variables allow you to modify the gzip-ness of objects during fetch: set beresp.do_gunzip = true; Will make varnish gunzip an already gzip'ed object from the backend during fetch. (I have no idea why/when you would use this...) set beresp.do_gzip = true; Will make varnish gzip the object during fetch from the backend, provided the backend didn't send us a gzip'ed object. 
Remember that a lot of content types cannot sensibly be gziped, most notably compressed image formats like jpeg, png and similar, so a typical use would be:: sub vcl_backend_response { if (bereq.url ~ "html$") { set beresp.do_gzip = true; } } GZIP and ESI ------------ First, note the new syntax for activating ESI:: sub vcl_backend_response { set beresp.do_esi = true; } In theory, and hopefully in practice, all you read above should apply also when you enable ESI, if not it is a bug you should report. But things are vastly more complicated now. What happens for instance, when the backend sends a gzip'ed object we ESI process it and it includes another object which is not gzip'ed, and we want to send the result gziped to the client ? Things can get really hairy here, so let me explain it in stages. Assume we have a ungzipped object we want to ESI process. The ESI parser will run through the object looking for the various magic strings and produce a byte-stream we call the "VEC" for Varnish ESI Codes. The VEC contains instructions like "skip 234 bytes", "deliver 12919 bytes", "include /foobar", "deliver 122 bytes" etc and it is stored with the object. When we deliver an object, and it has a VEC, special esi-delivery code interprets the VEC string and sends the output to the client as ordered. When the VEC says "include /foobar" we do what amounts to a restart with the new URL and possibly Host: header, and call vcl_recv{} etc. You can tell that you are in an ESI include by examining the 'req.esi_level' variable in VCL. The ESI-parsed object is stored gzip'ed under the same conditions as above: If the backend sends gzip'ed and VCL did not ask for do_gunzip, or if the backend sends ungzip'ed and VCL asked for do_gzip. Please note that since we need to insert flush and reset points in the gzip file, it will be slightly larger than a normal gzip file of the same object. When we encounter gzip'ed include objects which should not be, we gunzip them, but when we encounter gunzip'ed objects which should be, we gzip them, but only at compression level zero. So in order to avoid unnecessary work, and in order to get maximum compression efficiency, you should:: sub vcl_miss { if (object needs ESI processing) { unset req.http.accept-encoding; } } sub vcl_backend_response { if (object needs ESI processing) { set beresp.do_esi = true; set beresp.do_gzip = true; } } So that the backend sends these objects uncompressed to varnish. You should also attempt to make sure that all objects which are esi:included are gziped, either by making the backend do it or by making varnish do it. varnish-7.5.0/doc/sphinx/phk/h2againagainagain.rst000066400000000000000000000122021457605730600221120ustar00rootroot00000000000000.. _phk_h2_again_again_again: On the deck-chairs of HTTP/2 ============================ Last week some people found out that the HTTP/2 protocol is not so fantastic when it comes to Denial-of-Service attacks. Funny that. The DoS vulnerability "found" last week, and proudly declared "zero-day" in the heroic sagas of supreme DevOps-ness, can be found in section 6.5.2, page 38 bottom, of RFC7540, from 2015: .. code-block:: none SETTINGS_MAX_CONCURRENT_STREAMS (0x3): Indicates the maximum number of concurrent streams that the sender will allow. This limit is directional: it applies to the number of streams that the sender permits the receiver to create. Initially, there is no limit to this value. It is recommended that this value be no smaller than 100, so as to not unnecessarily limit parallelism. 
Long before that RFC was made official, some of us warned about ``there is no limit``, but that would not be a problem because "We have enough CPU's for that", we were told, mainly by the people from the very large company who pushed H2 down our throat. Overall HTTP/2 has been a solid disappointment, at least if you believed any of the lofty promises and hype used to market it. Yes, we got fewer TCP connections, which is nice when there are only sixty-something thousand port numbers available, and yes we got some parallelism per TCP connection. Of course the price for that parallelism is that a dropped packet no longer delays a single resource: Now it delays *everything*. During the ratification period, this got the nickname "Head-Of-Line-Blocking" or just "HoL-blocking", but that would also not be a problem because "the ISPs would be forced to (finally) fix that" - said some guy with a Google Fiber connection to his Silly Valley home. But we also got, as people "discovered" last week, a lot more sensitive to DoS attacks - by design - because the entire point of H2 was to get as much work done as soon as possible. Then there is the entire section in the H2 RFC about "Stream Priority", intended to reduce the risk of people reading the words in the article until all the "monetization" were in place. As far as I know, nobody ever got that to do anything useful, unless they had somebody standing on the toes 24*7, pointing their nose at the ever-moving moon, while reciting Chaucer from memory. I think all browsers just ignore it now. Oh… and "PUSH": The against-all-principles reverse primitive, so the server could tell the browser, that come hell or high water, you will need these advertisements right away. Nobody got that working either. But my all time personal favorite is this one: The static HPACK compression table contains entries for the headers ``proxy-authentication`` and ``proxy-authorization``, which by definition can never appear in a H2 connection. But they were in some random dataset somebody used to construct the table, and "it was far too late to change that now", because we were in a hurry to get H2 deployed. We have a saying in Danish: »Hastværk er lastværk«, which roughly translates to »Hurry and be Sorry«, and well, yeah... (If you want to read what I thought at the time, this is a draft I never completed because I realized that H2 was just going to be a rubber-stamping exercise: https://phk.freebsd.dk/sagas/httpbis/) Once it became clear that H2 would happen, DoS vulnerabilities and all, I dialed down my complaining about the DoS problems, partly because I saw no reason to actively tell the script-kiddies and criminals what to do, but mostly because clearly nobody was listening: "It's a bypass. You've got to build bypasses." Somewhat reluctantly we implemented H2 in Varnish, because people told us they really, really would need this, it was going to be a checkbox item for the C-team once they read about it in the WSJ, and all the cool kids already had it &c &c. We didn't do a bad job of it, but we could probably have done it even better, if we had felt it would worth it. But given that the ink on H2 was barely dry before QUIC was being launched to replace it, and given that DoS vulnerabilities were literally written into the standard, we figured that H2 was unlikely to overtake the hot plate and the deep water as Inventions of The Century, and economized our resources accordingly. I am pleasantly surprised that it took the bad guys this long to weaponize H2. 
Yes, there are 100 pages in the main RFC and neither Hemmingway or Prachett were on the team, but eight years ? Of course there is no knowing how long time it has been a secret weapon in some country's arsenal of "cyber weapons". But now that the bad guys have found it, and weaponized it, what do we do ? My advice: Unless you have solid numbers to show that H2 is truly improving things for you and your clients, you should just turn it off. Remember to also remove it from the ALPN string in hitch or whatever TLS off-loader you use. If for some reason you cannot turn H2 off, we are implementing some parameters which can help mitigate the DoS attacks, and we will roll new releases to bring those to you. But other than that, please do not expect us to spend a lot of time rearranging the deck-chairs of HTTP/2. */phk* varnish-7.5.0/doc/sphinx/phk/http20.rst000066400000000000000000000276101457605730600177330ustar00rootroot00000000000000.. Copyright (c) 2012-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_http20_lack_of_interest: ====================================== Why HTTP/2.0 does not seem interesting ====================================== This is the email I sent to the IETF HTTP Working Group:: From: Poul-Henning Kamp Subject: HTTP/2 Expression of luke-warm interest: Varnish To: HTTP Working Group Message-Id: <41677.1342136900@critter.freebsd.dk> Date: Thu, 12 Jul 2012 23:48:20 GMT This is Varnish' response to the call for expression of interest in HTTP/2[1]. Varnish ------- Presently Varnish[2] only implements a subset of HTTP/1.1 consistent with its hybrid/dual "http-server" / "http-proxy" role. I cannot at this point say much about what Varnish will or will not implement protocol wise in the future. Our general policy is to only add protocols if we can do a better job than the alternative, which is why we have not implemented HTTPS for instance. Should the outcome of the HTTP/2.0 effort result in a protocol which gains traction, Varnish will probably implement it, but we are unlikely to become an early implementation, given the current proposals at the table. Why I'm not impressed --------------------- I have read all, and participated in one, of the three proposals presently on the table. Overall, I find all three proposals are focused on solving yesteryears problems, rather than on creating a protocol that stands a chance to last us the next 20 years. Each proposal comes out of a particular "camp" and therefore all seem to suffer a certain amount from tunnel-vision. It is my considered opinion that none of the proposals have what it will take to replace HTTP/1.1 in practice. What if they made a new protocol, and nobody used it ? ------------------------------------------------------ We have learned, painfully, that an IPv6 which is only marginally better than IPv4 and which offers no tangible benefit for the people who have the cost/trouble of the upgrade, does not penetrate the network on its own, and barely even on governments mandate. We have also learned that a protocol which delivers the goods can replace all competition in virtually no time. See for instance how SSH replaced TELNET, REXEC, RSH, SUPDUP, and to a large extent KERBEROS, in a matter of a few years. Or I might add, how HTTP replaced GOPHER[3]. HTTP/1.1 is arguably in the top-five most used protocols, after IP, TCP, UDP and, sadly, ICMP, and therefore coming up with a replacement should be approached humbly. 
Beating HTTP/1.1 ---------------- Fortunately, there are many ways to improve over HTTP/1.1, which lacks support for several widely used features, and sports many trouble-causing weeds, both of which are ripe for HTTP/2.0 to pounce on. Most notably HTTP/1.1 lacks a working session/endpoint-identity facility, a shortcoming which people have pasted over with the ill-conceived Cookie hack. Cookies are, as the EU commission correctly noted, fundamentally flawed, because they store potentially sensitive information on whatever computer the user happens to use, and as a result of various abuses and incompetences, EU felt compelled to legislate a "notice and announce" policy for HTTP-cookies. But it doesn't stop there: The information stored in cookies have potentially very high value for the HTTP server, and because the server has no control over the integrity of the storage, we are now seeing cookies being crypto-signed, to prevent forgeries. The term "bass ackwards" comes to mind. Cookies are also one of the main wasters of bandwidth, disabling caching by default, sending lots of cookies were they are are not needed, which made many sites register separate domains for image content, to "save" bandwidth by avoiding cookies. The term "not really helping" also comes to mind. In my view, HTTP/2.0 should kill Cookies as a concept, and replace it with a session/identity facility, which makes it easier to do things right with HTTP/2.0 than with HTTP/1.1. Being able to be "automatically in compliance" by using HTTP/2.0 no matter how big dick-heads your advertisers are or how incompetent your web-developers are, would be a big selling point for HTTP/2.0 over HTTP/1.1. However, as I read them, none of the three proposals try to address, much less remedy, this situation, nor for that matter any of the many other issues or troubles with HTTP/1.x. What's even worse, they are all additive proposals, which add a new layer of complexity without removing any of the old complexity from the protocol. My conclusion is that HTTP/2.0 is really just a grandiose name for HTTP/1.2: An attempt to smooth out some sharp corners, to save a bit of bandwidth, but not get anywhere near all the architectural problems of HTTP/1.1 and to preserve faithfully its heritage of badly thought out sedimentary hacks. And therefore, I don't see much chance that the current crop of HTTP/2.0 proposals will fare significantly better than IPv6 with respect to adoption. HTTP Routers ------------ One particular hot-spot in the HTTP world these days is the "load-balancer" or as I prefer to call it, the "HTTP router". These boxes sit at the DNS resolved IP numbers and distributes client requests to a farm of HTTP servers, based on simple criteria such as "Host:", URI patterns and/or server availability, sometimes with an added twist of geo-location[4]. HTTP routers see very high traffic densities, the highest traffic densities, because they are the focal point of DoS mitigation, flash mobs and special event traffic spikes. In the time frame where HTTP/2.0 will become standardized, HTTP routers will routinely deal with 40Gbit/s traffic and people will start to architect for 1Tbit/s traffic. HTTP routers are usually only interested in a small part of the HTTP request and barely in the response at all, usually only the status code. 
The demands for bandwidth efficiency has made makers of these devices take many unwarranted shortcuts, for instance assuming that requests always start on a packet boundary, "nulling out" HTTP headers by changing the first character and so on. Whatever HTTP/2.0 becomes, I strongly urge IETF and the WG to formally recognize the role of HTTP routers, and to actively design the protocol to make life easier for HTTP routers, so that they can fulfill their job, while being standards compliant. The need for HTTP routers does not disappear just because HTTPS is employed, and serious thought should be turned to the question of mixing HTTP and HTTPS traffic on the same TCP connection, while allowing a HTTP router on the server side to correctly distribute requests to different servers. One simple way to gain a lot of benefit for little cost in this area, would be to assign "flow-labels" which each are restricted to one particular Host: header, allowing HTTP routers to only examine the first request on each flow. SPDY ---- SPDY has come a long way, and has served as a very worthwhile proof of concept prototype, to document that there are gains to be had. But as Frederick P. Brooks admonishes us: Always throw the prototype away and start over, because you will throw it away eventually, and doing so early saves time and effort. Overall, I find the design approach taken in SPDY deeply flawed. For instance identifying the standardized HTTP headers, by a 4-byte length and textual name, and then applying a deflate compressor to save bandwidth is totally at odds with the job of HTTP routers which need to quickly extract the Host: header in order to route the traffic, preferably without committing extensive resources to each request. It is also not at all clear if the built-in dictionary is well researched or just happens to work well for some subset of present day websites, and at the very least some kind of versioning of this dictionary should be incorporated. It is still unclear for me if or how SPDY can be used on TCP port 80 or if it will need a WKS allocation of its own, which would open a ton of issues with firewalling, filtering and proxying during deployment. (This is one of the things which makes it hard to avoid the feeling that SPDY really wants to do away with all the "middle-men") With my security-analyst hat on, I see a lot of DoS potential in the SPDY protocol, many ways in which the client can make the server expend resources, and foresee a lot of complexity in implementing the server side to mitigate and deflect malicious traffic. Server Push breaks the HTTP transaction model, and opens a pile of cans of security and privacy issues, which would not be sneaked in during the design of a transport-encoding for HTTP/1+ traffic, but rather be standardized as an independent and well analysed extension to HTTP in general. HTTP Speed+Mobility ------------------- Is really just SPDY with WebSockets underneath. I'm really not sure I see any benefit to that, except that the encoding chosen is marginally more efficient to implement in hardware than SPDY. I have not understood why it has "mobility" in the name, a word which only makes an appearance in the ID as part of the name. If the use of the word "mobility" only refers only to bandwidth usage, I would call its use borderline-deceptive. If it covers session stability across IP# changes for mobile devices, I have missed it in my reading. 
draft-tarreau-httpbis-network-friendly-00 ----------------------------------------- I have participated a little bit in this draft initially, but it uses a number of concepts which I think are very problematic for high performance (as in 1Tbit/s) implementations, for instance variant-size length fields etc. I do think the proposal is much better than the other two, taking a much more fundamental view of the task, and if for no other reason, because it takes an approach to bandwidth-saving based on enumeration and repeat markers, rather than throwing everything after deflate and hope for a miracle. I think this protocol is the best basis to start from, but like the other two, it has a long way to go, before it can truly earn the name HTTP/2.0. Conclusion ---------- Overall, I don't see any of the three proposals offer anything that will make the majority of web-sites go "Ohh we've been waiting for that!" Bigger sites will be enticed by small bandwidth savings, but the majority of the HTTP users will see scant or no net positive benefit if one or more of these three proposals were to become HTTP/2.0 Considering how sketchy the HTTP/1.1 interop is described it is hard to estimate how much trouble (as in: "Why doesn't this website work ?") their deployment will cause, nor is it entirely clear to what extent the experience with SPDY is representative of a wider deployment or only of 'flying under the radar' with respect to people with an interest in intercepting HTTP traffic. Given the role of HTTP/1.1 in the net, I fear that the current rush to push out a HTTP/2.0 by purely additive means is badly misguided, and approaching a critical mass which will delay or prevent adoption on its own. At the end of the day, a HTTP request or a HTTP response is just some metadata and an optional chunk of bytes as body, and if it already takes 700 pages to standardize that, and HTTP/2.0 will add another 100 pages to it, we're clearly doing something wrong. I think it would be far better to start from scratch, look at what HTTP/2.0 should actually do, and then design a simple, efficient and future proof protocol to do just that, and leave behind all the aggregations of badly thought out hacks of HTTP/1.1. But to the extent that the WG produces a HTTP/2.0 protocol which people will start to use, the Varnish project will be interested. Poul-Henning Kamp Author of Varnish [1] http://trac.tools.ietf.org/wg/httpbis/trac/wiki/Http2CfI [2] https://www.varnish-cache.org/ [3] Yes, I'm that old. [4] Which is really a transport level job, but it was left out of IPv6 along with other useful features, to not delay adoption[5]. [5] No, I'm not kidding. varnish-7.5.0/doc/sphinx/phk/index.rst000066400000000000000000000017251457605730600177200ustar00rootroot00000000000000.. Copyright (c) 2010-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk: Poul-Hennings random outbursts ============================== You may or may not want to know what Poul-Henning thinks. .. 
toctree:: :maxdepth: 1 h2againagainagain.rst cheri7.rst cheri6.rst cheri5.rst cheri4.rst cheri3.rst cheri2.rst cheri1.rst routine.rst 503aroundtheworld.rst legacy.rst ip_address.rst vdd19q3.rst quic.rst VSV00003.rst patent.rst lucky.rst apispaces.rst VSV00001.rst somethinghappened.rst trialerror.rst farfaraway.rst thatslow.rst firstdesign.rst 10goingon50.rst brinch-hansens-arrows.rst ssl_again.rst persistent.rst dough.rst wanton_destruction.rst spdy.rst http20.rst varnish_does_not_hash.rst thetoolsweworkwith.rst three-zero.rst ssl.rst gzip.rst vcl_expr.rst ipv6suckage.rst backends.rst platforms.rst barriers.rst thoughts.rst autocrap.rst sphinx.rst notes.rst varnish-7.5.0/doc/sphinx/phk/ip_address.rst000066400000000000000000000105351457605730600207250ustar00rootroot00000000000000.. Copyright (c) 2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_ip_address: ====================================== IP Addresses - A long expected problem ====================================== I'm old enough to remember `HOSTS.TXT` and the introduction of the DNS system. Those were the days when you got a class B network by sending a polite letter to California, getting a polite letter back, and then, some months later, when `RFC1166 INTERNET NUMBERS `_ arrived with the semi-annual packet of printed RFCs, find out that the letter had a typo and you had configured all of the European Parliaments 1200 computers on 136.172/16 instead of 136.173/16. But things were not simpler, if anything they were far more complex, because TCP/IP was not, as today, the only protocol that mattered. In addition to TCP/IP, there were IBM's SNA, Digitals DecNet, ApolloRing, Banyan/VINES, Novell NetWare, X.21, X.25, X.75, and the whole CCITT-OSI-"Intelligent Network" telecom disaster that never got off the ground. This is why DNS packets have a `class` field which can be set to `Hesiod` or `CHAOS` in addition to `the Internet`: The idea was that all the different protocols would get a number each, and we would have "The One Directory To Rule Them All". Largely because of this, new and "protocol agnostic" lookup functions were designed: `getaddrinfo(3)` and `getnameinfo(3)`, but of course for IP they were supposed to be backwards compatible because there were *thousands* of users out there already. This is why `telnet 80.1440` tries to connect to `80.0.5.160`, why `ping 0x7f000001` becomes `127.0.0.1` and `0127.0.0.1` becomes `87.0.0.1`. If you read the manual page for `getaddrinfo(3)` you will find that it does not tell you that, it merely says it `conforms to IEEE Std 1003.1`. But everybody knew what that was back in 1990, and nobody had firewalls anyway because Cheswick & Bellovin's book `Firewalls and Internet Security `_ was not published until 1994, so no worries ? As is often the case with 'designed for the future' the `getaddrinfo(3)` API instantly fossilized, hit by a freeze-ray in 'the Unix™ wars'. This is why, when IPv4 numbers started to look like a finite resource, and the old A-, B- and C- class networks got dissolved into Classless Inter-Domain Routing or "CIDR" netmasks of any random size, getaddrinfo(3) did not grow to be able to translate "192.168.61/23" into something useful. I believe there was also some lilliputian dispute about the fact that `192.168.61` would return `192.168.0.61` to stay backwards compatible, whereas `192.168.61/23` would return `192.168.61.0 + 255.255.254.0`.
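If you want to see these legacy parsing rules in action for yourself, a minimal sketch along the lines below reproduces the numeric examples above (this assumes a system where the BSD `inet_aton(3)` is available; whether `getaddrinfo(3)` accepts the same forms varies by platform, and neither of them will take the CIDR notation)::

    #include <stdio.h>
    #include <arpa/inet.h>

    int
    main(void)
    {
            /*
             * Legacy numeric forms, as described in the text:
             *   "80.1440"      -> 80.0.5.160   (last number fills the remaining bytes)
             *   "0x7f000001"   -> 127.0.0.1    (one 32 bit hex number)
             *   "0127.0.0.1"   -> 87.0.0.1     (leading zero means octal)
             *   "192.168.61"   -> 192.168.0.61 (backwards compatible shorthand)
             *   "192.168.61/23" is simply rejected.
             */
            const char *examples[] = {
                    "80.1440", "0x7f000001", "0127.0.0.1",
                    "192.168.61", "192.168.61/23"
            };
            struct in_addr a;
            unsigned i;

            for (i = 0; i < sizeof examples / sizeof examples[0]; i++) {
                    if (inet_aton(examples[i], &a))
                            printf("%-14s -> %s\n", examples[i], inet_ntoa(a));
                    else
                            printf("%-14s -> (rejected)\n", examples[i]);
            }
            return (0);
    }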
Because of this, Varnish uses `getaddrinfo(3)` everywhere but one single place: Parsing of ACL specifications in VCL. First we have to use our own parser to check if it is a CIDR entry and if not we ask `getaddrinfo(3)`. The reason for this rant, is that somebody noticed that `ping 0127.0.0.1` didn't go to `127.0.0.1` as they expected. That has just become CVE-2021-29418 and CVE-2021-28918 and will probably become a dozen more, once the CVE-trophy-hunters go to town. All IP number strings enter Varnish from trusted points, either as command line arguments (`-a`, `-b`, `-M` etc.), in the VCL source (`backend`, `acl` etc.) or as PROXYv1 header strings from the TLS-stripper in front of Varnish. Of course, VCL allows you to do pretty much anything, including:: if (std.ip(req.http.trustme) ~ important_acl) { ... } If you do something like that, you may want to a) Consider the wisdom of trusting IP#'s from strangers and b) Think about this "critical netmask problem". Otherwise, I do not expect this new "critical netmask problem" to result in any source code changes in Varnish. If and when the various UNIX-oid operating systems, and the smoking remains of the "serious UNIX industry", (IEEE ? The Austin Group ? The Open Group ? Whatever they are called these days) get their act together, and renovate the `getaddrinfo(3)` API, Varnish will automatically pick that up and use it. Should they, in a flash of enlightenment, also make `getaddrinfo(3)` useful for parsing these newfangled CIDR addresses we got in 1993, I will be more than happy to ditch `vcc_acl_try_netnotation()` too. Until next time, Poul-Henning, 2021-03-30 varnish-7.5.0/doc/sphinx/phk/ipv6suckage.rst000066400000000000000000000046601457605730600210410ustar00rootroot00000000000000.. Copyright (c) 2010-2011 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_ipv6suckage: ============ IPv6 Suckage ============ In my drawer full of cassette tapes, is a 6 tape collection published by Carl Malamuds "Internet Talk Radio", the first and by far the geekiest radio station on the internet. The tapes are from 1994 and the topic is "IPng", the IPv4 replacement that eventually became IPv6. To say that I am a bit jaded about IPv6 by now, is like accusing the pope of being religious. IPv4 addresses in numeric form, are written as 192.168.0.1 and to not confuse IPv6 with IPv4, it was decided in RFC1884 that IPv6 would use colons and groups of 16 bits, and because 128 bits are a lot of bits, the secret '::' trick was introduced, to suppress all the zero bits that we may not ever need anyway: 1080::8:800:200C:417A Colon was chosen because it was already used in MAC/ethernet addresses and did no damage there and it is not a troublesome metacharacter in shells. No worries. Most protocols have a Well Known Service number, TELNET is 23, SSH is 22 and HTTP is 80 so usually people will only have to care about the IP number. Except when they don't, for instance when they run more than one webserver on the same machine. No worries, says the power that controls what URLs look like, we will just stick the port number after the IP# with a colon: http://192.168.0.1:8080/... That obviously does not work with IPv6, so RFC3986 comes around and says "darn, we didn't think of that" and puts the IPV6 address in [...] giving us: http://[1080::8:800:200C:417A]:8080/ Remember that "harmless in shells" detail ? Yeah, sorry about that.
Now, there are also a RFC sanctioned API for translating a socket address into an ascii string, getnameinfo(), and if you tell it that you want a numeric return, you get a numeric return, and you don't even need to know if it is a IPv4 or IPv6 address in the first place. But it returns the IP# in one buffer and the port number in another, so if you want to format the sockaddr in the by RFC5952 recommended way (the same as RFC3986), you need to inspect the version field in the sockaddr to see if you should do "%s:%s", host, port or "[%s]:%s", host, port Careless standardization costs code, have I mentioned this before ? Varnish reports socket addresses as two fields: IP space PORT, now you know why. Until next time, Poul-Henning, 2010-08-24 varnish-7.5.0/doc/sphinx/phk/legacy.rst000066400000000000000000000057511457605730600200600ustar00rootroot00000000000000.. Copyright (c) 2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_legacy: ========= Legacy-.* ========= In the middle of an otherwise pleaseant conversation recently, the other person suddenly burst out that *"Varnish was part of our legacy software."* That stung a bit. But fair enough: They have been running varnish since 2009 or so. Neither Raymond's "New Hacker's Dictionary", nor the legacy publication it tried to replace, Kelly-Bootle's "The Devils DP Dictionary", define "legacy software". The former because the collective "we" did not bother with utter trivialities such as invoicing, the latter because back then people didnt abandon software. Tomorrow I will be sitting in a small hut in the middle of a field, trying to figure out what somebody could possibly have been thinking about, when 10 years ago they claimed to implement "V.42" while also using their own HDLC frame layout. "V.42" and "HDLC" are also a legacy at this point, but chances are you have used it: That was the hot way to do error-correction on modems, when dialing into a BBS or ISP in the 1990ies. I guess I should say "legacy-modems" ? Big-endianess, storing the bytes the sensible way for hex-dumps, is rapidly becoming legacy, as the final old HP and SUN irons are finally become eWaste. Objectively there is no difference of merit between little-endian and big-endian, the most successful computers architectures of all time picked equally one or the other, and the consolidation towards little-endian is driven more by *"It's actually not that important"* than by anything else. But we still have a bit of code which cares about endianess in Varnish, in particular in the imported `zlib` code. For a while I ran a CI client on a WLAN access point with a big-endian MIPS-processor. But with only 128MB RAM the spurious error rate caused too much noise. Nothing has been proclaimed "Legacy" more often and with more force, than the IBM mainframe, but they are still around, keeping the books balanced, as they have for half a century. And because they were born with variable length data types, IBM mainframes are big-endian, and because we in Varnish care about portability, you can also run Varnish Cache on your IBM mainframe: Thanks to Ingvar, there are "s390x" architecture Varnish packages if you need them. So I reached out to IBM's FOSS-outreach people and asked if we could borrow a cup of mainframe to run a CI-client, and before I knew it, the Varnish Cache Project had access to a virtual s390x machine somewhere in IBM's cloud. 
For once "In the Cloud" *literally* means "On somebody's mainframe" :-) I'm not up to date on IBM Mainframe technology, the last one I used was a 3090 in 1989, so I have no idea how much performance IBM has allocated to us, on what kind of hardware, or what it might cost. But it runs a full CI iteration on Varnish Cache in 3 minutes flat, making it one of the fastest CI-clients we have. Thanks IBM! Poul-Henning, 2021-05-24 varnish-7.5.0/doc/sphinx/phk/lucky.rst000066400000000000000000000143651457605730600177440ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_lucky: =================== Do you feel lucky ? =================== Whenever a corporate cotton-mouth says anything, their lawyers always make them add a footnote to the effect that *"Past performance is not predictive of future results."* so they do not get sued for being insufficiently clairvoyant. The lawyers are wrong. Past performance *is* predictive of future results: That is how we determine the odds. Just never forget that the roll of the dice is still pure luck. Or as author Neil Gaiman said it in `a commencement speech in 2012 `_: *»Often you will discover that the harder you work, and the more wisely you work, the luckier you get. But there is luck, and it helps.«* Approximately four million real websites use Varnish now and the number seems to grow by half a million sites per year, as it has been doing for the last 8 years. That took a lot of luck, and wisdom was probably also involved, but mostly it was lot of hard work by a lot of people. Wisdom is a tricky thing, the hypothesized "older—wiser" correlation is still in clinical testing and in the meantime Neil Gaiman suggests: *»So be wise, because the world needs more wisdom, and if you cannot be wise, pretend to be someone who is wise, and then just behave like they would.«* Works for me. ---- 2018 ---- Despite sucking in pretty much any other aspect, 2018 was a good year for Open Source Software in general and Varnish Cache in particular. People have finally started to understand that Free does not mean Gratis, and that quality software takes time and effort. From to the dot-com generation reinventing reproducible builds (like we had then in the 1980'ies) to the EU setting up a pot of `850M€ bug-bounties `_ [#f1]_, things are moving in the right direction with respect to software quality. In 2018 the Varnish Cache project had settled into our "March and September 15th" release strategy, and released 6.0 in March and 6.1 in September, as promised, and the next release will be out in ten weeks. We also had no security issues, and we have managed to keep the number of open issues and bug reports down. Writing it like that makes it sound boring, but with four million web sites depending on Varnish, boring is good thing. No news is indeed good news. --------------- 2019 and HTTP/3 --------------- The Next Big Thing in our world seems like it will be HTTP/3 ("The protocol formerly known as QUIC"), and I suspect it will drive much of our work in 2019. It is far too early to say anything about if, when or how, but I do spend a lot of time with pencil and paper, pretending to be somebody who is good at designing secure and efficient software. Around the time of the 2019-03-15 release we will gather for a VDD (Varnish Developer Day), and the big topic there will be HTTP/3, and then we will know and be ready to say something more detailed. 
I don't think it is realistic to roll out any kind of H3 support in the September release, that release will probably only contain some of the necessary preparatory reorganization, so expect to run production on 6.0 LTS for a while. --------------------- Varnish Moral License --------------------- I want to thank the companies who have paid for a `Varnish Moral License `_: * Fastly * Uplex * Varnish Software * Section.io * Globo The VML funding is why Varnish Cache is not on EU's hit-list and why another half million websites who started using Varnish in 2018 will not regret it. Much appreciated! ------- ENOLUCK ------- For me 2018 ended on a sour note, when my dear friend `Jacob Sparre Andersen `_ died from cancer a week before christmas. Society as such knows how to deal with deaths, and all sorts of procedures and rules kick in, to tie the loose ends up, respectfully and properly. The Internet is not there yet, people on the Internet have only just started dying, and there are not yet any automatic routines or generally perceived procedures for informing the people and communities who should know, or for tying up the loose ends, accounts, repositories and memberships on the Internet. But deaths happen, and I can tell you from personal experience that few things feel more awful, than having sent an email to somebody, to receive the reply from their heartbroken spouse, that you are many months too late [#f2]_. Jacob was not a major persona on the Internet, but between doing a lot of interesting stuff as a multi-discipline phd. in physics, being a really good Ada programmer, a huge Lego enthusiast, an incredibly helpful person *and* really *good* at helping, he had a lot of friends in many corners of the Internet. Jacob knew what was coming, and being his usual helpful self, he used the last few weeks to make a list of who to tell online, where things were stored, what the passwords were, and he even appointed a close friend to be his "digital executor", who will help his widow sort all these things out in the coming months. When people die in our age-bracket, they usually do not get a few weeks notice. If Jacob had been hit by a bus, his widow would have been stuck in an almost impossible digital situation, starting with the need to guess, well, pretty much everything, including the passwords. In honour of my helpful friend Jacob, and for the sake of your loved ones, please sit down tonight, and write your own list of digital who, what and where, including how to gain access to the necessary passwords, and file it away in a way where it will be found, if you run out of luck. Good luck! *phk* .. rubric:: Footnotes .. [#f1] I am not a big fan of bug-bounties, but I will grudgingly admit that wiser men than me, notably `Dan Geer `_, have proposed that tax-money be used to snatch the vulnerabilities up, before bad guys get hold of them, and they seem to have a point. .. [#f2] And it does not feel any less awful if the loved ones left behind try to fill the blanks by asking you how you knew each other and if you have any memories you could share with them. varnish-7.5.0/doc/sphinx/phk/notes.rst000066400000000000000000000251321457605730600177370ustar00rootroot00000000000000.. Copyright (c) 2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. 
_phk_notes: ======================== Notes from the Architect ======================== Once you start working with the Varnish source code, you will notice that Varnish is not your average run of the mill application. That is not a coincidence. I have spent many years working on the FreeBSD kernel, and only rarely did I venture into userland programming, but when I had occasion to do so, I invariably found that people programmed like it was still 1975. So when I was approached about the Varnish project I wasn't really interested until I realized that this would be a good opportunity to try to put some of all my knowledge of how hardware and kernels work to good use, and now that we have reached alpha stage, I can say I have really enjoyed it. So what's wrong with 1975 programming ? --------------------------------------- The really short answer is that computers do not have two kinds of storage any more. It used to be that you had the primary store, and it was anything from acoustic delaylines filled with mercury via small magnetic doughnuts via transistor flip-flops to dynamic RAM. And then there was the secondary store, paper tape, magnetic tape, disk drives the size of houses, then the size of washing machines and these days so small that girls get disappointed if they think they got hold of something else than the MP3 player you had in your pocket. And people program this way. They have variables in "memory" and move data to and from "disk". Take Squid for instance, a 1975 program if I ever saw one: You tell it how much RAM it can use and how much disk it can use. It will then spend inordinate amounts of time keeping track of what HTTP objects are in RAM and which are on disk and it will move them forth and back depending on traffic patterns. Well, today computers really only have one kind of storage, and it is usually some sort of disk, the operating system and the virtual memory management hardware has converted the RAM to a cache for the disk storage. So what happens with squids elaborate memory management is that it gets into fights with the kernels elaborate memory management, and like any civil war, that never gets anything done. What happens is this: Squid creates a HTTP object in "RAM" and it gets used some times rapidly after creation. Then after some time it gets no more hits and the kernel notices this. Then somebody tries to get memory from the kernel for something and the kernel decides to push those unused pages of memory out to swap space and use the (cache-RAM) more sensibly for some data which is actually used by a program. This however, is done without squid knowing about it. Squid still thinks that these http objects are in RAM, and they will be, the very second it tries to access them, but until then, the RAM is used for something productive. This is what Virtual Memory is all about. If squid did nothing else, things would be fine, but this is where the 1975 programming kicks in. After some time, squid will also notice that these objects are unused, and it decides to move them to disk so the RAM can be used for more busy data. So squid goes out, creates a file and then it writes the http objects to the file. Here we switch to the high-speed camera: Squid calls write(2), the address it gives is a "virtual address" and the kernel has it marked as "not at home". So the CPU hardwares paging unit will raise a trap, a sort of interrupt to the operating system telling it "fix the memory please".
The kernel tries to find a free page, if there are none, it will take a little used page from somewhere, likely another little used squid object, write it to the paging poll space on the disk (the "swap area") when that write completes, it will read from another place in the paging pool the data it "paged out" into the now unused RAM page, fix up the paging tables, and retry the instruction which failed. Squid knows nothing about this, for squid it was just a single normal memory acces. So now squid has the object in a page in RAM and written to the disk two places: one copy in the operating systems paging space and one copy in the filesystem. Squid now uses this RAM for something else but after some time, the HTTP object gets a hit, so squid needs it back. First squid needs some RAM, so it may decide to push another HTTP object out to disk (repeat above), then it reads the filesystem file back into RAM, and then it sends the data on the network connections socket. Did any of that sound like wasted work to you ? Here is how Varnish does it: Varnish allocate some virtual memory, it tells the operating system to back this memory with space from a disk file. When it needs to send the object to a client, it simply refers to that piece of virtual memory and leaves the rest to the kernel. If/when the kernel decides it needs to use RAM for something else, the page will get written to the backing file and the RAM page reused elsewhere. When Varnish next time refers to the virtual memory, the operating system will find a RAM page, possibly freeing one, and read the contents in from the backing file. And that's it. Varnish doesn't really try to control what is cached in RAM and what is not, the kernel has code and hardware support to do a good job at that, and it does a good job. Varnish also only has a single file on the disk whereas squid puts one object in its own separate file. The HTTP objects are not needed as filesystem objects, so there is no point in wasting time in the filesystem name space (directories, filenames and all that) for each object, all we need to have in Varnish is a pointer into virtual memory and a length, the kernel does the rest. Virtual memory was meant to make it easier to program when data was larger than the physical memory, but people have still not caught on. More caches ----------- But there are more caches around, the silicon mafia has more or less stalled at 4GHz CPU clock and to get even that far they have had to put level 1, 2 and sometimes 3 caches between the CPU and the RAM (which is the level 4 cache), there are also things like write buffers, pipeline and page-mode fetches involved, all to make it a tad less slow to pick up something from memory. And since they have hit the 4GHz limit, but decreasing silicon feature sizes give them more and more transistors to work with, multi-cpu designs have become the fancy of the world, despite the fact that they suck as a programming model. Multi-CPU systems is nothing new, but writing programs that use more than one CPU at a time has always been tricky and it still is. Writing programs that perform well on multi-CPU systems is even trickier. Imagine I have two statistics counters: unsigned n_foo; unsigned n_bar; So one CPU is chugging along and has to execute n_foo++ To do that, it read n_foo and then write n_foo back. It may or may not involve a load into a CPU register, but that is not important. To read a memory location means to check if we have it in the CPUs level 1 cache. 
It is unlikely to be unless it is very frequently used. Next check the level two cache, and let us assume that is a miss as well. If this is a single CPU system, the game ends here, we pick it out of RAM and move on. On a Multi-CPU system, and it doesn't matter if the CPUs share a socket or have their own, we first have to check if any of the other CPUs have a modified copy of n_foo stored in their caches, so a special bus-transaction goes out to find this out, if some cpu comes back and says "yeah, I have it" that cpu gets to write it to RAM. On good hardware designs, our CPU will listen in on the bus during that write operation, on bad designs it will have to do a memory read afterwards. Now the CPU can increment the value of n_foo, and write it back. But it is unlikely to go directly back to memory, we might need it again quickly, so the modified value gets stored in our own L1 cache and then at some point, it will end up in RAM. Now imagine that another CPU wants to n_bar++ at the same time, can it do that ? No. Caches operate not on bytes but on some "linesize" of bytes, typically from 8 to 128 bytes in each line. So since the first cpu was busy dealing with n_foo, the second CPU will be trying to grab the same cache-line, so it will have to wait, even though it is a different variable. Starting to get the idea ? Yes, it's ugly. How do we cope ? ---------------- Avoid memory operations if at all possible. Here are some ways Varnish tries to do that: When we need to handle a HTTP request or response, we have an array of pointers and a workspace. We do not call malloc(3) for each header. We call it once for the entire workspace and then we pick space for the headers from there. The nice thing about this is that we usually free the entire header in one go and we can do that simply by resetting a pointer to the start of the workspace. When we need to copy a HTTP header from one request to another (or from a response to another) we don't copy the string, we just copy the pointer to it. Provided we do not change or free the source headers, this is perfectly safe, a good example is copying from the client request to the request we will send to the backend. When the new header has a longer lifetime than the source, then we have to copy it. For instance when we store headers in a cached object. But in that case we build the new header in a workspace, and once we know how big it will be, we do a single malloc(3) to get the space and then we put the entire header in that space. We also try to reuse memory which is likely to be in the caches. The worker threads are used in "most recently busy" fashion, when a workerthread becomes free it goes to the front of the queue where it is most likely to get the next request, so that all the memory it already has cached, stack space, variables etc, can be reused while in the cache, instead of having the expensive fetches from RAM. We also give each worker thread a private set of variables it is likely to need, all allocated on the stack of the thread. That way we are certain that they occupy a page in RAM which none of the other CPUs will ever think about touching as long as this thread runs on its own CPU. That way they will not fight about the cachelines. If all this sounds foreign to you, let me just assure you that it works: we spend less than 18 system calls on serving a cache hit, and even many of those are calls to get timestamps for statistics.
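To make the workspace idea a bit more concrete, here is a minimal sketch of the pattern - not Varnish's actual workspace API, just an illustration of the principle: one allocation up front, a pointer that only moves forward, and "freeing" everything by rewinding that pointer::

    #include <stdlib.h>
    #include <string.h>

    /* A toy bump allocator, illustrating the principle only. */
    struct workspace {
            char *base;             /* start of the single allocation */
            char *free;             /* next free byte */
            char *end;              /* one past the last usable byte */
    };

    static int
    ws_init(struct workspace *ws, size_t len)
    {
            ws->base = malloc(len);
            if (ws->base == NULL)
                    return (-1);
            ws->free = ws->base;
            ws->end = ws->base + len;
            return (0);
    }

    static void *
    ws_alloc(struct workspace *ws, size_t len)
    {
            void *p;

            if ((size_t)(ws->end - ws->free) < len)
                    return (NULL);  /* workspace exhausted */
            p = ws->free;
            ws->free += len;
            return (p);
    }

    static void
    ws_reset(struct workspace *ws)
    {
            /* "Free" all the headers in one go: just rewind the pointer. */
            ws->free = ws->base;
    }

    int
    main(void)
    {
            struct workspace ws;
            char *hdr;

            if (ws_init(&ws, 4096))
                    return (1);
            hdr = ws_alloc(&ws, sizeof "Host: example.com");
            if (hdr != NULL)
                    memcpy(hdr, "Host: example.com", sizeof "Host: example.com");
            ws_reset(&ws);          /* no per-header free(3) calls */
            free(ws.base);
            return (0);
    }

The real workspaces in Varnish carry more machinery than this sketch, but the cost model is the point: one allocation per request, and no per-header book-keeping.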
These techniques are also nothing new, we have used them in the kernel for more than a decade, now it's your turn to learn them :-) So Welcome to Varnish, a 2006 architecture program. *phk* varnish-7.5.0/doc/sphinx/phk/patent.rst000066400000000000000000000210711457605730600201000ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_patent: A patently good idea ==================== When I was in USA, diplomas on the wall was very much a thing. I don't think I fully reverse-engineered the protocol for which diplomas would get hung and which would be filed away, apart from the unbreakable rule that, like it or not, anything your company handed out was mandatory on the office-wall, no matter how embarrasing. Our paediatrician had diplomas for five or six steps of her education. My favourite pizzeria had a diploma for "Authentic Italian Food" from a US organization suffering from fuzzy territorial perception. Co-workers had diplomas from their universities, OSHA, USAF, DoE, CalTrans and who knows what. But the gold-standard of diplomas, at least amongst the engineers, was having a US Patent on the wall, even if it only ever made them a single dollar in assignment fee. I asked one of them about his patent and he answered wryly: *"It proves to my boss and my mom that I had at least one unique idea in my career."* Personally I do not think the patent system does what people think it does, ie: protect the small inventor from big companies, so I have no patents to my name, and in fact no diplomas on my wall at all. But I still mentally carve a notch when I see one of my ideas being validated in some form. Containers and Zones are not jails, but they know, and I know, where they got the basic idea from, and that is plenty of validation for my ego. Today is Store Bededag in Denmark, loosely translated "All Prayers Day", by definition a friday and we, like many other danes, have eloped to the beach-house for a long weekend. But being self-employed I puttered around with VCC, the VCL compiler, this morning, and as a result, you will soon be able to say:: import vmod_with_impractically_long_name as v; (You can thank Dridi for suggesting that) My idea that Varnish would be configured in a Domain Specific Language compiled to native code is obviously one of my better, and about 10 years ago, that was becoming very obvious. In Norway `Varnish Software `_ were being spun out of the Redpill-Linpro company. Artur Bergman, one of the first Varnish Cache power users, who ran Wikias content delivery and hit our project like a blast-oven with ideas, patches, measurements, general good cheer and incredibly low tolerance for bull-shit, started the `Fastly CDN `_. Prior to that, I had done a bit of soul-searching myself, wondering if I should try to take Varnish and run with it? In conventional economic theory, I would have patented the VCL idea, and become as rich as the idea was good. But in all probable worlds, that would only have meant that the idea would be dead as a doornail, I would not have made any money from it, it would never have helped improve the web, and I would have wasted much more of my life in meetings than would be good for anybodys health. As if that wasn't enough, the very thought of having to hire somebody scared me, but not nearly as much as the realization that if I built a company with any number of employees, sooner or later I would have to fire someone again. Writing code? Yes. 
Running a growing company? No. The result of my soul-searching was this email to announce@ where I took myself out of the game: .. code-block:: text Subject: For the record: Varnish and Money From: Poul-Henning Kamp To: varnish-announce@varnish-cache.org Date: Fri Nov 19 14:03:22 CET 2010 Just so everybody know where I stand on this... Poul-Henning -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Introduction - ------------ As the main developer of the Varnish Software and the de-facto leader of the Varnish Open Source Project, it is my desire to see Varnish used and adopted as widely as possible. To the same ends, the founders of the Varnish Project chose the BSD license to facilitate commercial exploitation of Varnish in all forms, while protecting the independence of the Open Source Project. The BSD license is non-discriminatory, and makes no attempt to separate the good guys from the bad guys, and neither should it. The Varnish Project, as a community, is a different story. While the BSD license can guarantee that Varnish, as software, will always be available, a thriving Open Source Community takes a fair bit more effort to hold together. Nothing can rip apart an Open Source project faster than competing commercial interests playing dirty, and since Varnish has started to cause serious amounts of money to shift around, it is time to take this issue a bit more seriously. Non-competition pledge: - ----------------------- My interest in Varnish is developing capable quality software, and making a living at the same time. In addition to Varnish, I have some long time good customers for whom I do various weird things with computers and software, and since they have stuck with me and paid my bills, I will stick with them and send them more bills. The Varnish Moral License (VML) was drawn up to provide a money-stream that can fund my Varnish-habit, and it was designed as an "arms-length" construction to prevent it from taking control of the projects direction. Therefore acquiring a VML does not mean that you get to tell me what to do, or in which order I should do it. There is no "tit for tat" involved. The only thing you get out of the VML, is that the next version of Varnish will be better than the one we have now. Therefore: As long as I can keep my family fed, happy and warm this way, I will not enter any other commercial activity related to Varnish, and am more than happy to leave that field open to everybody and anybody, who wants to try their hand. Fairness pledge: - ---------------- As the de-facto leader of the Varnish community, I believe that the success or failure of open source rises and falls with the community which backs it up. In general, there is a tacit assumption, that you take something from the pot and you try put something back in the pot, each to his own means and abilities. And the pot has plenty that needs filling: From answers to newbies questions, bug-reports, patches, documentation, advocacy, VML funding, hosting VUG meetings, writing articles for magazines, HOW-TO's for blogs and so on, so this is no onerous demand for anybody. But the BSD license allows you to not participate in or contribute to the community, and there are special times and circumstances where that is the right thing, or even the only thing you can do, and I recognize that. Therefore: I will treat everybody, who do not contribute negatively to the Varnish community, equally and fairly, and try to foster cooperation and justly resolve conflicts to the best of my abilities. 
Policy on Gifts: - ---------------- People sometimes prefer to show their appreciation of Varnish by sending me gifts. I really love that But please understand, that any and gifts or other appreciations I may receive, from cartoons on my Amazon Wishlist, up to and including pre-owned tropical tax-shelter islands, with conveniently unlocked bank vaults filled with gold bars (one can always dream...), will all be received and interpreted the same way: As tokens of appreciation for deeds already done, and encouragement to me to keep doing what is right and best for Varnish in the future. Poul-Henning Kamp Signed with my PGP-key, November 19th, 2010, Slagelse, Denmark. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (FreeBSD) iEYEARECAAYFAkzmdRkACgkQlftZhnGqOJOJwwCffytQ5kGP+Grh2unpNIIw8G2R QcQAn18fGLT4Lx2ACBivtk5wEFy6fUcu =3V52 -----END PGP SIGNATURE----- -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk@FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. Today (20190517) Arturs `Fastly `_, company went public on the New York Stock Exchange, and went up from $16 to $24 in a matter of hours. So-called "financial analysts" write that as a consequence Fastly is now worth 2+ Billion Dollars. I can say with 100% certainty and honesty that there is no way I could *ever* have done that, that is entirely Arturs doing and I know and admire how hard he worked to make it happen. Congratulations to Artur and the Fastly Crew! But I will steal some of Arturs thunder, and point to Fastlys IPO as proof that at least once in my career, I had a unique idea worth a billion dollars :-) *phk* varnish-7.5.0/doc/sphinx/phk/persistent.rst000066400000000000000000000072671457605730600210200ustar00rootroot00000000000000.. Copyright (c) 2014-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_persistent: ==================== A persistent message ==================== This message is about -spersistent and why you should not use it, even though it is still present in Varnish 4.x. Starting with Varnish 6 it is only present when explicitly enabled at compile time. TL;DR: ------ Under narrow and ill defined circumstances, -spersistent works well, but in general it is more trouble than it is worth for you to run it, and we don't presently have the development resources to fix that. If you think you have these circumstances, you need to * call ``configure`` with ``--with-persistent-storage`` before compilation * use the storage engine name ``deprecated_persistent``, use a:: -sdeprecated_persistent, argument when starting varnish in order to use it. The long story -------------- When we added -spersistent, to Varnish, it was in response to, and sponsored by a specific set of customers who really wanted this. A persistent storage module is an entirely different kettle of vax than a non-persistent module, because of all the ugly consistency issues it raises. Let me give you an example. Imagine a cluster of some Varnish servers on which bans are used. Without persistent storage, if one of them goes down and comes back up, all the old cached objects are gone, and so are, by definition all the banned objects. 
With persistent storage, we not only have to store the still live bans with the cached objects, and keep the two painfully in sync, so the bans get revived with the objects, we also have to worry about missing bans during the downtime, since those might ban objects we will recover on startup. Ouch: Straight into database/filesystem consistency territory. But we knew that, and I thought I had a good strategy to deal with this. And in a sense I did. Varnish has the advantage over databases and filesystems that we can actually lose objects without it being a catastrophe. It would be better if we didn't, but we can simply ditch stuff which doesn't look consistent and we'll be safe. The strategy was to do a "Log Structured Filesystem", a once promising concept which soon proved very troublesome to implement well. Interestingly, today the ARM chip in your SSD most likely implements a LFS for wear-levelling, but with a vastly reduced feature set: All "files" are one sector long, filenames are integers and there are no subdirectories or rename operations. On the other hand, there is extra book-keeping about the state of the flash array. A LFS consists of two major components: The bit that reads and writes, which is pretty trivial, and the bit which makes space available which isn't. Initially we didn't even do the second part, because in varnish objects expire, and provided they do so fast enough, the space will magically make itself available. This worked well enough for our initial users, and they only used bans sporadically so that was cool too. In other words, a classic 20% effort, 80% benefit. Unfortunately we have not been able to find time and money for the other 80% effort which gives the last 20% benefit, and therefore -spersistent has ended up in limbo. Today we decided to officially deprecate -spersistent, and start warning people against using it, but we will leave it in the source code for now, in order to keep the interfaces necessary for a persistent storage working, in the hope that we will get to use them again later. So you can still use persistent storage, if you really want to, and if you know what you're doing, by using: -sdeprecated_persistent You've been warned. Poul-Henning, 2014-05-26 varnish-7.5.0/doc/sphinx/phk/platforms.rst000066400000000000000000000073051457605730600206200ustar00rootroot00000000000000.. Copyright (c) 2010-2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_platforms: ================= Picking platforms ================= Whenever you write Open Source Software, you have to make a choice of what platforms you are going to support. Generally you want to make your program as portable as possible and cover as many platforms, distros and weird computers as possible. But making your program run on everything is hard work, very hard work. For instance, did you know that: sizeof(void*) != sizeof(void * const) is legal in a ISO-C compliant environment ? Varnish `runs on a Nokia N900 `_ but I am not going to go out of my way to make sure that is always the case. To make sense for Varnish, a platform has to be able to deliver, both in terms of performance, but also in terms of the APIs we use to get that performance. In the FreeBSD project where I grew up, we ended up instituting platform-tiers, in an effort to document which platforms we cared about and which we did not love quite as much.
If we did the same for Varnish, the result would look something like: A - Platforms we care about --------------------------- We care about these platforms because our users use them and because they deliver a lot of bang for the buck with Varnish. These platforms are in our "tinderbox" tests, we use them ourselves and they pass all regression tests all the time. Platform specific bug reports gets acted on. *FreeBSD* *Linux* Obviously you can forget about running Varnish on your `WRT54G `_ but if you have a real computer, you can expect Varnish to work "ok or better" on any distro that has a package available. B - Platforms we try not to break --------------------------------- We try not to break these platforms, because they basically work, possibly with some footnotes or minor limitations, and they have an active userbase. We may or may not test on these platforms on a regular basis, or we may rely on contributors to alert us to problems. Platform specific bug reports without patches will likely live a quiet life. *Mac OS/X* *Solaris-decendants* (Oracle Solaris, OmniOS, Joyent SmartOS) Mac OS/X is regarded as a developer platform, not as a production platform. Solaris-decendants are regarded as a production platform. NetBSD, AIX and HP-UX are conceivably candidates for this level, but so far I have not heard much, if any, user interest. C - Platforms we tolerate ------------------------- We tolerate any other platform, as long as the burden of doing so is proportional to the benefit to the Varnish community. Do not file bug reports specific to these platforms without attaching a patch that solves the problem, we will just close it. For now, anything else goes here, certainly the N900 and the WRT54G. I'm afraid I have to put OpenBSD here for now, it is seriously behind on socket APIs and working around those issues is just not worth the effort. If people send us a small non-intrusive patches that makes Varnish run on these platforms, we'll take it. If they send us patches that reorganizes everything, hurts code readability, quality or just generally do not satisfy our taste, they get told that thanks, but no thanks. Is that it ? Abandon all hope etc. ? ------------------------------------- These tiers are not static, if for some reason Varnish suddenly becomes a mandatory accessory to some technically sensible platform, (zOS anyone ?) that platform will get upgraded. If the pessimists are right about Oracles intentions, Solaris may get demoted. Until next time, Poul-Henning, 2010-08-03 Edited Nils, 2014-03-18 with Poul-Hennings consent varnish-7.5.0/doc/sphinx/phk/quic.rst000066400000000000000000000166511457605730600175560ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_quick_osi: QUIC visions of OSI =================== New York Times Style Magazine had an article last week `about the Italian town Ivrea `_, which you have probably never heard about. Neither had I, 30+ years ago, when I got sent there as part of a project to migrate the European Parliament from OSI protocols to TCP/IP. What ? You thought OSI protocols were only a theory ? Nothing could be further from the truth. One of the major reasons we are being bothered by Indian "Microsoft Support" all the time is that the global telephone network runs on Signalling System Number 7 ("SS7") which is very much an OSI protocol. Your electricity meter very likely talks DLMS(/COSEM), which is also an OSI protocol. 
In both cases, it cost serious money to just get to read the relevant standards, which is why they could persist in this madness undisturbed for decades. ITU-T finally saw the light a few years back, so now you can actually read `Q.700 `_ if you do not believe me. Anyway, back in Luxembourg in the tail end of the 1980'ies, the European parliament ran OSI protocols, and it sucked, and the more I dug into "The Red/Yellow/Blue Book"[#f1]_, the more obvious it was that these protocols were totally unsuitable for use on a local area network. We proposed to migrate the European Parliament to use TCP/IP, and we did, which gave me a memorable year in Ivrea, but we could only do so on the explicit condition, imposed by the European Commission, that the parliament would migrate back, "…once the issues with the OSI protocols were sorted out." They never sorted them out, because the OSI protocols were designed and built by people who only considered communication between different buildings, cities, countries and continents, but not what happened inside each individual building [#f2]_. Having seen the title of this rant, you can probably already see where I'm going with this, and you will be mostly right. The good news is that IETF learned their lesson, so QUIC is not being rammed through and rubber-stamped the way HTTP/2 was, in fact, one could argue that IETF got their revenge by handing QUIC over to their arch-nemesis: `The Transport Area `_. I think that was a good thing, because pretty much all of my predictions about H2 came true, from the lack of benefits to the DoS exposure designed into it. All those ailments came by because the people who pushed "H2 the protocol previously known as SPDY" only considered the world from the perspective of a huge company with geo-diverse datacenters for whom packet loss is something that happens to other people and congestion is solved by sending an email to Bandwidth Procurement. But those concerns are precisely what the "dinosaurs" in the Transport Area care about and have studied and worked on for decades, so there is every reason to expect that QUIC will emerge from the Transport Area much better than it went in. While I was pretty certain that H2 would be a fizzle, I have a much harder time seeing where QUIC will go. On the positive side, QUIC is a much better protocol, and it looks like the kind of protocol we need in an increasingly mobile InterNet where IP numbers are an ephemeral property. This is the carrot, and it is a big and juicy one. In the neutral area QUIC is not a simple protocol, it is a full transport protocol, which means loss detection, retransmission, congestion control and all that, but you do not get better than TCP without solving the problems TCP solved, and those are real and hard problems. On the negative side, QUIC goes a long way to break through barriers of authority, both by putting it on top of UDP to get it through firewalls, but also by the very strong marriage to TLS1.3 which dials privacy up to 11: Everything but the first byte of a QUIC packet is encrypted. Authorities are not going to like that, and I can easily see more authoritarian countries outright ban QUIC, and to make that ban stick, they may even transition from "allowed if not banned" to "banned if not allowed" firewalling.
Of course QUIC would still be a thing if you are big enough to negotiate with G7-sized governments, and I would not be surprised if QUIC ends up being a feasible protocol only for companies which can point at the "job creation" their data-centers provide. The rest of us will have to wait and see where that leaves us. QUIC and Varnish ---------------- I can say with certainty that writing a QUIC implementation from scratch, including TLS 1.3 is out of the question, that is simply not happening. That leaves basically three options: 1) Pick up a TLS library, write our own QUIC 2) Pick up a QUIC library and the TLS library it uses. 3) Stick with "That belongs in a separate process in front of Varnish." The precondition for linking a TLS library to Varnishd, is that the private keys/certificates are still isolated in a different address space, these days known as "KeyLess TLS". The good news is that QUIC is designed to do precisely that [#f3]_ . The bad news is that as far as I can tell, none of the available QUIC implementations do it, at least not yet. The actual selection of QUIC implementations we could adopt is very short, and since I am not very inclined to make Go or Rust a dependency for Varnish, it rapidly becomes even shorter. Presently, the H2O projects `quicly `_ would probably be the most obvious candidate for us, but even that would be a lot of work, and there is some room between where they have set their code quality bar, and where we have put ours. However, opting to write our own QUIC instead of adopting one is a lot of work, not in the least for the necessary testing, so all else being equal, adopting sounds more feasible. With number three we abdicate the ability to be "first line" if QUIC/H3 does become the new black, and it would be incumbent on us to make sure we work as well as possible with those "front burner" boxes using a richer PROXY protocol or maybe a "naked" QUIC, to maintain functionality. One argument for staying out of the fray is that our "No TLS in Varnish" policy looks like it was the right decision. While it is inconvenient for small sites to have to run two processes, as soon as sites grow, the feedback changes to appreciation for the decoupling of TLS from policy/caching, and once sites get even bigger, or more GDPR exposed, the ability to use diverse TLS offloaders is seen as a big benefit. Finally, there is the little detail of testing: Varnishtest, which has its own `VTest project `_ now, will need to learn about HTTP3, QUIC and possibly TLS also. And of course, when we ask the Varnish users, they say *"Ohhh... they all sound delicious, can we have the buffet ?"* :-) *phk* .. rubric:: Footnotes .. [#f1] The ITU-T's standards were meant to come out in updated printed volumes every four years, each "period" a different color. .. [#f2] Not, and I want to stress this, because they were stupid or ignorant, but it simply was not their job. Many of them, like AT&T in USA, were legally banned from the "computing" market. .. [#f3] See around figure 2 in `the QUIC/TLS draft `_. varnish-7.5.0/doc/sphinx/phk/routine.rst000066400000000000000000000030251457605730600202710ustar00rootroot00000000000000.. Copyright (c) 2022 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_routine: ======================== Getting into the routine ======================== Yesterday we released `VSV00009 `_, a pretty harmless DoS from the backend side, which could trivially be mitigated in VCL.
By now handling security issues seem to have become routine for the project, which is good, because that is the world we live in, and bad, because we live in a world where that is a necessary skill. From the very start of the project, we have treated backends as "trusted", in the sense that a lot of nasty stuff we try to handle from clients got "dont do that then" treatment from the backend. That was back when "cloud" were called "mainframes" and "containers" were called "jails", way back when CDNs were only for companies with more money than skill. Part of the reasoning was also maximizing compatibility. Backends were a lot more - let us call it "heterogenous" - back then. Some of them were literally kludges nailed to the side of legacy newspaper production systems, and sometimes it was obvious that they had not heard about RFCs. For the problem we fixed yesterday, one line of VCL took care of the problem, but that is not guaranteed to always be the case. These days the "web" is a lot more regimented, and expecting standards-compliance from backends makes sense, so we will tighten the screws in that department as an ongoing activity. Poul-Henning, 2022-08-05 varnish-7.5.0/doc/sphinx/phk/somethinghappened.rst000066400000000000000000000074361457605730600223200ustar00rootroot00000000000000.. Copyright (c) 2017 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_somethinghappened: ==================================================== Something (funny) happened on the way to 5.1.0^H1^H2 ==================================================== Some time back we, or to be perfectly honest, I, decided that in the future we would do two varnish releases per year, march 15th and september 15th and damn the torpedoes. The 5.1.X release was the first real test of this and it went a little less smooth than I would have preferred, and it is pretty much all my own fault. As you may have heard, we're in the process of `building a new house `_ and that obviously takes a lot of time on my part, most recently we have put up our "old" house for sale etc. etc. So I was distracted, and suddenly it was march 15th but come hell or high water: We cut the release. ... and almost immediately found that it had a total show-stopper bug which would cause the worker process to restart too often. Ok, fix that and roll 5.1.1 ... and find another two, not quite as severe but still unacceptable problems. Deep breath, fix those, and a lot of HTTP/2 stuff reported by simon & xcir, who kindly subject that part of the code to some live traffic ... and roll 5.1.2. This one will stick I hope. Next release will be September 15th. ... unless something truly horrible lurks in 5.1.2. Success, Failure or Meh? (strike out the not applicable) -------------------------------------------------------- Seen from a release engineering point of view we live a very sheltered life in the Varnish Project. Our code base is small, 120 thousand lines of code and we wrote almost all of it ourselves, which means that we control the quality standard throughout. Thanks to our focus on code-quality, we have never had to rush out a bug/security-fix in the full glare of the combined scorn of Nanog, Hackernews, Reddit and Metasploit [#f2]_. We also don't link against any huge "middleware" libraries, I think the biggest ones are Ncurses and PCRE [#f1]_, both of which are quite stable, and we don't depend on any obscure single-compiler languages either. 
So while rushing out point releases with short notice is pretty routine for many other projects, it was a new experience for us, and it reminded us of a couple of things we had sort of forgotten [#f3]_. I am absolutely certain that if we had not had our "release by calendar" policy in place, I would probably not have been willing to sign of on a release until after all the house-building-moving-finding-where-I-put-the-computer madness is over late in summer, and then I would probably still insist on delaying it for a month just to catch my bearings. That would have held some `pretty significant new code `_ from our users for another half year, for no particular reason. So yeah, it was pretty embarrasing to have to amend our 5.1 release twice in two weeks, but it did prove that the "release by calendar" strategy is right for our project: It forced us to get our s**t toggether so users can benefit from the work we do in a timely fashion. And thanks to the heroic testing efforts of Simon and Xcir, you may even be able to use the HTTP/2 support in 5.1.2 as a result. Next time, by which I mean "September 15th 2017", we'll try to do better. Poul-Henning, 2017-04-11 .. rubric:: Footnotes .. [#f1] Ncurses is just shy of 120 thousand lines of code and PCRE is 96 thousand lines but that is getting of lightly, compared to linking against any kind of GUI. .. [#f2] The bugs which caused 5.1.1 and 5.1.2 "merely" resulted in bad stability, they were not security issues. .. [#f3] Always release from a branch, in case you need to release again. varnish-7.5.0/doc/sphinx/phk/spdy.rst000066400000000000000000000147011457605730600175660ustar00rootroot00000000000000.. Copyright (c) 2012-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_spdy: =================================== What SPDY did to my summer vacation =================================== It's dawning on me that I'm sort of the hipster of hipsters, in the sense that I tend to do things far before other people do, but totally fail to communicate what's going on out there in the future, and thus by the time the "real hipsters" catch up, I'm already somewhere different and more interesting. My one lucky break was the `bikeshed email `_ where I actually did sit down and compose some of my thoughts, thus firmly sticking a stick in the ground as one of the first to seriously think about how you organize open source collaborations. I mention this because what I am going to write probably seems very unimportant for most of the Varnish users right now, but down the road, three, five or maybe even ten years ahead, I think it will become important. Feel free to not read it until then. The evolution of Varnish ------------------------ When we started out, seven years ago, our only and entire goal was to build a server-side cache better than squid. That we did. Since then we have added stuff to Varnish (ESI:includes, gzip support, VMODS) and I'm staring at streaming and conditional backend fetches right now. Varnish is a bit more than a web-cache now, but it is still, basically, a layer of polish you put in front of your webserver to get it to look and work better. Google's experiments with SPDY have forced a HTTP/2.0 effort into motion, but if past performance is any indication, that is not something we have to really worry about for a number of years. 
The IETF WG has still to manage to "clarify" RFC2616 which defines HTTP/1.1, and to say there is anything even remotely resembling consensus behind SPDY would be a downright lie. RFC2616 is from June 1999, which, to me, means that we should look at 2035 when we design HTTP/2.0, and predicting things is well known to be hard, in particular with respect to the future. So what's a Varnish architect to do? What I did this summer vacation, was to think a lot about how Varnish can be architected to cope with the kind of changes SPDY and maybe HTTP/2.0 drag in: Pipelining, multiplexing, etc., without committing us to one particular path of science fiction about life in 2035. Profound insights often sound incredibly simplistic, bordering trivial, until you consider the full ramifications. The implementation of "Do Not Kill" in current law is surprisingly voluminous. (If you don't think so, you probably forgot to #include the Vienna Treaty and the convention about chemical and biological weapons.) So my insight about Varnish, that it has to become a socket-wrench-like toolchest for doing things with HTTP traffic, will probably elicit a lot of "duh!" reactions, until people, including me, understand the ramifications more fully. Things you cannot do with Varnish today --------------------------------------- As much as Varnish can be bent, tweaked and coaxed into doing today, there are things you cannot do, or at least things which are very hard and very inefficient to do with Varnish. For instance we consider "a transaction" something that starts with a request from a client, and involves zero or more backend fetches of finite sized data elements. That is not how the future looks. For instance one of the things SPDY has tried out is "server push", where you fetch index.html and the webserver says "you'll also want main.css and cat.gif then" and pushes those objects on the client, to save the round-trip times wasted waiting for the client to ask for them. Today, something like that is impossible in Varnish, since objects are independent and you can only look up one at a time. I already can hear some of you amazing VCL wizards say "Well, if you inline-C grab a refcount, then restart and ..." but let's be honest, that's not how it should look. You should be able to do something like:: if (req.proto == "SPDY" && req.url ~ "index.html") { req.obj1 = lookup(backend1, "/main.css") if (req.obj1.status == 200) { sess.push(req.obj1, bla, bla, bla); } req.obj2 = lookup(backend1, "/cat.gif") if (req.obj1.status == 200) { sess.push(req.obj2, bla, bla, bla); } } And doing that is not really *that* hard, I think. We just need to keep track of all the objects we instantiate and make sure they disappear and die when nobody is using them any more. A lot of the assumptions we made back in 2006 are no longer valid under such an architecture, but those same assumptions are what gives Varnish such astonishing performance, so just replacing them with standard CS-textbook solutions like "garbage collection" would make Varnish lose a lot of its lustre. As some of you know, there is a lot of modularity hidden inside Varnish but not quite released for public use in VCL. Much of what is going to happen will be polishing up and documenting that modularity and releasing it for you guys to have fun with, so it is not like we are starting from scratch or anything. But some of that modularity stands on foundations which are no longer firm; for instance, the initiating request exists for the full duration of a backend fetch. 
Those will take some work to fix. But, before you start to think I have a grand plan or even a clear-cut road map, I'd better make it absolutely clear that is not the case: I perceive some weird shapes in the fog of the future and I'll aim in that direction and either they are the doorways I suspect or they are trapdoors to tar-pits, time will show. I'm going to be making a lot of changes and things that used to be will no longer be as they used to be, but I think they will be better in the long run, so please bear with me, if your favourite detail of how Varnish works changes. Varnish is not speedy, Varnish is fast! --------------------------------------- As I said I'm not a fan of SPDY and I sincerely hope that no bit of the current proposal survives unchallenged in whatever HTTP/2.0 standard emerges down the road. But I do want to thank the people behind that mess, not for the mess, but for having provoked me to spend some summertime thinking hard about what it is that I'm trying to do with Varnish and what problem Varnish is here to solve. This is going to be FUN! Poul-Henning 2012-09-14 Author of Varnish PS: See you at `VUG6 `_ where I plan to talk more about this. varnish-7.5.0/doc/sphinx/phk/sphinx.rst000066400000000000000000000063261457605730600201240ustar00rootroot00000000000000.. Copyright (c) 2010-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_sphinx: =================================== Why Sphinx_ and reStructuredText_ ? =================================== The first school of thought on documentation, is the one we subscribe to in Varnish right now: "Documentation schmocumentation..." It does not work for anybody. The second school is the "Write a {La}TeX document" school, where the documentation is seen as a stand alone product, which is produced independently. This works great for PDF output, and sucks royally for HTML and TXT output. The third school is the "Literate programming" school, which abandons readability of *both* the program source code *and* the documentation source, which seems to be one of the best access protections one can put on the source code of either. The fourth school is the "DoxyGen" school, which lets a program collect a mindless list of hyperlinked variable, procedure, class and filenames, and call that "documentation". And the fifth school is anything that uses a fileformat that cannot be put into a version control system, because it is binary and non-diff'able. It doesn't matter if it is OpenOffice, LyX or Word, a non-diffable doc source is a no go with programmers. Quite frankly, none of these works very well in practice. One of the very central issues, is that writing documentation must not become a big and clear context-switch from programming. That precludes special graphical editors, browser-based (wiki!) formats etc. Yes, if you write documentation for half your workday, that works, but if you write code most of your workday, that does not work. Trust me on this, I have 25 years of experience avoiding using such tools. I found one project which has thought radically about the problem, and their reasoning is interesting, and quite attractive to me: #. TXT files are the lingua franca of computers, even if you are logged with TELNET using IP over Avian Carriers (Which is more widespread in Norway than you would think) you can read documentation in a .TXT format. #. 
TXT is the most restrictive typographical format, so rather than trying to neuter a high-level format into .TXT, it is smarter to make the .TXT the source, and reinterpret it structurally into the more capable formats. In other words: we are talking about the ReStructuredText_ of the Python project, as wrapped by the Sphinx_ project. Unless there is something I have totally failed to spot, that is going to be the new documentation platform in Varnish. Take a peek at the Python docs, and try pressing the "show source" link at the bottom of the left menu: (link to random python doc page:) https://docs.python.org/py3k/reference/expressions.html Dependency wise, that means you can edit docs with no special tools, you need python+docutils+sphinx to format HTML and a LaTex (pdflatex ?) to produce PDFs, something I only expect to happen on the project server on a regular basis. I can live with that, I might even rewrite the VCC scripts from Tcl to Python in that case. Poul-Henning, 2010-04-11 .. _Sphinx: http://sphinx.pocoo.org/ .. _reStructuredText: http://docutils.sourceforge.net/rst.html varnish-7.5.0/doc/sphinx/phk/ssl.rst000066400000000000000000000055741457605730600174200ustar00rootroot00000000000000.. Copyright (c) 2011-2014 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_ssl: ============ Why no SSL ? ============ This is turning into a bit of a FAQ, but the answer is too big to fit in the margin we use for those. There are a number of reasons why there are no plans in sight that will grow SSL support in Varnish. First, I have yet to see a SSL library where the source code is not a nightmare. As I am writing this, the varnish source-code tree contains 82.595 lines of .c and .h files, including JEmalloc (12.236 lines) and Zlib (12.344 lines). OpenSSL, as imported into FreeBSD, is 340.722 lines of code, nine times larger than the Varnish source code, 27 times larger than each of Zlib or JEmalloc. This should give you some indication of how insanely complex the canonical implementation of SSL is. Second, it is not exactly the best source-code in the world. Even if I have no idea what it does, there are many aspect of it that scares me. Take this example in a comment, randomly found in s3-srvr.c:: /* Throw away what we have done so far in the current handshake, * which will now be aborted. (A full SSL_clear would be too much.) * I hope that tmp.dh is the only thing that may need to be cleared * when a handshake is not completed ... */ I hope they know what they are doing, but this comment doesn't exactly carry that point home, does it ? But let us assume that a good SSL library can be found, what would Varnish do with it ? We would terminate SSL sessions, and we would burn CPU cycles doing that. You can kiss the highly optimized delivery path in Varnish goodbye for SSL, we cannot simply tell the kernel to put the bytes on the socket, rather, we have to corkscrew the data through the SSL library and then write it to the socket. Will that be significantly different, performance wise, from running a SSL proxy in separate process ? No, it will not, because the way varnish would have to do it would be to ... start a separate process to do the SSL handling. There is no other way we can guarantee that secret krypto-bits do not leak anywhere they should not, than by fencing in the code that deals with them in a child process, so the bulk of varnish never gets anywhere near the certificates, not even during a core-dump. 
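To make the fencing idea a little more concrete, here is a deliberately minimal sketch of the process split (this is not Varnish code, the key path is hypothetical and the actual private-key operation is left as a comment): the parent only ever talks to the child over a socketpair, so the key never exists in the parent's address space::

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int
    main(void)
    {
        int sv[2];
        char buf[256];
        ssize_t n;

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) {
            perror("socketpair");
            return (1);
        }
        if (fork() == 0) {
            /* Child: the only process which ever touches the key. */
            close(sv[0]);
            FILE *key = fopen("/nonexistent/secret.key", "r"); /* hypothetical path */
            /* In a real setup: load the key, drop privileges, chroot, etc. */
            (void)key;
            while ((n = read(sv[1], buf, sizeof buf)) > 0) {
                /* ...perform the private-key operation on the request... */
                (void)write(sv[1], "stub-answer", 11);
            }
            _exit(0);
        }
        /* Parent: the worker, which never sees any key material. */
        close(sv[1]);
        (void)write(sv[0], "handshake-blob", 14);
        n = read(sv[0], buf, sizeof buf);
        printf("child answered %zd bytes\n", n);
        return (0);
    }

The point of the sketch is only the shape of the fence: even a core-dump of the parent contains nothing worth stealing.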
Would I be able to write a better stand-alone SSL proxy process than the many which already exists ? Probably not, unless I also write my own SSL implementation library, including support for hardware crypto engines and the works. That is not one of the things I dreamt about doing as a kid and if I dream about it now I call it a nightmare. So the balance sheet, as far as I can see it, lists "It would be a bit easier to configure" on the plus side, and everything else piles up on the minus side, making it a huge waste of time and effort to even think about it.. Poul-Henning, 2011-02-15 varnish-7.5.0/doc/sphinx/phk/ssl_again.rst000066400000000000000000000137051457605730600205520ustar00rootroot00000000000000.. Copyright (c) 2015-2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_ssl_again: ============= SSL revisited ============= Four years ago, I wrote a rant about why Varnish has no SSL support (:ref:`phk_ssl`) and the upcoming 4.1 release is good excuse to revisit that issue. A SSL/TLS library ~~~~~~~~~~~~~~~~~ In 2011 I criticized OpenSSL's source-code as being a nightmare, and as much as I Hate To Say I Told You So, I Told You So: See also "HeartBleed". The good news is that HeartBleed made people realize that FOSS maintainers also have mortgages and hungry kids. Various initiatives have been launched to make prevent critical infrastructure software from being maintained Sunday evening between 11 and 12PM by a sleep-deprived and overworked parent, worried about about being able to pay the bills come the next month. We're not there yet, but it's certainly getting better. However, implementing TLS and SSL is still insanely complex, and thanks to Edward Snowdens whistle-blowing, we have very good reasons to believe that didn't happen by accident. The issue of finding a good TLS/SSL implementation is still the same and I still don't see one I would want my name associated with. OpenBSD's LibreSSL is certainly a step in a right direction, but time will show if it is viable in the long run -- they do have a tendency to be -- "SQUIRREL!!" -- distracted. Handling Certificates ~~~~~~~~~~~~~~~~~~~~~ I still don't see a way to do that. The Varnish worker-process is not built to compartmentalize bits at a cryptographic level and making it do that would be a non-trivial undertaking. But there is new loop-hole here. One night, waiting for my flight home in Oslo airport, I went though the entire TLS/SSL handshake process to see if there were anything one could do, and I realized that you can actually terminate TLS/SSL without holding the certificate, provided you can ask some process which does to do a tiny bit of work. The next morning `CloudFlare announced the very same thing`_: .. _CloudFlare announced the very same thing: https://blog.cloudflare.com/keyless-ssl-the-nitty-gritty-technical-details/ This could conceivably be a way to terminate TLS/SSL in the Varnish-worker process, while keeping the most valuable crypto-bits away from it. But it's still a bad idea ~~~~~~~~~~~~~~~~~~~~~~~~~ As I write this, the news that `apps with 350 million downloads`_ in total are (still) vulnerable to some SSL/TLS Man-In-The-Middle attack is doing the rounds. .. 
_apps with 350 million downloads: http://arstechnica.com/security/2015/04/27/android-apps-still-suffer-game-over-https-defects-7-months-later/ Code is hard, crypto code is double-plus-hard, if not double-squared-hard, and the world really don't need another piece of code that does an half-assed job at cryptography. If I, or somebody else, were to implement SSL/TLS in Varnish, it would talk at least half a year to bring the code to a point where I would be willing to show it to the world. Until I get my time-machine working, that half year would be taken away of other Varnish development, so the result had better be worth it: If it isn't, we have just increased the total attack-surface and bug-probability for no better reason than "me too!". When I look at something like Willy Tarreau's `HAProxy`_ I have a hard time to see any significant opportunity for improvement. .. _HAProxy: http://www.haproxy.org/ Conclusion ~~~~~~~~~~ No, Varnish still won't add SSL/TLS support. Instead in Varnish 4.1 we have added support for Willys `PROXY`_ protocol which makes it possible to communicate the extra details from a SSL-terminating proxy, such as `HAProxy`_, to Varnish. .. _PROXY: http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt From a security point of view, this is also much better solution than having SSL/TLS integrated in Varnish. When (not if!) the SSL/TLS proxy you picked is compromised by a possibly planted software bug, you can pick another one to replace it, without loosing all the benefits of Varnish. That idea is called the "Software Tools Principle", it's a very old idea, but it is still one of the best we have. Political PostScript ~~~~~~~~~~~~~~~~~~~~ I realize that the above is a pretty strange stance to take in the current "SSL Everywhere" political climate. I'm not too thrilled about the "SSL Everywhere" idea, for a large number of reasons. The most obvious example is that you don't want to bog down your country's civil defence agency with SSL/TLS protocol negotiations, if their website is being deluged by people trying to survive a natural disaster. The next big issue is that there are people who do not have a right to privacy. In many countries this includes children, prisoners, stock-traders, flight-controllers, first responders and so on. SSL Everywhere will force institutions to either block any internet connectivity or impose Man-in-The-Middle proxies to comply with legal requirements of logging and inspection. A clear step in the wrong direction in my view. But one of the biggest problem I have with SSL Everywhere is that it gives privacy to the actors I think deserve it the least. Again and again shady behaviour of big transnational, and therefore law-less, companies have been exposed by security researchers (or just interested lay-people) who ran tcpdump. snort or similar traffic capture programs and saw what went on. Remember all the different kind of "magic cookies" used to track users across the web, against their wish and against laws and regulations ? Pretty much all of those were exposed with trivial packet traces. With SSL Everywhere, these actors get much more privacy to invade the privacy of every human being with an internet connection, because it takes a lot more skill to look into a SSL connection than a plaintext HTTP connection. "Sunshine is said to be the best of disinfectants" wrote supreme court justice Brandeis, SSL Everywhere puts all traffic in the shade. 
Poul-Henning, 2015-04-28 varnish-7.5.0/doc/sphinx/phk/thatslow.rst000066400000000000000000000163271457605730600204620ustar00rootroot00000000000000.. Copyright (c) 2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_that_slow: ================= Going fast slowly ================= If I count in my source tree, right here and now, Varnish has 100K lines of sourcecode:: 75619 lines in .c files 18489 lines in .h files 2625 lines in .py files 670 lines in .vcc files 501 lines in .vcl files A little over 20K lines of testcases:: 21777 lines in .vtc files A little over 20K lines of documentation:: 22169 lines in .rst files And probably about 5K lines of "misc":: 1393 lines in .am files 712 lines in .ac files 613 lines in .lnt files For the sake of simplicity, let us call it a round 150K total lines [#f1]_. Varnish has been in existence for 10 years, so that's 15K lines per year. 200 workdays a year makes that 75 lines a day. 7.5 hours of work per day gives 10 lines per hour. Even though I have written the vast majority of the source code, Varnish is far from a one-person project. I have no way to estimate the average number of full time persons over the last ten years, so lets pick the worst case and say that only two persons were full time. It follows that there is *no way* average output of those two persons exceeded 5 lines per hour, measured over the ten year history of the project. Does that number seem low or high to you ? Anyway, What do programmers do all day? --------------------------------------- (`Yeah, yeah, yeah, I know... `_) Back before the dot-com disaster, people had actually spent considerable time and effort to find out what kind of productivity to expect from a programmer, after all, how could you ever estimate a project without knowing that crucial number? The results were all over the place, to put it mildly, but they were universally much lower than everybody expected. With his seminal The Mythical Man-Month, Frederick P. Brooks brought the ballpark estimate "10 lines per programmer per day" into common use, despite everything he wrote in the text surrounding that number arguing for the exact opposite. With the ultimate focus on quality and correctness, for instance the Apollo and Space Shuttle software, productivity drops to less than one line of code per day per employee. The estimated upper bound on Varnish productivity is almost an order of magnitude above Brooks ball-park estimate, and another easily ignorable magnitude away from the unrealistic goal of being the same quality as the software for the Space Shuttle. So we are inside Brooks ball-park, even if a bit on the high side [#f2]_, What took us so long ? ---------------------- The surprise over the 5LOC/h number is undoubtedly inversely proportional to the age of the reader. Back when I was a kid I could write 1000 lines in a single sleep-deprived session across midnight [#f3]_, but it didn't take that long before I discovered that I had to throw out most if it once I woke up again. I was 40 years old when I started Varnish and I had 22 years of professional experience, a *lot* of them staring at, and often fixing/improving/refactoring, other peoples source code. Over the years I came to appreciate Antoine de Saint-Exupéry's observation:: Perfection is attained, not when there is nothing more to add, but when there is nothing more to remove. 
And eventually I no longer think about code lines as an asset to be accumulated, but rather as an expenditure to be avoided. When I started Varnish, one of my main personal goals was to make it my highest quality program - ever [#f4]_. This is why Varnish is written in "pidgin C" style and lousy with asserts which don't do anything [#f5]_, except clarify programmer intent [#f6]_, and in case of mistakes, stop bad things before they get out of hand. And this is why there are other "pointless overheads" in the Varnish source code, from the panic/backtrace code over the "miniobj" type-safety to obscure hints to Gimpel Softwares FlexeLint product. Needless to say, it is also not by accident that the 20K lines of testcases exercise over 90% of the varnishd source code lines. And insisting on doing things right, rather than *"we can fix it properly later"* which is so widespread in FOSS source code [#f7]_, is not going to boost your line count either. But did it work ? ----------------- A 10 year project aniversary is a good reason to stop and see if the expected roses are there to be smelled. We have lots of numbers, commits (10538), bugreports (1864), CVEs (2) [#f8]_ or Coverity detections (a dozen?) but It is pretty nigh impossible to measure program quality, even though we tend to know it when we see it. There are also uncountable events which should be in the ledger, 503s [#f9]_, crashes, hair-tearing, head-scrathing, coffee-drinking, manual- and source-code thumbing and frustrated cries of help on IRC. In the other cup there are equally intangible positives, pats on the shoulder, free beers, X-mas and birthday presents from my Amazon wish-list (Thanks!), and more snarky tweets about how great Varnish is than I can remember. All in all, the best I have been able to do, to convince myself that I have not *totally* missed my goal, is a kind of "The curious case of the dog in the night-time" observation: I have never yet had a person tell me Varnish made their life more miserable. I'll take that. *phk* .. rubric:: Footnotes .. [#f1] We can do a better and more precise estimate if we want. For instance we have not typed in the 30 line BSD-2 Blurp *all* 314 times, and upwards of 30% of the rest are blank lines. However, there is no way we can reduce the number by an order of magnitude, in particular not because code that was written and subsequently removed is not part of the base data. .. [#f2] Which is to be expected really: We don't program on punched cards. .. [#f3] And I did. Migrating an oilcompany from IBM mainframes to 16-bit UNIX computers in 198x was an interesting challenge. .. [#f4] Having half the world adopt your hastily hacked up md5crypt with a glaringly obvious, but fortunately harmless, bug will do that to you. .. [#f5] Roughly 10% of the source code lines were asserts last I looked. .. [#f6] I prefer asserts over comments for this, since the compiler can also see them. The good news is, the compiler can also see that they don't do anything so a lot fewer are present in the binary program. Interestingly, a couple of them allows the compiler to optimize much harder. No, I won't tell you which those are. .. [#f7] Only code where that is a bigger problem is phd-ware: Software written as proof-of-concept and abandonned in haste when the diploma was in hand. .. [#f8] Obviously, a high count of CVE's should be a real reason for concern, but there is no meaningful difference between having one, two or three CVE's over the course of ten years. 
The two CVEs against Varnish were both utterly bogus "trophy-hunter" CVEs in my opinion. (But don't take my word for it, judge for yourself.) .. [#f9] There used to be a link back to the Varnish project on the default.vcl's 503 page, but we removed it after a large national institution in a non-english country showed it to a *lot* of people who clicked on the only link they could see on the page. varnish-7.5.0/doc/sphinx/phk/thetoolsweworkwith.rst000066400000000000000000000177171457605730600226150ustar00rootroot00000000000000.. Copyright (c) 2011-2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_thetoolsweworkwith: ====================== The Tools We Work With ====================== "Only amateurs were limited by their tools" is an old wisdom, and the world is littered with art and architecture that very much proves this point. But as amazing as the Aquaeduct of Segovia is, tools are the reason why it looks nowhere near as fantastic as the Sydney Opera House. Concrete has been known since antiquity, but steel-reinforced concrete and massive numerical calculations of stress-distribution, is the tools that makes the difference between using concrete as a filler material between stones, and as gravity-defying curved but perfectly safe load-bearing wall. My tool for writing Varnish is the C-language which in many ways is unique amongst all of the computer programming languages for having no ambitions. The C language was invented as a portable assembler language, it doesn't do objects and garbage-collection, it does numbers and pointers, just like your CPU. Compared to the high ambitions, then as now, of new programming languages, that was almost ridiculous unambitious. Other people were trying to make their programming languages provably correct, or safe for multiprogramming and quite an effort went into using natural languages as programming languages. But C was written to write programs, not to research computer science and that's exactly what made it useful and popular. Unfortunately C fell in bad company over the years, and the reason for this outburst is that I just browsed the latest draft from the ISO-C standardisation working-group 14. I won't claim that it is enough to make grown men cry, but it certainly was enough to make me angry. Let me give you an example of their utter sillyness: The book which defined the C langauge had a list af reserved identifiers, all of them lower-case words. The UNIX libraries defined a lot of functions, all of them lower-case words. When compiled, the assembler saw all of these words prefixed with an underscore, which made it easy to mix assembler and C code. All the macros for the C-preprocessor on the other hand, were UPPERCASE, making them easy to spot. Which meant that if you mixed upper and lower case, in your identifiers, you were safe: That wouldn't collide with anything. First the ISO-C standards people got confused about the leading underscore, and I'll leave you guessing as to what the current text actually means: All identifiers that begin with an underscore and either an uppercase letter or another underscore are always reserved for any use. Feel free to guess, there's more such on pdf page 200 of the draft. 
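Read literally, the quoted rule boils down to something like this (a hedged reading, judge the draft text for yourself)::

    /* Reserved "for any use", i.e. hands off in application code: */
    int _Foo;               /* leading underscore + uppercase letter */
    int __bar;              /* leading underscore + another underscore */

    /* Safe under the old lower/UPPER convention: */
    int foo_bar;            /* lower-case identifier, like the libraries */
    #define FOO_BAR 1       /* UPPERCASE, clearly a macro */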
Next, they broke the upper/lower rule, by adding special keywords in mixed case, probably because they thought it looked nicer:: _Atomic, _Bool, _Noreturn &c Then, presumably, somebody pointed out that this looked ugly:: void _Noreturn foo(int bar); So they have come up with a #include file called so that instead you can write:: #include void noreturn foo(int bar); The file according to the standard shall have exactly this content:: #define noreturn _Noreturn Are you crying or laughing yet ? You should be. Another thing brought by the new draft is an entirely new thread API, which is incompatible with the POSIX 'pthread' API which have been used for about 20 years now. If they had improved on the shortcomings of the pthreads, I would have cheered them on, because there are some very annoying mistakes in pthreads. But they didn't, in fact, as far as I can tell, the C1X draft's threads are worse than the 20 years older pthreads in all relevant aspects. For instance, neither pthreads nor C1X-threads offer a "assert I'm holding this mutex locked" facility. I will posit that you cannot successfully develop real-world threaded programs and APIs without that, or without wasting a lot of time debugging silly mistakes. If you look in the Varnish source code, which uses pthreads, you will see that I have wrapped pthread mutexes in my own little datastructure, to be able to do those asserts, and to get some usable statistics on lock-contention. Another example where C1X did not improve on pthreads at all, was in timed sleeps, where you say "get me this lock, but give up if it takes longer than X time". The way both pthreads and C1X threads do this, is you specify a UTC wall clock time you want to sleep until. The only problem with that is that UTC wall clock time is not continuous when implemented on a computer, and it may not even be monotonously increasing, since NTPD or other timesync facilites may step the clock backwards, particularly in the first minutes after boot. If the practice of saying "get me this lock before 16:00Z" was widespread, I could see the point, but I have actually never seen that in any source code. What I have seen are wrappers that take the general shape of:: int get_lock_timed(lock, timeout) { while (timeout > 0) { t0 = time(); i = get_lock_before(lock, t + timeout)); if (i == WASLOCKED) return (i); t1 = time(); timeout -= (t1 - t0); } return (TIMEDOUT); } Because it's not like the call is actually guaranteed to return at 16:00Z if you ask it to, you are only promised it will not return later than that, so you have to wrap the call in a loop. Whoever defined the select(2) and poll(2) systemcalls knew better than the POSIX and ISO-C group-think: They specified a maximum duration for the call, because then it doesn't matter what time it is, only how long time has transpired. Ohh, and setting the stack-size for a new thread ? That is apparently "too dangerous" so there is no argument in the C1X API for doing so, a clear step backwards from pthreads. But guess what: Thread stacks are like T-shirts: There is no "one size fits all." I have no idea what the "danger" they perceived were, my best guess is that feared it might make the API useful ? This single idiocy will single-handedly doom the C1X thread API to uselessness. Now, don't get me wrong: There are lot of ways to improve the C language that would make sense: Bitmaps, defined structure packing (think: communication protocol packets), big/little endian variables (data sharing), sensible handling of linked lists etc. 
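To make the point about wrapped mutexes concrete, here is a minimal sketch (not the actual Varnish wrapper, just an illustration of the idea) of a structure which makes an "assert I'm holding this mutex" check possible and gives the lock-contention statistic somewhere to live::

    #include <assert.h>
    #include <pthread.h>

    struct vlck {
        pthread_mutex_t mtx;
        pthread_t       owner;
        int             held;
        unsigned long   contended;      /* statistics hook */
    };

    static void
    vlck_init(struct vlck *l)
    {
        int err;

        err = pthread_mutex_init(&l->mtx, NULL);
        assert(err == 0);
        l->held = 0;
        l->contended = 0;
    }

    static void
    vlck_lock(struct vlck *l)
    {
        int err;

        if (pthread_mutex_trylock(&l->mtx) != 0) {
            err = pthread_mutex_lock(&l->mtx);
            assert(err == 0);
            l->contended++;     /* counted while holding the lock */
        }
        l->owner = pthread_self();
        l->held = 1;
    }

    static void
    vlck_unlock(struct vlck *l)
    {
        int err;

        l->held = 0;
        err = pthread_mutex_unlock(&l->mtx);
        assert(err == 0);
    }

    /* The facility pthreads itself does not offer: */
    #define VLCK_ASSERT_HELD(l) \
        assert((l)->held && pthread_equal((l)->owner, pthread_self()))

A caller which believes it holds the lock can sprinkle VLCK_ASSERT_HELD() wherever that belief matters, and like any other assert the checks evaporate in NDEBUG builds.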
As ugly as it is, even the printf()/scanf() format strings could be improved, by offering a sensible plugin mechanism, which the compiler can understand and use to issue warnings. Heck, even a simple basic object facility would be good addition, now that C++ have become this huge bloated monster language. But none of that is apparently as important as and a new, crippled and therefore useless thread API. The neat thing about the C language, and the one feature that made it so popular, is that not even an ISO-C working group can prevent you from implementing all these things using macros and other tricks. But it would be better to have them in the language, so the compiler could issue sensible warnings and programmers won't have to write monsters like:: #define VTAILQ_INSERT_BEFORE(listelm, elm, field) do { \ (elm)->field.vtqe_prev = (listelm)->field.vtqe_prev; \ VTAILQ_NEXT((elm), field) = (listelm); \ *(listelm)->field.vtqe_prev = (elm); \ (listelm)->field.vtqe_prev = &VTAILQ_NEXT((elm), field); \ } while (0) To put an element on a linked list. I could go on like this, but it would rapidly become boring for both you and me, because the current C1X draft is 701 pages, and it contains not a single explanatory example if how to use any of the verbiage in practice. Compare this with The C Programming Language, a book of 274 pages which in addition to define the C language, taught people how to program through well-thought-out examples. From where I sit, ISO WG14 are destroying the C language I use and love. Poul-Henning, 2011-12-20 varnish-7.5.0/doc/sphinx/phk/thoughts.rst000066400000000000000000000030611457605730600204510ustar00rootroot00000000000000.. Copyright (c) 2010 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_thoughts: ========================= What were they thinking ? ========================= The reason I try to write these notes is the chinese wall. Ever since I first saw it on a school-book map, I have been wondering what the decision making process were like. We would like to think that the emperor asked for ideas, and that advisors came up with analyses, budgets, cost/benefit calculations and project plans for various proposals, and that the emperor applied his wisdom to choose the better idea. But it could also be, that The Assistant to The Deputy Viceminister of Northern Affairs, edged in sideways, at a carefully chosen time where the emperor looked relaxed and friendly, and sort of happend to mention that 50 villages had been sort of raided by the barbarians, hoping for the reply, which would not be a career opportunity for The Assistant to The Assistant to The Deputy Viceminister of Northern Affairs. And likely as not, the emperor absentmindedly grunted "Why don't you just build a wall to keep them out or something ?" probably wondering about the competence of an administration, which could not figure out to build palisades around border villages without bothering him and causing a monument to the Peter Principle and Parkinssons Law to be built, which can be seen from orbit, and possibly from the moon, if you bring your binoculars. If somebody had written some notes, we might have known. Poul-Henning, 2010-05-28 varnish-7.5.0/doc/sphinx/phk/three-zero.rst000066400000000000000000000054431457605730600206760ustar00rootroot00000000000000.. Copyright (c) 2011-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. 
_phk_3.0: ================================== Thoughts on the eve of Varnish 3.0 ================================== Five years ago, I was busy transforming my pile of random doddles on 5mm squared paper into software, according to "git log" working on the first stevedores. In two weeks I will be attending the Varnish 3.0 release party in Oslo. Sometimes I feel that development of Varnish takes for ever and ever, and that it must be like watching paint dry for the users, but 3 major releases in 5 years is actually not too shabby come to think of it. Varnish 3.0 "only" has two big new features, VMOD and GZIP, and a host of smaller changes, which you will notice if they are new features, and not notice if they are bug fixes. GZIP will probably be most important to the ESI users, and I wonder if all the time I spent fiddling bits in the middle of compressed data pays off, or if the concept of patchwork-quilting GZIP files was a bad idea from end to other. VMODs on the other hand, was an instant success, because they make it much easier for people to extend Varnish with new functionality, and I know of several VMODs in the pipeline which will make it possible to do all sorts of wonderful things from VCL. All in all, I feel happy about the 3.0 release, and I hope the users will too. We are not finished of course, ideas and patches for Varnish 4.0 are already starting to pile up, and hopefully we can get that into a sensible shape 18 months from now, late 2012-ish. "Life is what happens to you while you're busy making other plans" said John Lennon, a famous murder victim from New York. I feel a similar irony in the way Varnish happened to me: My homepage is written in raw HTML using the vi(1) editor, runs on a book-sized Soekris NET5501 computer, averages 50 hits a day with an Alexa rank just north of the 3.5 million mark. A normal server with Varnish could deliver all traffic my webserver has ever delivered, in less than a second. But varnish-cache.org has Alexa rank around 30.000, "varnish cache" shows a nice trend on Google and #varnish confuses the heck out of teenage girls and wood workers on Twitter, so clearly I am doing something right. I still worry about the `The Fraud Police `_ though, "I have no idea what I'm doing, and I totally make shit up as I go along." is a disturbingly precise summary of how I feel about my work in Varnish. The Varnish 3.0 release is therefore dedicated to all the kind Varnish developers and users, who have tested, reported bugs, suggested ideas and generally put up with me and my bumbling ways for these past five years. Much appreciated, Poul-Henning, 2011-06-02 varnish-7.5.0/doc/sphinx/phk/trialerror.rst000066400000000000000000000107121457605730600207720ustar00rootroot00000000000000.. Copyright (c) 2016-2017 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_trialerror: ================================================= Trial&Error - Prototyping - Continous Integration ================================================= The other day I chatted to a friend who wrote his phd thesis with `David Wheeler `_ as his advisor, that made me feel young, because Wheeler was the guy who invented the subroutine. No, not 'a subroutine' but 'the subroutine'. In the 1980'ies, right when I had started in IT, there was the new fashion: "Prototyping". 
It was all over the place, in 1983 Datamation you could about *"Data driven prototyping"*, and by 1990 it had bubbled up to management and Information and Software Technology could publish *"Organization and Management of Systems Prototyping"* etc. etc. The grey-beard at my workplace laconically said *"We already do that, only we call it Trial&Error."* Programming has always been Trial&Error, and probably always will. All the early pioneers, like Wheeler, complained about how batch-scheduling of machine resources removed the "intimate" contact with the running program and argued that it prolonged the debugging process. Practically all the oral histories from back then are about people sneaking in to university or work at night, to get the computer for themselves. But we could call it "Prototyping" if that sounded better, and now that the history-deficient dot-com generation has "invented" it, we can call it "Continous Integration". I don't care - it's still Trial&Error to me. They guy I chatted with told how after his phd thesis he *"swore to never again attempt to solve a problem with inadequate tools"*. That is sound advice, and we all tend to forget it all the time, so reminded, I did a mental inventory on the train home: Which tools do I use even though I find them inadequate. And then I decided to do something about them. First thing was my lab which has - erhh... evolved? - over the last 15 years. Some of the original assumptions were plain wrong, and a lot of "As a temporary solution I can ..." hacks became permanent, and so on. I spent two days cleaning, moving, shuffling and generally tidying my lab, (Amongst other discoveries: The original two SCSI disks from the first "freefall.freebsd.org" machine built by Rod Grimes.) and it is now a lot more pleasant for the work I do these days. Second thing was the Jenkins and Travis we use for Tria^H^H^H^Continuous Integration in the Varnish Project. Jenkins and Travis are both general purpose program-test-framework-cloud-thingies, and they're fine up to a point, but they are inadequate tools for me in too many ways. Jenkins is written in Java, which is not something I want to inflict on computers volutarily, in particular not on computers people lend us to test varnish on. Travis is Linux only, which is fine if you run Linux only, but I don't. But worst of all: Neither of them fully understand of our varnishtest tool, and therefore their failure reports are tedious and cumbersome to use. So, taking my friends advice, I sat down and wrote VTEST, which consists of two small pieces of code: Tester and Reporter. The tester is a small, 173 lines, `portable and simple shell script `_ which runs on the computer, physical or virtual, where we want to test Varnish. It obviously needs the compilers and tools we require to compile Varnish, (autocrap, sphinx, graphviz) but it doesn't anything beyond that, in particular it does not need a java runtime, a GUI or a hole in your firewall. The tester sends a report to the project server with ssh(1), and the reporter, which is just 750 lines of python code, ingests and digests the report and spits out some `pidgin HTML `_ with the information I actually want to see. And just like with the varnishtest program before it, once I had done it, my first thought was *"Why didn't I do that long time ago?"* So it is only fair to dedicate VTEST to the friend I chatted with: .. 
image:: bjarne.jpeg `Bjarne `_ tried to model how to best distribute operating system kernels across a network, wrote a adequate programming language tool for the job, which was also an adequate tool for a lot of other programming jobs. Thanks Bjarne! Poul-Henning, 2016-11-21 varnish-7.5.0/doc/sphinx/phk/varnish_does_not_hash.rst000066400000000000000000000123021457605730600231510ustar00rootroot00000000000000.. Copyright (c) 2012-2013 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_varnish_does_not_hash: ===================== Varnish Does Not Hash ===================== A spate of security advisories related to hash-collisions have made a lot of people stare at Varnish and wonder if it is affected. The answer is no, but the explanation is probably not what most of you expected: Varnish does not hash, at least not by default, and even if it does, it's still as immune to the attacks as can be. To understand what is going on, I have to introduce a concept from Shannon's information theory: "entropy." Entropy is hard to explain, and according to legend, that is exactly why Shannon recycled that term from thermodynamics. In this context, we can get away with thinking about entropy as how much our "keys" differ:: Low entropy (1 bit): /foo/bar/barf/some/cms/content/article?article=2 /foo/bar/barf/some/cms/content/article?article=3 High entropy (65 bits): /i?ee30d0770eb460634e9d5dcfb562a2c5.html /i?bca3633d52607f38a107cb5297fd66e5.html Hashing consists of calculating a hash-index from the key and storing the objects in an array indexed by that key. Typically, but not always, the key is a string and the index is a (smallish) integer, and the job of the hash-function is to squeeze the key into the integer, without losing any of the entropy. Needless to say, the more entropy you have to begin with, the more of it you can afford to lose, and lose some you almost invariably will. There are two families of hash-functions, the fast ones, and the good ones, and the security advisories are about the fast ones. The good ones are slower, but probably not so much slower that you care, and therefore, if you want to fix your web-app: Change:: foo=somedict[$somekey] To:: foo=somedict[md5($somekey)] and forget about the advisories. Yes, that's right: Cryptographic hash algorithms are the good ones, they are built to not throw any entropy away, and they are built to have very hard to predict collisions, which is exactly the problem with the fast hash-functions in the advisories. ----------------- What Varnish Does ----------------- The way to avoid having hash-collisions is to not use a hash: Use a tree instead. There every object has its own place and there are no collisions. Varnish does that, but with a twist. The "keys" in Varnish can be very long; by default they consist of:: sub vcl_hash { hash_data(req.url); if (req.http.host) { hash_data(req.http.host); } else { hash_data(server.ip); } return (hash); } But some users will add cookies, user identification and many other bits and pieces of string in there, and in the end the keys can be kilobytes in length, and quite often, as in the first example above, the first difference may not come until pretty far into the keys. Trees generally need to have a copy of the key around to be able to tell if they have a match, and more importantly to compare tree-leaves in order to "re-balance" the tree and other such arcanae of data structures. 
This would add another per-object memory load to Varnish, and it would feel particularly silly to store 48 identical characters for each object in the far too common case seen above. But furthermore, we want the tree to be very fast to do lookups in, preferably it should be lockless for lookups, and that means that we cannot (realistically) use any of the "smart" trees which automatically balance themselves, etc. You (generally) don't need a "smart" tree if your keys look like random data in the order they arrive, but we can pretty much expect the opposite as article number 4, 5, 6 etc are added to the CMS in the first example. But we can make the keys look random, and make them small and fixed size at the same time, and the perfect functions designed for just that task are the "good" hash-functions, the cryptographic ones. So what Varnish does is "key-compression": All the strings fed to hash_data() are pushed through a cryptographic hash algorithm called SHA256, which, as the name says, always spits out 256 bits (= 32 bytes), no matter how many bits you feed it. This does not eliminate the key-storage requirement, but now all the keys are 32 bytes and can be put directly into the data structure:: struct objhead { [...] unsigned char digest[DIGEST_LEN]; }; In the example above, the output of SHA256 for the 1 bit difference in entropy becomes:: /foo/bar/barf/some/cms/content/article?article=2 -> 14f0553caa5c796650ec82256e3f111ae2f20020a4b9029f135a01610932054e /foo/bar/barf/some/cms/content/article?article=3 -> 4d45b9544077921575c3c5a2a14c779bff6c4830d1fbafe4bd7e03e5dd93ca05 That should be random enough. But the key-compression does introduce a risk of collisions, since not even SHA256 can guarantee different outputs for all possible inputs: Try pushing all the possible 33-byte files through SHA256 and sooner or later you will get collisions. The risk of collision is very small however, and I can all but promise you, that you will be fully offset in fame and money for any inconvenience a collision might cause, because you will be the first person to find a SHA256 collision. Poul-Henning, 2012-01-03 varnish-7.5.0/doc/sphinx/phk/vcl_expr.rst000066400000000000000000000036741457605730600204400ustar00rootroot00000000000000.. Copyright (c) 2010-2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_vcl_expr: =============== VCL Expressions =============== I have been working on VCL expressions recently, and we are approaching the home stretch now. The data types in VCL are "sort of weird" seen with normal programming language eyes, in that they are not "general purpose" types, but rather tailored types for the task at hand. For instance, we have both a TIME and a DURATION type, a quite unusual constellation for a programming language. But in HTTP context, it makes a lot of sense, you really have to keep track of what is a relative time (age) and what is absolute time (Expires). Obviously, you can add a TIME and DURATION, the result is a TIME. Equally obviously, you can not add TIME to TIME, but you can subtract TIME from TIME, resulting in a DURATION. VCL do also have "naked" numbers, like INT and REAL, but what you can do with them is very limited. For instance you can multiply a duration by a REAL, but you can not multiply a TIME by anything. Given that we have our own types, the next question is what precedence operators have. 
The C programming language is famous for having a couple of gottchas in its precedence rules and given our limited and narrow type repertoire, blindly importing a set of precedence rules may confuse a lot more than it may help. Here are the precedence rules I have settled on, from highest to lowest precedence: Atomic 'true', 'false', constants function calls variables '(' expression ')' Multiply/Divide INT * INT INT / INT DURATION * REAL Add/Subtract STRING + STRING INT +/- INT TIME +/- DURATION TIME - TIME DURATION +/- DURATION Comparisons '==', '!=', '<', '>', '~' and '!~' string existence check (-> BOOL) Boolean not '!' Boolean and '&&' Boolean or '||' Input and feedback most welcome! Until next time, Poul-Henning, 2010-09-21 varnish-7.5.0/doc/sphinx/phk/vdd19q3.rst000066400000000000000000000105621457605730600200030ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _vdd19q3: Varnish Developer Day 2019Q3 ============================ We try to bring the core Varnish People into the same room a couple of times a year for "Varnish Developer Day" meetings, and last week we did that at Varnish Software's Oslo offices. Tuesday was a "Hackathon", where we lounged around and worked concrete issues and ideas, Wednesday was the "formal" day where we sat around a table and negotiated and made decisions. The main issues this time were HTTP3, backends and project organization, and here I will try to give a quick summary. HTTP3 and QUIC -------------- Everybody seemed to agree that we want H3 support, and after Dag's quick overview of the protocol, the challenge of doing that were evident. We also agreed that having certificates and secret keys inside the varnish worker process is still a no-go, so some variant of "key-less" is called for. Fortunately H3 is designed with this in mind for performance reasons. Getting from A-B is the hard part, and we may introduce a A' pit-stop where we implement key-less TLS1.3 on HTTP1+2, and possibly also a A'' pitstop to get TLS on backends. Dag and PHK will try to plot a course for this. Backends -------- There are a lot of annoying details about backends we want to do something about, from probes being near-magical to H1 to getting a proper handle on the lifetime of dynamic backends. Some concrete improvements came up during the hackathon and we will be persuing those right away. Fixing probing is probably a V7 thing, and we need to think and prototype how we expose probing in VCL. Bugwash ------- We are getting more people involved on the other side of the Atlantic, and we are moving the Monday afternoon bugwash from 13:00-14:00 EU time to 15:00-15:30 EU time, so they do not have to get out of bed so early. We will also try to make the bugwash more producive, by having PHK publish an "agenda" some hours beforehand, so people can prepare, and instead shorten the bugwash to 30 minutes to keep the time commitment the same. Everybody is welcome to attend our bugwashs, on the IRC channel #varnish-hacking on irc.linpro.no. Project organization -------------------- There has been some friction in the project this summer and we have talked a lot about how to counter that. A significant part of the problem is that too much of the project business goes through me: I am always the one nagging and no'ing peoples pull requests and that makes both them and me unhappy. 
We have drawn up a set of "rules of engagement" which will distribute the workload more evenly, essentially assuring that somebody from another organization will have looked at patches and pull requests before me, both to move some of the "no-ing" away from me and also to get people to pay more attention to each others work. For this to work, everybdoy will have to spend a bit more time on "project work", but everybody agreed to do that, so we think it can fly. These discussions also brought up another thing: Retirement Notice ----------------- One interesting feature of the IT industry, is that there are no retirement parties, because the industry more or less got born in the 1990'ies. There was an IT industry before then, I was part of it for most of a decade, and it did have retirement parties, because people had been going at it since the 50ies. One almost invariable part of the proceedings were the "Handling Over Of The Listing", where the retiree ceremoniously handed over a four inch thick Z-fold listing of "The XYZ Program" to the younger person now assuming responsibility for its care, feeding & maintenance, until his - or the program's - retirement. If you do the math, you will find that I am now also getting into my 50ies, and the prospect of retirement is migrating from "theoretical event in distant future" to "I need to think about this." On Tuesday the 20th of January 2026 I will be 60 years old, the Varnish Cache project will be 20 years old, and I will be retired from active project management in the Varnish Cache Project. That is six½ years in the future, a full half the current age of the project, and a long time in IT, but I want to reserve the date, so that the project has plenty of time to figure out what they want to do about it. The VDD appointed Martin and Nils to own that issue. *phk* varnish-7.5.0/doc/sphinx/phk/wanton_destruction.rst000066400000000000000000000065161457605730600225450ustar00rootroot00000000000000.. Copyright (c) 2013-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _phk_wanton: ======================================= Wanton destruction for the greater good ======================================= We live in an old house, and it's all wrong for us. When I bought this house, it was perfect, or at least perfect enough. But times have changed. I have no small kids anymore, and rather than just right for two kids to tumble in, the garden is now too small for all the flowers we want. And the rooms are not where we want them, there are too many stairs in all the wrong places, and the kitchen needs to be torn out and a new built. I'm sure that some other family will be able to fall in love with this house, the way we did, but there is no realistic way to rebuild it, to become the house we want now. For one thing, doing major surgery on a house while you live in it is double-minus-fun and it always slows the rebuild project down when you have to keep at least one toilet working and sanitary and be able to cook and sleep on the building site. So we'll be building a new house on a plot of land on the other side of the road, one of the coming years, a house which is right for us, and then we will sell this old house, to a family with small children, who will love it, and rip out the old funky kitchen and make a new one before they move in. One would think that software would be nothing like an old house, but they are more alike than most people imagine. 
Using a major piece of software is like moving into a house: You need to adapt your life and the house or the software to each other; since nothing is ever quite perfect, there will be limitations. And those limitations affect how you think: If you live in a 2 bedroom apartment, you won't even be considering inviting 80 guests to a party. A lot of Varnish users have taken time to figure out how Varnish fits into their lives and made the compromises necessary to make it work, and once you've done that, you moved on to other problems, but the limitations of Varnish keep affecting how you think about your website, even if you don't realize it. Well, I've got news for you: You'll be moving into a new house in the next couple of years; it'll be called Varnish V4, and that means that you will have to decide who gets which room and where to store the towels and grandmother's old china, all over again. I'm sure you'll hate me for it, "Why do we have to move?", "It really wasn't that bad after all" and so on and so forth. But if I do my job right, that will soon turn into "Ohh, that's pretty neat, I always wanted one of those..." and "Hey... Watch me do THIS!" etc. I could have started with a fresh GIT repository, to make it clear that what is happening right now is the construction of an entirely new house, but software isn't built from physical objects, so I don't need to do that: You can keep using Varnish, while I rebuild it, and thanks to the wonder of bits, you won't see a trace of dirt or dust while it happens. So don't be alarmed by the wanton destruction you will see in -trunk in the coming weeks; it is not destroying the Varnish you are using for your website today, it is building the new one you will be using in the future. And it's going to be perfect for you, at least for some years... Poul-Henning, 2013-03-18 varnish-7.5.0/doc/sphinx/reference/000077500000000000000000000000001457605730600172265ustar00rootroot00000000000000varnish-7.5.0/doc/sphinx/reference/cli_protocol.rst000066400000000000000000000134261457605730600224560ustar00rootroot00000000000000.. Copyright (c) 2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _ref_cli_api: =========================================== VCLI protocol - Scripting the CLI interface =========================================== The Varnish CLI has a few bells & whistles when used as an API. First: `vcli.h` contains the magic numbers. Second: If you use `varnishadm` to connect to `varnishd` for API purposes, use the `-p` argument to get "pass" mode. In "pass" mode, or with direct CLI connections (more below), the first line of responses is always exactly 13 bytes long, including the NL, and it contains two numbers: The status code and the count of bytes in the "body" of the response:: 200␣19␣␣␣␣␣␣␤ PONG␣1613397488␣1.0 This makes parsing the response unambiguous, even in cases like this where the response does not end with a NL. The varnishapi library contains functions to implement the basics of the CLI protocol; for more, see the `vcli.h` include file. .. _ref_remote_cli: Local and remote CLI connections -------------------------------- The ``varnishd`` process receives the CLI commands via TCP connections which require PSK authentication (see below), but which provide no secrecy. "No secrecy" means that if you configure these TCP connections to run across a network, anybody who can sniff packets can see your CLI commands.
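For quick scripting against a local instance, that fixed first line makes replies easy to split into status and body with ordinary shell tools. A minimal sketch (the ``ping`` command and the parsing are purely illustrative; a real wrapper would check the status before trusting the body)::

   #!/bin/sh
   # Send one CLI command in "pass" mode and split the reply into
   # status code and body, relying on the fixed first response line.
   out=$(varnishadm -p ping) || exit 1
   status=$(printf '%s\n' "$out" | awk 'NR==1 {print $1}')
   body=$(printf '%s\n' "$out" | sed 1d)
   echo "status: $status"
   echo "body:   $body"

Checking ``status`` for 200 before acting on ``body`` is the shell equivalent of what the varnishapi helpers do for you.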
If you need secrecy, use ``ssh`` to run ``varnishadm`` or to tunnel the TCP connection. By default `varnishd` binds to ``localhost`` and asks the kernel to assign a random port number. The resulting listen address is stored in the shared memory, where the ``varnishadm`` program finds it. You can configure ``varnishd`` to listen to a specific address with the ``-T`` argument; this will also be written to shared memory, so ``varnishadm`` keeps working:: # Bind to internal network varnishd -T 192.168.10.21:3245 You can also configure ``varnishd`` to actively open a TCP connection to another "controller" program, with the ``-M`` argument. Finally, when run in "debug mode" with the ``-d`` argument, ``varnishd`` will stay in the foreground and turn stdin/stdout into a CLI connection. .. _ref_psk_auth: Authentication CLI connections ------------------------------ CLI connections to `varnishd` are authenticated with a "pre-shared-key" authentication scheme, where the other end must prove they know *the contents of* the secret file ``varnishd`` uses. They do not have to read the precise same file on that specific computer; they could read an entirely different file on a different computer or fetch the secret from a server. The name of the file can be configured with the ``-S`` option, and ``varnishd`` records the name in shared memory, so ``varnishadm`` can find it. As a bare minimum ``varnishd`` needs to be able to read the file, but other than that, it can be restricted any way you want. Since it is not the file, but only its content, that matters, you can make the file unreadable by everybody, and instead place a copy of the file in the home directories of the authorized users. The file is read only at the moment when the `auth` CLI command is issued and the contents are not cached in `varnishd`, so you can change it as often as you want. An authenticated session looks like this: .. code-block:: text critter phk> telnet localhost 1234 Trying ::1... Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. 107 59 ixslvvxrgkjptxmcgnnsdxsvdmvfympg Authentication required. auth 455ce847f0073c7ab3b1465f74507b75d3dc064c1e7de3b71e00de9092fdc89a 200 279 ----------------------------- Varnish Cache CLI 1.0 ----------------------------- FreeBSD,13.0-CURRENT,amd64,-jnone,-sdefault,-sdefault,-hcritbit varnish-trunk revision 89a558e56390d425c52732a6c94087eec9083115 Type 'help' for command list. Type 'quit' to close CLI session. Type 'start' to launch worker process. The CLI status of 107 indicates that authentication is necessary. The first 32 characters of the response text are the challenge "ixsl...mpg". The challenge is randomly generated for each CLI connection, and changes each time a 107 is emitted. The most recently emitted challenge must be used for calculating the authenticator "455c…c89a". The authenticator is calculated by applying the SHA256 function to the following byte sequence: * Challenge string * Newline (0x0a) character. * Contents of the secret file * Challenge string * Newline (0x0a) character. and dumping the resulting digest in lower-case hex. In the above example, the secret file contains ``foo\n`` and thus: ..
code-block:: text critter phk> hexdump secret 00000000 66 6f 6f 0a |foo.| 00000004 critter phk> cat > tmpfile ixslvvxrgkjptxmcgnnsdxsvdmvfympg foo ixslvvxrgkjptxmcgnnsdxsvdmvfympg ^D critter phk> hexdump -C tmpfile 00000000 69 78 73 6c 76 76 78 72 67 6b 6a 70 74 78 6d 63 |ixslvvxrgkjptxmc| 00000010 67 6e 6e 73 64 78 73 76 64 6d 76 66 79 6d 70 67 |gnnsdxsvdmvfympg| 00000020 0a 66 6f 6f 0a 69 78 73 6c 76 76 78 72 67 6b 6a |.foo.ixslvvxrgkj| 00000030 70 74 78 6d 63 67 6e 6e 73 64 78 73 76 64 6d 76 |ptxmcgnnsdxsvdmv| 00000040 66 79 6d 70 67 0a |fympg.| 00000046 critter phk> sha256 tmpfile SHA256 (tmpfile) = 455ce847f0073c7ab3b1465f74507b75d3dc064c1e7de3b71e00de9092fdc89a critter phk> openssl dgst -sha256 < tmpfile 455ce847f0073c7ab3b1465f74507b75d3dc064c1e7de3b71e00de9092fdc89a The sourcefile `lib/libvarnish/cli_auth.c` contains a useful function which calculates the response, given an open filedescriptor to the secret file, and the challenge string. See also: --------- * :ref:`varnishadm(1)` * :ref:`varnishd(1)` * :ref:`vcl(7)` varnish-7.5.0/doc/sphinx/reference/directors.rst000066400000000000000000000231621457605730600217620ustar00rootroot00000000000000.. Copyright (c) 2015-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _ref-writing-a-director: %%%%%%%%%%%%%%%%%% Writing a Director %%%%%%%%%%%%%%%%%% Varnish already provides a set of general-purpose directors, and since Varnish 4, it is bundled in the built-in :ref:`vmod_directors(3)`. Writing a director boils down to writing a VMOD, using the proper data structures and APIs. Not only can you write your own director if none of the built-ins fit your needs, but since Varnish 4.1 you can even write your own backends. Backends can be categorized as such: - static: native backends declared in VCL - dynamic: native backends created by VMODs - custom: backends created and fully managed by VMODs Backends vs Directors ===================== The intuitive classification for backend and director is an endpoint for the former and a loadbalancer for the latter, but the actual implementation is a bit more subtle. VMODs can accept backend arguments and return backends in VCL (see :ref:`ref-vmod-vcl-c-types`), but the underlying C type is ``struct director`` aka the ``VCL_BACKEND`` typedef. Under the hood director is a generic concept, and a backend is a kind of director. The line between the two is somewhat blurry at this point, let's look at some code instead:: // VRT interface from vrt.h struct vdi_methods { unsigned magic; #define VDI_METHODS_MAGIC 0x4ec0c4bb const char *type; vdi_http1pipe_f *http1pipe; vdi_healthy_f *healthy; vdi_resolve_f *resolve; vdi_gethdrs_f *gethdrs; vdi_getip_f *getip; vdi_finish_f *finish; vdi_event_f *event; vdi_release_f *release; vdi_destroy_f *destroy; vdi_panic_f *panic; vdi_list_f *list; }; struct director { unsigned magic; #define DIRECTOR_MAGIC 0x3336351d void *priv; char *vcl_name; struct vcldir *vdir; struct lock *mtx; }; A director can be summed up as: - being of a specific ``type`` with a set of operations which is identical for all instances of that particular type - some instance specific attributes such as a ``vcl_name`` and ``type``\ -specific private data The difference between a *load balancing* director and a *backend* director is mainly the functions they will implement. 
The fundamental steps towards a director implementation are: - implement the required functions - fill a ``struct vdi_methods`` with the name of your director type and your function pointers Existence of a ``healthy`` callback signifies that the director has some means of dynamically determining its health state. - in your constructor or other initialization routine, allocate and initialize your director-specific configuration state (aka private data) and call ``VRT_AddDirector()`` with your ``struct vdi_methods``, the pointer to your state and a printf format for the name of your director instance - implement methods or functions returning ``VCL_BACKEND`` - in your destructor or other finalizer, call ``VRT_DelDirector()`` - implement a ``destroy`` callback to destroy the actual director private state. It will be called when all references to the director are gone, until then the private state must remain intact and ``vdi_methods`` functions callable (but they may return errors). While vmods can implement functions returning directors, :ref:`ref-vmod-vcl-c-objects` are usually a more natural representation with vmod object instances being or referring to the director private data. Load Balancing Directors ======================== As in :ref:`vmod_directors(3)`, you can write directors that will group backends sharing the same role, and pick them according to a strategy. If you need more than the built-in strategies (round-robin, hash, ...), even though they can be stacked, it is always possible to write your own. In this case you simply need to implement the ``resolve`` function for the director. Directors are walked until a leaf director is found. A leaf director doesn't have a ``resolve`` function and is used to actually make the backend request, just like the backends you declare in VCL. *load balancing* directors use ``VRT_Assign_Backend()`` to take references to other directors. They *must* implement a ``release`` callback which has to release all references to other directors and ensure that none are gained after it returns. Static Directors ================ As opposed to dynamic backends covered below, directors which are guaranteed to have VCL lifetime (that is, they do not get destroyed before the VCL goes cold) can call ``VRT_StaticDirector()`` to avoid reference counting overhead. Dynamic Backends ================ If you want to speak HTTP/1 over TCP or UDS, but for some reason VCL does not fit the bill, you can instead reuse the whole backend facility. It allows you for instance to add and remove backends on-demand without the need to reload your VCL. You can then leverage your provisioning system. Consider the following snippet:: backend default { .host = "localhost"; } The VCL compiler turns this declaration into a ``struct vrt_backend``. When the VCL is loaded, Varnish calls ``VRT_new_backend`` (or rather ``VRT_new_backend_clustered`` for VSM efficiency) in order to create the director. Varnish doesn't expose its data structure for actual backends, only the director abstraction and dynamic backends are built just like static backends, one *struct* at a time. You can get rid of the ``struct vrt_backend`` as soon as you have the ``struct director``. A (dynamic) backend can't exceed its VCL's lifespan, because native backends are *owned* by VCLs. Though a dynamic backend can't outlive its VCL, it can be deleted any time with ``VRT_delete_backend``. The VCL will delete the remaining backends once discarded, you don't need to take care of it. 
Reference counting is used to ensure that backends which are no longer referenced are destroyed. Finally, Varnish will take care of event propagation for *all* native backends, but dynamic backends can only be created when the VCL is warm. If your backends are created by an independent thread (basically outside of VCL scope) you must subscribe to VCL events and watch for VCL state (see :ref:`ref-vmod-event-functions`). Varnish will panic if you try to create a backend on a cold VCL, and ``VRT_new_backend`` will return ``NULL`` if the VCL is cooling. You are also encouraged to comply with the :ref:`ref_vcl_temperature` in general. .. _ref-writing-a-director-loadbalancer: Health Probes ============= It is possible in a VCL program to query the health of a director (see :ref:`std.healthy()`). A director can report its health if it implements the ``healthy`` function; otherwise it is always considered healthy. Unless you are making a dynamic backend, you need to take care of the health probes yourself. For *load balancing* directors, being healthy typically means having at least one healthy underlying backend or director. For dynamic backends, it is just a matter of assigning the ``probe`` field in the ``struct vrt_backend``. Once the director is created, the probe definition too is no longer needed. It is then Varnish that will take care of the health probe and disable the feature on a cold VCL (see :ref:`ref-vmod-event-functions`). Instead of initializing your own probe definition, you can get a ``VCL_PROBE`` directly built from VCL (see :ref:`ref-vmod-vcl-c-types`). Custom Backends =============== If you want to implement a custom backend, have a look at how Varnish implements native backends. It is the canonical implementation, and though it provides other services like connection pooling or statistics, it is essentially a director whose state is a ``struct backend``. Varnish native backends currently speak HTTP/1 over TCP or UDS, and as such, you need to make your own custom backend if you want Varnish to do otherwise, such as connect over UDP or speak a different protocol. If you want to leverage probe declarations in VCL, which have the advantage of being reusable since they are only specifications, you can. However, you need to implement the whole probing infrastructure from scratch. You may also consider making your custom backend compliant with regard to the VCL state (see :ref:`ref-vmod-event-functions`). If you are implementing the `gethdrs` method of your backend (i.e. your backend is able to generate a backend response to be manipulated in `vcl_backend_response`), you will want to log the response code, protocol and the various headers it'll create for easier debugging. For this, you can look at the `VSL*` family of functions, listed in `cache/cache.h`. Data structure considerations ----------------------------- When you are creating a custom backend, you may want to provide the semantics of the native backends. In this case, instead of repeating the redundant fields between data structures, you can use the macros ``VRT_BACKEND_FIELDS`` and ``VRT_BACKEND_PROBE_FIELDS`` to declare them all at once. This is the little dance Varnish uses to copy data between the ``struct vrt_backend`` and its internal data structure for example. The copy can be automated with the macros ``VRT_BACKEND_HANDLE`` and ``VRT_BACKEND_PROBE_HANDLE``. You can look at how they can be used in the Varnish code base.
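As a closing, practical note: while developing or debugging a director it is often useful to look at it from the operator side as well. The CLI can list directors and override their health state; a small sketch (the ``boot.default`` name is illustrative, it depends on your VCL and backend names)::

   # List backends and directors known to the loaded VCLs, with probe details
   varnishadm backend.list -p

   # Temporarily override the health state seen from VCL, then hand
   # control back to the probe
   varnishadm backend.set_health boot.default sick
   varnishadm backend.set_health boot.default auto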
varnish-7.5.0/doc/sphinx/reference/index.rst000066400000000000000000000043311457605730600210700ustar00rootroot00000000000000.. Copyright (c) 2010-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _reference-index: %%%%%%%%%%%%%%%%%%%%%%%%%%%% The Varnish Reference Manual %%%%%%%%%%%%%%%%%%%%%%%%%%%% .. _reference-vcl: The VCL language ---------------- .. toctree:: :maxdepth: 1 VCL - The Varnish Configuration Language VCL Variables VCL Steps VCL backend configuration VCL backend health probe states.rst A collection of :ref:`vcl-design-patterns-index` is available in addition to these reference manuals. Bundled VMODs ------------- .. toctree:: :maxdepth: 1 vmod_blob.rst vmod_cookie.rst vmod_directors.rst vmod_h2.rst vmod_proxy.rst vmod_purge.rst vmod_std.rst vmod_unix.rst The CLI interface ----------------- .. toctree:: :maxdepth: 1 VarnishAdm - Control program for Varnish CLI - The commands varnish understands Logging and monitoring ---------------------- .. toctree:: :maxdepth: 1 VSL - The log records Varnish generates VSLQ - Filter/Query expressions for VSL VarnishLog - Logging raw VSL VarnishNCSA - Logging in NCSA format VarnishHist - Realtime response histogram display VarnishTop - Realtime activity display Counters and statistics ----------------------- .. toctree:: :maxdepth: 1 VSC - The statistics Varnish collects VarnishStat - Watching and logging statistics The Varnishd program -------------------- .. toctree:: :maxdepth: 1 VarnishD - The program which does the actual work Varnishtest ----------- .. toctree:: :maxdepth: 1 VTC - Language for writing test cases VarnishTest - execute test cases vmod_vtc.rst For Developers & DevOps ----------------------- .. toctree:: :maxdepth: 1 Shell tricks VMODS - Extensions to VCL VEXT - Varnish Extensions VSM - Shared memory use VDIR - Backends & Directors VCLI - CLI protocol API .. Vmod_debug ? .. Libvarnishapi .. VRT .. VRT compat levels Code-book --------- .. toctree:: :maxdepth: 1 vtla.rst varnish-7.5.0/doc/sphinx/reference/shell_tricks.rst000066400000000000000000000013001457605730600224400ustar00rootroot00000000000000.. Copyright (c) 2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _ref-shell_tricks: %%%%%%%%%%%% Shell Tricks %%%%%%%%%%%% All the varnish programs can be invoked with the single argument ``--optstring`` to request their `getopt()` specification, which simplifies wrapper scripts: .. code-block:: text optstring=$(varnishfoo --optstring) while getopts "$optstring" opt do case $opt in n) # handle $OPTARG ;; # handle other options *) # ignore unneeded options ;; esac done varnishfoo "$@" # do something with the options varnish-7.5.0/doc/sphinx/reference/states.rst000066400000000000000000000025501457605730600212650ustar00rootroot00000000000000.. Copyright (c) 2014-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _reference-states: ========================= Varnish Processing States ========================= ------------ Introduction ------------ Varnish processing of client and backend requests is implemented as state machines. Whenever a state is entered, a C function is called, which in turn calls the appropriate Varnish core code functions to process the request or response at this stage. For most states, core code also calls into a state-specific function compiled from VCL, a VCL subroutine (see :ref:`vcl_steps`). 
As a general guideline, core code aims to prepare objects accessible from VCL with good defaults for the most common cases before calling into the respective VCL subroutine. These can then be modified from VCL where necessary. The following graphs attempt to provide an overview of the processing states, their transitions and the most relevant functions in core code. They represent a compromise between usefulness for core/VMOD developers and administrators and are intended to serve as the reference basis for derivative work, such as more VCL-centric views. ----------- Client Side ----------- .. image:: ../../graphviz/cache_req_fsm.svg ------------ Backend Side ------------ .. image:: ../../graphviz/cache_fetch.svg varnish-7.5.0/doc/sphinx/reference/varnish-cli.rst000066400000000000000000000234501457605730600222030ustar00rootroot00000000000000.. Copyright (c) 2011-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _varnish-cli(7): =========== varnish-cli =========== ------------------------------ Varnish Command Line Interface ------------------------------ :Manual section: 7 DESCRIPTION =========== Varnish has a command line interface (CLI) which can control and change most of the operational parameters and the configuration of Varnish, without interrupting the running service. The CLI can be used for the following tasks: configuration You can upload, change and delete VCL files from the CLI. parameters You can inspect and change the various parameters Varnish has available through the CLI. The individual parameters are documented in the varnishd(1) man page. bans Bans are filters that are applied to keep Varnish from serving stale content. When you issue a ban Varnish will not serve any *banned* object from cache, but rather re-fetch it from its backend servers. process management You can stop and start the cache (child) process through the CLI. You can also retrieve the latest stack trace if the child process has crashed. If you invoke varnishd(1) with -T, -M or -d the CLI will be available. In debug mode (-d) the CLI will be in the foreground; with -T you can connect to it with varnishadm or telnet, and with -M varnishd will connect back to a listening service *pushing* the CLI to that service. Please see :ref:`varnishd(1)` for details. .. _ref_syntax: Syntax ------ The Varnish CLI is similar to another command line interface, the Bourne Shell. Commands are usually terminated with a newline, and they may take arguments. The command and its arguments are *tokenized* before parsing, and as such arguments containing spaces must be enclosed in double quotes. It means that command parsing of :: help banner is equivalent to :: "help" banner because the double quotes only indicate the boundaries of the ``help`` token. Within double quotes you can escape characters with \\ (backslash). The \\n, \\r, and \\t get translated to newlines, carriage returns, and tabs. Double quotes and backslashes themselves can be escaped with \\" and \\\\ respectively. To enter characters in octal use the \\nnn syntax. Hexadecimals can be entered with the \\xnn syntax. Commands may not end with a newline when a shell-style *here document* (here-document or heredoc) is used. The format of a here document is:: << word here document word *word* can be any continuous string chosen to make sure it doesn't appear naturally in the following *here document*. Traditionally EOF or END is used.
Quoting pitfalls ---------------- Integrating with the Varnish CLI can be sometimes surprising when quoting is involved. For instance in Bourne Shell the delimiter used with here documents may or may not be separated by spaces from the ``<<`` token:: cat <' Line 1 Pos 1) <" or "<" for size comparisons. Prepending an operator with "!" negates the expression. The argument could be a quoted string, a regexp, or an integer. Integers can have "KB", "MB", "GB" or "TB" appended for size related fields. .. _ref_vcl_temperature: VCL Temperature --------------- A VCL program goes through several states related to the different commands: it can be loaded, used, and later discarded. You can load several VCL programs and switch at any time from one to another. There is only one active VCL, but the previous active VCL will be maintained active until all its transactions are over. Over time, if you often refresh your VCL and keep the previous versions around, resource consumption will increase, you can't escape that. However, most of the time you want to pay the price only for the active VCL and keep older VCLs in case you'd need to rollback to a previous version. The VCL temperature allows you to minimize the footprint of inactive VCLs. Once a VCL becomes cold, Varnish will release all the resources that can be be later reacquired. You can manually set the temperature of a VCL or let varnish automatically handle it. EXAMPLES ======== Load a multi-line VCL using shell-style *here document*:: vcl.inline example << EOF vcl 4.0; backend www { .host = "127.0.0.1"; .port = "8080"; } EOF Ban all requests where req.url exactly matches the string /news:: ban req.url == "/news" Ban all documents where the serving host is "example.com" or "www.example.com", and where the Set-Cookie header received from the backend contains "USERID=1663":: ban req.http.host ~ "^(?i)(www\\.)?example\\.com$" && obj.http.set-cookie ~ "USERID=1663" AUTHORS ======= This manual page was originally written by Per Buer and later modified by Federico G. Schwindt, Dridi Boukelmoune, Lasse Karstensen and Poul-Henning Kamp. SEE ALSO ======== * :ref:`varnishadm(1)` * :ref:`varnishd(1)` * :ref:`vcl(7)` * For API use of the CLI: The Reference Manual. varnish-7.5.0/doc/sphinx/reference/varnish-counters.rst000066400000000000000000000007461457605730600233010ustar00rootroot00000000000000.. Copyright (c) 2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _varnish-counters(7): ================ varnish-counters ================ --------------------------------- Varnish counter field definitions --------------------------------- :Manual section: 7 .. include:: ../include/counters.rst AUTHORS ======= This man page was written by Lasse Karstensen, using content from vsc2rst written by Tollef Fog Heen. varnish-7.5.0/doc/sphinx/reference/varnishadm.rst000066400000000000000000000052411457605730600221160ustar00rootroot00000000000000.. Copyright (c) 2010-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _varnishadm(1): ========== varnishadm ========== Control a running Varnish instance ---------------------------------- :Manual section: 1 SYNOPSIS ======== varnishadm [-h] [-n ident] [-p] [-S secretfile] [-T [address]:port] [-t timeout] [command [...]] DESCRIPTION =========== The `varnishadm` utility establishes a CLI connection to varnishd either using -n *name* or using the -T and -S arguments. 
If -n *name* is given the location of the secret file and the address:port is looked up in shared memory. If neither is given `varnishadm` will look for an instance without a given name. If a command is given, the command and arguments are sent over the CLI connection and the result returned on stdout. If no command argument is given `varnishadm` will pass commands and replies between the CLI socket and stdin/stdout. OPTIONS ======= -h Print program usage and exit. -n ident Connect to the instance of `varnishd` with this name. -p Force `pass` mode and make the output follow the VCLI protocol. This disables command-history/command-completion and makes it easier for programs to parse the response(s). -S secretfile Specify the authentication secret file. This should be the same -S argument as was given to `varnishd`. Only processes which can read the contents of this file, will be able to authenticate the CLI connection. -T Connect to the management interface at the specified address and port. -t timeout Wait no longer than this many seconds for an operation to finish. The syntax and operation of the actual CLI interface is described in the :ref:`varnish-cli(7)` manual page. Parameters are described in :ref:`varnishd(1)` manual page. Additionally, a summary of commands can be obtained by issuing the *help* command, and a summary of parameters can be obtained by issuing the *param.show* command. EXIT STATUS =========== If a command is given, the exit status of the `varnishadm` utility is zero if the command succeeded, and non-zero otherwise. EXAMPLES ======== Some ways you can use varnishadm:: varnishadm -T localhost:999 -S /var/db/secret vcl.use foo echo vcl.use foo | varnishadm -T localhost:999 -S /var/db/secret echo vcl.use foo | ssh vhost varnishadm -T localhost:999 -S /var/db/secret SEE ALSO ======== * :ref:`varnishd(1)` * :ref:`varnish-cli(7)` AUTHORS ======= The `varnishadm` utility and this manual page were written by Cecilie Fritzvold. This man page has later been modified by Per Buer, Federico G. Schwindt and Lasse Karstensen. varnish-7.5.0/doc/sphinx/reference/varnishd.rst000066400000000000000000000473371457605730600216140ustar00rootroot00000000000000.. Copyright (c) 2010-2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _varnishd(1): ======== varnishd ======== ----------------------- HTTP accelerator daemon ----------------------- :Manual section: 1 SYNOPSIS ======== varnishd [-a [name=][listen_address[,PROTO]] [-b [host[:port]|path]] [-C] [-d] [-F] [-f config] [-h type[,options]] [-I clifile] [-i identity] [-j jail[,jailoptions]] [-l vsl] [-M address:port] [-n workdir] [-P file] [-p param=value] [-r param[,param...]] [-S secret-file] [-s [name=]kind[,options]] [-T address[:port]] [-t TTL] [-V] [-W waiter] varnishd [-x parameter|vsl|cli|builtin|optstring] varnishd [-?] DESCRIPTION =========== The `varnishd` daemon accepts HTTP requests from clients, passes them on to a backend server and caches the returned documents to better satisfy future requests for the same document. .. _ref-varnishd-options: OPTIONS ======= Basic options ------------- -a <[name=][listen_address[,PROTO]]> Accept for client requests on the specified listen_address (see below). Name is referenced in logs. If name is not specified, "a0", "a1", etc. is used. PROTO can be "HTTP" (the default) or "PROXY". Both version 1 and 2 of the proxy protocol can be used. Multiple -a arguments are allowed. 
If no -a argument is given, the default `-a :80` will listen to all IPv4 and IPv6 interfaces. -a <[name=][ip_address][:port][,PROTO]> The ip_address can be a host name ("localhost"), an IPv4 dotted-quad ("127.0.0.1") or an IPv6 address enclosed in square brackets ("[::1]") If port is not specified, port 80 (http) is used. At least one of ip_address or port is required. -a <[name=][path][,PROTO][,user=name][,group=name][,mode=octal]> (VCL4.1 and higher) Accept connections on a Unix domain socket. Path must be absolute ("/path/to/listen.sock") or "@" followed by the name of an abstract socket ("@myvarnishd"). The user, group and mode sub-arguments may be used to specify the permissions of the socket file -- use names for user and group, and a 3-digit octal value for mode. These sub-arguments do not apply to abstract sockets. -b <[host[:port]|path]> Use the specified host as backend server. If port is not specified, the default is 8080. If the value of ``-b`` begins with ``/``, it is interpreted as the absolute path of a Unix domain socket to which Varnish connects. In that case, the value of ``-b`` must satisfy the conditions required for the ``.path`` field of a backend declaration, see :ref:`vcl(7)`. Backends with Unix socket addresses may only be used with VCL versions >= 4.1. -b can be used only once, and not together with f. -f config Use the specified VCL configuration file instead of the builtin default. See :ref:`vcl(7)` for details on VCL syntax. If a single -f option is used, then the VCL instance loaded from the file is named "boot" and immediately becomes active. If more than one -f option is used, the VCL instances are named "boot0", "boot1" and so forth, in the order corresponding to the -f arguments, and the last one is named "boot", which becomes active. Either -b or one or more -f options must be specified, but not both, and they cannot both be left out, unless -d is used to start `varnishd` in debugging mode. If the empty string is specified as the sole -f option, then `varnishd` starts without starting the worker process, and the management process will accept CLI commands. You can also combine an empty -f option with an initialization script (-I option) and the child process will be started if there is an active VCL at the end of the initialization. When used with a relative file name, config is searched in the ``vcl_path``. It is possible to set this path prior to using ``-f`` options with a ``-p`` option. During startup, `varnishd` doesn't complain about unsafe VCL paths: unlike the `varnish-cli(7)` that could later be accessed remotely, starting `varnishd` requires local privileges. .. _opt_n: -n workdir Runtime directory for the shared memory, compiled VCLs etc. In performance critical applications, this directory should be on a RAM backed filesystem. Relative paths will be appended to `/var/run/` (NB: Binary packages of Varnish may have adjusted this to the platform.) The default value is `/var/run/varnishd` (NB: as above.) Documentation options --------------------- For these options, `varnishd` prints information to standard output and exits. When a -x option is used, it must be the only option (it outputs documentation in reStructuredText, aka RST). -? Print the usage message. -x parameter Print documentation of the runtime parameters (-p options), see `List of Parameters`_. -x vsl Print documentation of the tags used in the Varnish shared memory log, see :ref:`vsl(7)`. -x cli Print documentation of the command line interface, see :ref:`varnish-cli(7)`. 
-x builtin Print the contents of the default VCL program ``builtin.vcl``. -x optstring Print the optstring parameter to ``getopt(3)`` to help writing wrapper scripts. Operations options ------------------ -F Do not fork, run in the foreground. Only one of -F or -d can be specified, and -F cannot be used together with -C. -T Offer a management interface on the specified address and port. See :ref:`varnish-cli(7)` for documentation of the management commands. To disable the management interface use ``none``. -M Connect to this port and offer the command line interface. Think of it as a reverse shell. When running with -M and there is no backend defined the child process (the cache) will not start initially. -P file Write the PID of the process to the specified file. -i identity Specify the identity of the Varnish server. This can be accessed using ``server.identity`` from VCL. The server identity is used for the ``received-by`` field of ``Via`` headers generated by Varnish. For this reason, it must be a valid token as defined by the HTTP grammar. If not specified the output of ``gethostname(3)`` is used, in which case the syntax is assumed to be correct. -I clifile Execute the management commands in the file given as ``clifile`` before the the worker process starts, see `CLI Command File`_. Tuning options -------------- -t TTL Specifies the default time to live (TTL) for cached objects. This is a shortcut for specifying the *default_ttl* run-time parameter. -p Set the parameter specified by param to the specified value, see `List of Parameters`_ for details. This option can be used multiple times to specify multiple parameters. -s <[name=]type[,options]> Use the specified storage backend. See `Storage Backend`_ section. This option can be used multiple times to specify multiple storage files. Name is referenced in logs, VCL, statistics, etc. If name is not specified, "s0", "s1" and so forth is used. -l Specifies size of the space for the VSL records, shorthand for ``-p vsl_space=``. Scaling suffixes like 'K' and 'M' can be used up to (G)igabytes. See `vsl_space`_ for more information. Security options ---------------- -r Make the listed parameters read only. This gives the system administrator a way to limit what the Varnish CLI can do. Consider making parameters such as *cc_command*, *vcc_allow_inline_c* and *vmod_path* read only as these can potentially be used to escalate privileges from the CLI. -S secret-file Path to a file containing a secret used for authorizing access to the management port. To disable authentication use ``none``. If this argument is not provided, a secret drawn from the system PRNG will be written to a file called ``_.secret`` in the working directory (see `opt_n`_) with default ownership and permissions of the user having started varnish. Thus, users wishing to delegate control over varnish will probably want to create a custom secret file with appropriate permissions (ie. readable by a unix group to delegate control to). -j Specify the jailing mechanism to use. See `Jail`_ section. Advanced, development and debugging options ------------------------------------------- -d Enables debugging mode: The parent process runs in the foreground with a CLI connection on stdin/stdout, and the child process must be started explicitly with a CLI command. Terminating the parent process will also terminate the child. Only one of -d or -F can be specified, and -d cannot be used together with -C. -C Print VCL code compiled to C language and exit. 
Specify the VCL file to compile with the -f option. Either -f or -b must be used with -C, and -C cannot be used with -F or -d. -V Display the version number and exit. This must be the only option. -h Specifies the hash algorithm. See `Hash Algorithm`_ section for a list of supported algorithms. -W waiter Specifies the waiter type to use. .. _opt_h: Hash Algorithm -------------- The following hash algorithms are available: -h critbit self-scaling tree structure. The default hash algorithm in Varnish Cache 2.1 and onwards. In comparison to a more traditional B tree the critbit tree is almost completely lockless. Do not change this unless you are certain what you're doing. -h simple_list A simple doubly-linked list. Not recommended for production use. -h A standard hash table. The hash key is the CRC32 of the object's URL modulo the size of the hash table. Each table entry points to a list of elements which share the same hash key. The buckets parameter specifies the number of entries in the hash table. The default is 16383. .. _ref-varnishd-opt_s: Storage Backend --------------- The argument format to define storage backends is: -s <[name]=kind[,options]> If *name* is omitted, Varnish will name storages ``s``\ *N*, starting with ``s0`` and incrementing *N* for every new storage. For *kind* and *options* see details below. Storages can be used in vcl as ``storage.``\ *name*, so, for example if ``myStorage`` was defined by ``-s myStorage=malloc,5G``, it could be used in VCL like so:: set beresp.storage = storage.myStorage; A special *name* is ``Transient`` which is the default storage for uncacheable objects as resulting from a pass, hit-for-miss or hit-for-pass. If no ``-s`` options are given, the default is:: -s default,100m If no ``Transient`` storage is defined, the default is an unbound ``default`` storage as if defined as:: -s Transient=default The following storage types and options are available: -s The default storage type resolves to ``umem`` where available and ``malloc`` otherwise. -s malloc is a memory based backend. -s umem is a storage backend which is more efficient than malloc on platforms where it is available. See the section on umem in chapter `Storage backends` of `The Varnish Users Guide` for details. -s The file backend stores data in a file on disk. The file will be accessed using mmap. Note that this storage provide no cache persistence. The path is mandatory. If path points to a directory, a temporary file will be created in that directory and immediately unlinked. If path points to a non-existing file, the file will be created. If size is omitted, and path points to an existing file with a size greater than zero, the size of that file will be used. If not, an error is reported. Granularity sets the allocation block size. Defaults to the system page size or the filesystem block size, whichever is larger. Advice tells the kernel how `varnishd` expects to use this mapped region so that the kernel can choose the appropriate read-ahead and caching techniques. Possible values are ``normal``, ``random`` and ``sequential``, corresponding to MADV_NORMAL, MADV_RANDOM and MADV_SEQUENTIAL madvise() advice argument, respectively. Defaults to ``random``. -s Persistent storage. Varnish will store objects in a file in a manner that will secure the survival of *most* of the objects in the event of a planned or unplanned shutdown of Varnish. The persistent storage backend has multiple issues with it and will likely be removed from a future version of Varnish. .. 
_ref-varnishd-opt_j: Jail ---- Varnish jails are a generalization over various platform specific methods to reduce the privileges of varnish processes. They may have specific options. Available jails are: -j Reduce `privileges(5)` for `varnishd` and sub-processes to the minimally required set. Only available on platforms which have the `setppriv(2)` call. The optional `worker` argument can be used to pass a privilege-specification (see `ppriv(1)`) by which to extend the effective set of the varnish worker process. While extended privileges may be required by custom vmods, *not* using the `worker` option is always more secure. Example to grant basic privileges to the worker process:: -j solaris,worker=basic -j Default on all other platforms when `varnishd` is started with an effective uid of 0 ("as root"). With the ``unix`` jail mechanism activated, varnish will switch to an alternative user for subprocesses and change the effective uid of the master process whenever possible. The optional `user` argument specifies which alternative user to use. It defaults to ``varnish``. The optional `ccgroup` argument specifies a group to add to varnish subprocesses requiring access to a c-compiler. There is no default. The optional `workuser` argument specifies an alternative user to use for the worker process. It defaults to ``vcache``. The users given for the `user` and `workuser` arguments need to have the same primary ("login") group. To set up a system for the default users with a group name ``varnish``, shell commands similar to these may be used:: groupadd varnish useradd -g varnish -d /nonexistent -s /bin/false \ -c "Varnish-Cache Daemon User" varnish useradd -g varnish -d /nonexistent -s /bin/false \ -c "Varnish-Cache Worker User" vcache -j none last resort jail choice: With jail mechanism ``none``, varnish will run all processes with the privileges it was started with. .. _ref-varnishd-opt_T: Management Interface -------------------- If the -T option was specified, `varnishd` will offer a command-line management interface on the specified address and port. The recommended way of connecting to the command-line management interface is through :ref:`varnishadm(1)`. The commands available are documented in :ref:`varnish-cli(7)`. CLI Command File ---------------- The -I option makes it possible to run arbitrary management commands when `varnishd` is launched, before the worker process is started. In particular, this is the way to load configurations, apply labels to them, and make a VCL instance active that uses those labels on startup:: vcl.load panic /etc/varnish_panic.vcl vcl.load siteA0 /etc/varnish_siteA.vcl vcl.load siteB0 /etc/varnish_siteB.vcl vcl.load siteC0 /etc/varnish_siteC.vcl vcl.label siteA siteA0 vcl.label siteB siteB0 vcl.label siteC siteC0 vcl.load main /etc/varnish_main.vcl vcl.use main Every line in the file, including the last line, must be terminated by a newline or carriage return or is otherwise considered truncated, which is a fatal error. If a command in the file is prefixed with '-', failure will not abort the startup. Note that it is necessary to include an explicit `vcl.use` command to select which VCL should be the active VCL when relying on CLI Command File to load the configurations at startup. .. _ref-varnishd-params: RUN TIME PARAMETERS =================== Runtime parameters can either be set during startup with the ``-p`` command line option for ``varnishd(1)`` or through the CLI using the ``param.set`` or ``param.reset`` commands. 
They can be locked during startup with the ``-r`` command line option. Run Time Parameter Units ------------------------ There are different types of parameters that may accept a list of specific values, or optionally take a unit suffix. bool ~~~~ A boolean parameter accepts the values ``on`` and ``off``. It will also recognize the following values: - ``yes`` and ``no`` - ``true`` and ``false`` - ``enable`` and ``disable`` bytes ~~~~~ A bytes parameter requires one of the following units suffixes: - ``b`` (bytes) - ``k`` (kibibytes, 1024 bytes) - ``m`` (mebibytes, 1024 kibibytes) - ``g`` (gibibytes, 1024 mebibytes) - ``t`` (tebibytes, 1024 gibibytes) - ``p`` (pebibytes, 1024 tebibytes) Multiplicator units may be appended with an extra ``b``. For example ``32k`` is equivalent to ``32kb``. Bytes units are case-insensitive. seconds ~~~~~~~ A duration parameter may accept the following units suffixes: - ``ms`` (milliseconds) - ``s`` (seconds) - ``m`` (minutes) - ``h`` (hours) - ``d`` (days) - ``w`` (weeks) - ``y`` (years) If the parameter is a timeout or a deadline, a value of "never" (when allowed) disables the effect of the parameter. Run Time Parameter Flags ------------------------ Runtime parameters are marked with shorthand flags to avoid repeating the same text over and over in the table below. The meaning of the flags are: * `experimental` We have no solid information about good/bad/optimal values for this parameter. Feedback with experience and observations are most welcome. * `delayed` This parameter can be changed on the fly, but will not take effect immediately. * `restart` The worker process must be stopped and restarted, before this parameter takes effect. * `reload` The VCL programs must be reloaded for this parameter to take effect. * `wizard` Do not touch unless you *really* know what you're doing. * `only_root` Only works if `varnishd` is running as root. Default Value Exceptions on 32 bit Systems ------------------------------------------ Be aware that on 32 bit systems, certain default or maximum values are reduced relative to the values listed below, in order to conserve VM space: * workspace_client: 24k * workspace_backend: 20k * http_resp_size: 8k * http_req_size: 12k * gzip_buffer: 4k * vsl_buffer: 4k * vsl_space: 1G (maximum) * thread_pool_stack: 64k .. _List of Parameters: List of Parameters ------------------ This text is produced from the same text you will find in the CLI if you use the param.show command: .. include:: ../include/params.rst EXIT CODES ========== Varnish and bundled tools will, in most cases, exit with one of the following codes * `0` OK * `1` Some error which could be system-dependent and/or transient * `2` Serious configuration / parameter error - retrying with the same configuration / parameters is most likely useless The `varnishd` master process may also OR its exit code * with `0x20` when the `varnishd` child process died, * with `0x40` when the `varnishd` child process was terminated by a signal and * with `0x80` when a core was dumped. SEE ALSO ======== * :ref:`varnishlog(1)` * :ref:`varnishhist(1)` * :ref:`varnishncsa(1)` * :ref:`varnishstat(1)` * :ref:`varnishtop(1)` * :ref:`varnish-cli(7)` * :ref:`vcl(7)` HISTORY ======= The `varnishd` daemon was developed by Poul-Henning Kamp in cooperation with Verdens Gang AS and Varnish Software. This manual page was written by Dag-Erling Smørgrav with updates by Stig Sandbeck Mathisen , Nils Goroll and others. COPYRIGHT ========= This document is licensed under the same licence as Varnish itself. 
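Putting those units to use, parameters can be set either on the ``varnishd`` command line or later through the CLI; a short sketch (the chosen parameters and values are only illustrative)::

   # At startup: a two-minute default TTL and a 64 kibibyte client workspace
   varnishd -f /etc/varnish/default.vcl -p default_ttl=2m -p workspace_client=64k

   # At runtime, through the CLI
   varnishadm param.set default_grace 10s
   varnishadm param.set http_gzip_support off
   varnishadm param.reset default_grace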
See LICENCE for details. * Copyright (c) 2007-2015 Varnish Software AS varnish-7.5.0/doc/sphinx/reference/varnishhist.rst000066400000000000000000000026461457605730600223320ustar00rootroot00000000000000.. Copyright (c) 2010-2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _varnishhist(1): =========== varnishhist =========== ------------------------- Varnish request histogram ------------------------- :Manual section: 1 SYNOPSIS ======== .. include:: ../include/varnishhist_synopsis.rst varnishhist |synopsis| DESCRIPTION =========== The varnishhist utility reads varnishd(1) shared memory logs and presents a continuously updated histogram showing the distribution of the last N requests by their processing. The value of N and the vertical scale are displayed in the top left corner. The horizontal scale is logarithmic. Hits are marked with a pipe character ("|"), and misses are marked with a hash character ("#"). The following options are available: .. include:: ../include/varnishhist_options.rst SEE ALSO ======== * :ref:`varnishd(1)` * :ref:`varnishlog(1)` * :ref:`varnishncsa(1)` * :ref:`varnishstat(1)` * :ref:`varnishtop(1)` * :ref:`vsl(7)` HISTORY ======= The varnishhist utility was developed by Poul-Henning Kamp in cooperation with Verdens Gang AS and Varnish Software AS. This manual page was written by Dag-Erling Smørgrav. COPYRIGHT ========= This document is licensed under the same licence as Varnish itself. See LICENCE for details. * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS varnish-7.5.0/doc/sphinx/reference/varnishlog.rst000066400000000000000000000024711457605730600221400ustar00rootroot00000000000000.. Copyright (c) 2010-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _varnishlog(1): ========== varnishlog ========== -------------------- Display Varnish logs -------------------- :Manual section: 1 SYNOPSIS ======== .. include:: ../include/varnishlog_synopsis.rst varnishlog |synopsis| OPTIONS ======= The following options are available: .. include:: ../include/varnishlog_options.rst SIGNALS ======= * SIGHUP Rotate the log file (see -w option) in daemon mode, abort the loop and die gracefully when running in the foreground. * SIGUSR1 Flush any outstanding transactions SEE ALSO ======== * :ref:`varnishd(1)` * :ref:`varnishhist(1)` * :ref:`varnishncsa(1)` * :ref:`varnishstat(1)` * :ref:`varnishtop(1)` * :ref:`vsl(7)` * :ref:`vsl-query(7)` HISTORY ======= The varnishlog utility was developed by Poul-Henning Kamp in cooperation with Verdens Gang AS and Varnish Software AS. This manual page was initially written by Dag-Erling Smørgrav, and later updated by Per Buer and Martin Blix Grydeland. COPYRIGHT ========= This document is licensed under the same licence as Varnish itself. See LICENCE for details. * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS varnish-7.5.0/doc/sphinx/reference/varnishncsa.rst000066400000000000000000000212321457605730600222770ustar00rootroot00000000000000.. Copyright (c) 2010-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. 
_varnishncsa(1): =========== varnishncsa =========== --------------------------------------------------------- Display Varnish logs in Apache / NCSA combined log format --------------------------------------------------------- :Manual section: 1 SYNOPSIS ======== .. include:: ../include/varnishncsa_synopsis.rst varnishncsa |synopsis| DESCRIPTION =========== The varnishncsa utility reads varnishd(1) shared memory logs and presents them in the Apache / NCSA "combined" log format. Each log line produced is based on a single Request type transaction gathered from the shared memory log. The Request transaction is then scanned for the relevant parts in order to output one log line. To filter the log lines produced, use the query language to select the applicable transactions. Non-request transactions are ignored. The following options are available: .. include:: ../include/varnishncsa_options.rst MODES ===== The default mode of varnishncsa is "client mode". In this mode, the log will be similar to what a web server would produce in the absence of varnish. Client mode can be explicitly selected by using -c. If the -b switch is specified, varnishncsa will operate in "backend mode". In this mode, requests generated by varnish to the backends will be logged. Unless -c is also specified, client requests received by varnish will be ignored. When running varnishncsa in both backend and client mode, it is strongly advised to include the format specifier %{Varnish:side}x to distinguish between backend and client requests. Client requests that results in a pipe (ie. return(pipe) in vcl), will not generate logging in backend mode. This is because varnish is not generating requests, but blindly passes on bytes in both directions. However, a varnishncsa instance running in normal mode can see this case by using the formatter %{Varnish:handling}x, which will be 'pipe'. In backend mode, some of the fields in the format string get different meanings. Most notably, the byte counting formatters (%b, %I, %O) considers varnish to be the client. It is possible to keep two varnishncsa instances running, one in backend mode, and one in client mode, logging to different files. .. _ncsa-format: FORMAT ====== Specify the log format to use. If no format is specified the default log format is used:: %h %l %u %t "%r" %s %b "%{Referer}i" "%{User-agent}i" Escape sequences \\n and \\t are supported. Supported formatters are: %b In client mode, size of response in bytes, excluding HTTP headers. In backend mode, the number of bytes received from the backend, excluding HTTP headers. In CLF format, i.e. a '-' rather than a 0 when no bytes are sent. %D In client mode, time taken to serve the request, in microseconds. In backend mode, time from the request was sent to the entire body had been received. This is equivalent to %{us}T. %H The request protocol. Defaults to HTTP/1.0 if not known. %h Remote host. Defaults to '-' if not known. In backend mode this is the IP of the backend server. %I In client mode, total bytes received from client. In backend mode, total bytes sent to the backend. %{X}i The contents of request header X. If the header appears multiple times in a single transaction, the last occurrence is used. %l Remote logname. Always '-'. %m Request method. Defaults to '-' if not known. %{X}o The contents of response header X. If the header appears multiple times in a single transaction, the last occurrence is used. %O In client mode, total bytes sent to client. In backend mode, total bytes received from the backend. 
%q The query string. Defaults to an empty string if not present. %r The first line of the request. Synthesized from other fields, so it may not be the request verbatim. See the NOTES section. %s Status sent to the client. In backend mode, status received from the backend. %t In client mode, time when the request was received, in HTTP date/time format. In backend mode, time when the request was sent. %{X}t In client mode, time when the request was received, in the format specified by X. In backend mode, time when the request was sent. The time specification format is the same as for strftime(3) with these extensions: * ``%{sec}``: number of seconds since the Epoch * ``%{msec}``: number of milliseconds since the Epoch * ``%{usec}``: number of microseconds since the Epoch * ``%{msec_frac}``: millisecond fraction * ``%{usec_frac}``: microsecond fraction The extensions can not be combined with each other or strftime(3) in the same specification. Use multiple ``%{X}t`` specifications instead. %T In client mode, time taken to serve the request, in seconds. In backend mode, time from the request was sent to the entire body had been received. This is equivalent to %{s}T. %{X}T In client mode, time taken to serve the request, in the format specified by X. In backend mode, time from the request was sent to the entire body had been received. The time specification format can be one of the following: s (same as %T), ms or us (same as %D). %U The request URL without the query string. Defaults to '-' if not known. %u Remote user from auth. %{X}x Extended variables. Supported variables are: Varnish:default_format The log format used when neither -f nor -F options are specified. Useful for appending/prepending with other formatters. Varnish:time_firstbyte Time from when the request processing starts until the first byte is sent to the client, in seconds. For backend mode: Time from the request was sent to the backend to the entire header had been received. Varnish:hitmiss In client mode, one of the 'hit' or 'miss' strings, depending on whether the request was a cache hit or miss. Pipe, pass and synth are considered misses. In backend mode, this field is blank. Varnish:handling In client mode, one of the 'hit', 'miss', 'pass', 'pipe' or 'synth' strings indicating how the request was handled. In backend mode, this field is blank. Varnish:side Backend or client side. One of two values, 'b' or 'c', depending on where the request was made. In pure backend or client mode, this field will be constant. Varnish:vxid The VXID of the varnish transaction. VCL_Log:key The value set by std.log("key:value") in VCL. VSL:tag:record-prefix[field] The value of the VSL entry for the given tag-record prefix-field combination. Tag is mandatory, the other components are optional. The record prefix will limit the matches to those records that have this prefix as the first part of the record content followed by a colon. The field will, if present, treat the log record as a white space separated list of fields, and only the nth part of the record will be matched against. Fields start counting at 1 and run up to 255. Defaults to '-' when the tag is not seen, the record prefix does not match or the field is out of bounds. If a tag appears multiple times in a single transaction, the first occurrence is used. SIGNALS ======= * SIGHUP Rotate the log file (see -w option) in daemon mode, abort the loop and die gracefully when running in the foreground. * SIGUSR1 Flush any outstanding transactions.
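As noted under MODES, it is common to keep one client-mode and one backend-mode instance running side by side so that the two sides can be told apart; a sketch of such a setup, writing to separate files so the SIGHUP rotation above can be used (paths are illustrative)::

   varnishncsa -c -D -a -w /var/log/varnish/client.log -P /run/varnishncsa-client.pid
   varnishncsa -b -D -a -w /var/log/varnish/backend.log -P /run/varnishncsa-backend.pid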
NOTES ===== The %r formatter is equivalent to "%m http://%{Host}i%U%q %H". This differs from apache's %r behavior, equivalent to "%m %U%q %H". Furthermore, when using the %r formatter, if the Host header appears multiple times in a single transaction, the first occurrence is used. EXAMPLE ======= Log the second field of the Begin record, corresponding to the VXID of the parent transaction:: varnishncsa -F "%{VSL:Begin[2]}x" Log the entire Timestamp record associated with the processing length:: varnishncsa -F "%{VSL:Timestamp:Process}x" Log in JSON, using the -j flag to ensure that the output is valid JSON for all inputs:: varnishncsa -j -F '{"size": %b, "time": "%t", "ua": "%{User-Agent}i"}' SEE ALSO ======== :ref:`varnishd(1)` :ref:`varnishlog(1)` :ref:`varnishstat(1)` :ref:`vsl-query(7)` :ref:`vsl(7)` HISTORY ======= The varnishncsa utility was developed by Poul-Henning Kamp in cooperation with Verdens Gang AS and Varnish Software AS. This manual page was initially written by Dag-Erling Smørgrav , and later updated by Martin Blix Grydeland and PÃ¥l Hermunn Johansen. COPYRIGHT ========= This document is licensed under the same licence as Varnish itself. See LICENCE for details. * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2016 Varnish Software AS varnish-7.5.0/doc/sphinx/reference/varnishstat.rst000066400000000000000000000053221457605730600223300ustar00rootroot00000000000000.. Copyright (c) 2010-2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _varnishstat(1): =========== varnishstat =========== ------------------------ Varnish Cache statistics ------------------------ :Manual section: 1 SYNOPSIS ======== .. include:: ../include/varnishstat_synopsis.rst varnishstat |synopsis| DESCRIPTION =========== The varnishstat utility displays statistics from a running varnishd(1) instance. The following options are available: .. include:: ../include/varnishstat_options.rst CURSES MODE =========== When neither -1, -j nor -x options are given, the application starts up in curses mode. This shows a continuously updated view of the counter values, along with their description. The top area shows process uptime information. The center area shows a list of counter values. The bottom area shows the description of the currently selected counter. On startup, only counters at INFO level are shown. Columns ------- The following columns are displayed, from left to right: Name The name of the counter Current The current value of the counter. Change The average per second change over the last update interval. Average The average value of this counter over the runtime of the Varnish daemon, or a period if the counter can't be averaged. Avg_10 The moving average over the last 10 update intervals. Avg_100 The moving average over the last 100 update intervals. Avg_1000 The moving average over the last 1000 update intervals. Key bindings ------------ .. include:: ../include/varnishstat_bindings.rst OUTPUTS ======= The XML output format is:: FIELD NAME FIELD VALUE FIELD SEMANTICS FIELD DISPLAY FORMAT FIELD DESCRIPTION [..] The JSON output format is:: { "timestamp": "YYYY-MM-DDTHH:mm:SS", "FIELD NAME": { "description": "FIELD DESCRIPTION", "flag": "FIELD SEMANTICS", "format": "FIELD DISPLAY FORMAT", "value": FIELD VALUE }, "FIELD NAME": { "description": "FIELD DESCRIPTION", "flag": "FIELD SEMANTICS", "format": "FIELD DISPLAY FORMAT", "value": FIELD VALUE }, [..] 
} Timestamp is the time when the report was generated by varnishstat. SEE ALSO ======== * :ref:`varnishd(1)` * :ref:`varnishhist(1)` * :ref:`varnishlog(1)` * :ref:`varnishncsa(1)` * :ref:`varnishtop(1)` * curses(3) * :ref:`varnish-counters(7)` AUTHORS ======= This manual page was written by Dag-Erling Smørgrav, Per Buer, Lasse Karstensen and Martin Blix Grydeland. varnish-7.5.0/doc/sphinx/reference/varnishtest.rst000066400000000000000000000136451457605730600223430ustar00rootroot00000000000000.. Copyright (c) 2010-2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _varnishtest(1): =========== varnishtest =========== ------------------------ Test program for Varnish ------------------------ :Manual section: 1 SYNOPSIS ======== varnishtest [-hikLlqv] [-b size] [-D name=val] [-j jobs] [-n iter] [-t duration] file [file ...] DESCRIPTION =========== The varnishtest program is a script driven program used to test the Varnish Cache. The varnishtest program, when started and given one or more script files, can create a number of threads representing backends, some threads representing clients, and a varnishd process. This is then used to simulate a transaction to provoke a specific behavior. The following options are available: -b size Set internal buffer size (default: 1M) -D name=val Define macro for use in scripts -h Show help -i Set PATH and vmod_path to find varnish binaries in build tree -j jobs Run this many tests in parallel -k Continue on test failure -L Always leave temporary vtc.* -l Leave temporary vtc.* if test fails -n iterations Run tests this many times -p name=val Pass parameters to all varnishd command lines -q Quiet mode: report only failures -t duration Time tests out after this long (default: 60s) -v Verbose mode: always report test log file File to use as a script If `TMPDIR` is set in the environment, varnishtest creates temporary `vtc.*` directories for each test in `$TMPDIR`, otherwise in `/tmp`. SCRIPTS ======= The vtc syntax is documented at length in :ref:`vtc(7)`. Should you want more examples than the one below, you can have a look at the Varnish source code repository, under `bin/varnishtest/tests/`, where all the regression tests for Varnish are kept. An example:: varnishtest "#1029" server s1 { rxreq expect req.url == "/bar" txresp -gzipbody {[bar]} rxreq expect req.url == "/foo" txresp -body {

<h1>FOO<esi:include src="/bar"/>BARF</h1>

} } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_esi = true; if (bereq.url == "/foo") { set beresp.ttl = 0s; } else { set beresp.ttl = 10m; } } } -start client c1 { txreq -url "/bar" -hdr "Accept-Encoding: gzip" rxresp gunzip expect resp.bodylen == 5 txreq -url "/foo" -hdr "Accept-Encoding: gzip" rxresp expect resp.bodylen == 21 } -run When run, the above script will simulate a server (s1) that expects two different requests. It will start a Varnish server (v1) and add the backend definition to the VCL specified (-vcl+backend). Finally it starts the c1-client, which is a single client sending two requests. TESTING A BUILD TREE ==================== Whether you are building a VMOD or trying to use one that you freshly built, you can tell ``varnishtest`` to pass a *vmod_path* to ``varnishd`` instances started using the ``varnish -start`` command in your test case:: varnishtest -p vmod_path=... /path/to/*.vtc This way you can use the same test cases on both installed and built VMODs:: server s1 {...} -start varnish v1 -vcl+backend { import wossname; ... } -start ... You are not limited to the *vmod_path* and can pass any parameter, allowing you to run a build matrix without changing the test suite. You can achieve the same with macros, but then they need to be defined on each run. You can see the actual ``varnishd`` command lines in test outputs, they look roughly like this:: exec varnishd [varnishtest -p params] [testing params] [vtc -arg params] Parameters you define with ``varnishtest -p`` may be overridden by parameters needed by ``varnishtest`` to run properly, and they may in turn be overridden by parameters set in test scripts. There's also a special mode in which ``varnishtest`` builds itself a PATH and a *vmod_path* in order to find Varnish binaries (programs and VMODs) in the build tree surrounding the ``varnishtest`` binary. This is meant for testing of Varnish under development and will disregard your *vmod_path* if you set one. If you need to test your VMOD against a Varnish build tree, you must install it first, in a temp directory for instance. With information provided by the installation's *pkg-config(1)* you can build a proper PATH in order to access Varnish programs, and a *vmod_path* to access both your VMOD and the built-in VMODs:: export PKG_CONFIG_PATH=/path/to/install/lib/pkgconfig BINDIR="$(pkg-config --variable=bindir varnishapi)" SBINDIR="$(pkg-config --variable=sbindir varnishapi)" PATH="SBINDIR:BINDIR:$PATH" VMODDIR="$(pkg-config --variable=vmoddir varnishapi)" VMOD_PATH="/path/to/your/vmod/build/dir:$VMODDIR" varnishtest -p vmod_path="$VMOD_PATH" ... SEE ALSO ======== * varnishtest source code repository with tests * :ref:`varnishhist(1)` * :ref:`varnishlog(1)` * :ref:`varnishncsa(1)` * :ref:`varnishstat(1)` * :ref:`varnishtop(1)` * :ref:`vcl(7)` * :ref:`vtc(7)` * :ref:`vmod_vtc(3)` HISTORY ======= The varnishtest program was developed by Poul-Henning Kamp in cooperation with Varnish Software AS. This manual page was originally written by Stig Sandbeck Mathisen and updated by Kristian Lyngstøl . COPYRIGHT ========= This document is licensed under the same licence as Varnish itself. See LICENCE for details. * Copyright (c) 2007-2016 Varnish Software AS varnish-7.5.0/doc/sphinx/reference/varnishtop.rst000066400000000000000000000034131457605730600221560ustar00rootroot00000000000000.. Copyright (c) 2010-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. 
_varnishtop(1): ========== varnishtop ========== ------------------------- Varnish log entry ranking ------------------------- :Manual section: 1 SYNOPSIS ======== .. include:: ../include/varnishtop_synopsis.rst varnishtop |synopsis| DESCRIPTION =========== The varnishtop utility reads :ref:`varnishd(1)` shared memory logs and presents a continuously updated list of the most commonly occurring log entries. With suitable filtering using the ``-I``, ``-i``, ``-X`` and ``-x`` options, it can be used to display a ranking of requested documents, clients, user agents, or any other information which is recorded in the log. The following options are available: .. include:: ../include/varnishtop_options.rst EXAMPLES ======== The following example displays a continuously updated list of the most frequently requested URLs:: varnishtop -i ReqURL The following example displays a continuously updated list of the most commonly used user agents:: varnishtop -C -I ReqHeader:User-Agent SEE ALSO ======== * :ref:`varnishd(1)` * :ref:`varnishhist(1)` * :ref:`varnishlog(1)` * :ref:`varnishncsa(1)` * :ref:`varnishstat(1)` HISTORY ======= The varnishtop utility was originally developed by Poul-Henning Kamp in cooperation with Verdens Gang AS and Varnish Software AS, and later substantially rewritten by Dag-Erling Smørgrav. This manual page was written by Dag-Erling Smørgrav, and later updated by Martin Blix Grydeland. COPYRIGHT ========= This document is licensed under the same licence as Varnish itself. See LICENCE for details. * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS varnish-7.5.0/doc/sphinx/reference/vcl-backend.rst000066400000000000000000000142471457605730600221410ustar00rootroot00000000000000.. Copyright (c) 2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _vcl-backend(7): ============ VCL-backends ============ -------------------- Configuring Backends -------------------- :Manual section: 7 .. _backend_definition: Backend definition ------------------ A backend declaration creates and initialises a named backend object. A declaration start with the keyword ``backend`` followed by the name of the backend. The actual declaration is in curly brackets, in a key/value fashion.:: backend name { .attribute1 = value; .attribute2 = value; [...] } If there is a backend named ``default`` it will be used unless another backend is explicitly set. If no backend is named ``default`` the first backend in the VCL program becomes the default. If you only use dynamic backends created by VMODs, an empty, always failing (503) backend can be specified:: backend default none; A backend must be specified with either a ``.host`` or a ``.path`` attribute, but not both. All other attributes have default values. 
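For illustration, a complete declaration combining several of the attributes described below; the address, port, connection limit and timeouts are placeholder values::

    backend www {
        .host = "192.0.2.10";
        .port = "8080";
        .max_connections = 300;
        .connect_timeout = 1s;
        .first_byte_timeout = 30s;
        .between_bytes_timeout = 10s;
    }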
Attribute ``.host`` ------------------- To specify a networked backend ``.host`` takes either a numeric IPv4/IPv6 address or a domain name which resolves to *at most* one IPv4 and one IPv6 address:: .host = "127.0.0.1"; .host = "[::1]:8080"; .host = "example.com:8081"; .host = "example.com:http"; Attribute ``.port`` ------------------- The TCP port number or service name can be specified as part of ``.host`` as above or separately using the ``.port`` attribute:: .port = "8081"; .port = "http"; Attribute ``.path`` ------------------- The absolute path to a Unix(4) domain socket of a local backend:: .path = "/var/run/http.sock"; or, where available, ``@`` followed by the name of an abstract socket of a local backend:: .path = "@mybackend"; A warning will be issued if the uds-socket does not exist when the VCL is loaded. This makes it possible to start the UDS-listening peer, or set the socket file's permissions afterwards. If the uds-socket socket does not exist or permissions deny access, connection attempts will fail. Attribute ``.host_header`` -------------------------- A host header to add to probes and regular backend requests if they have no such header:: .host_header = "Host: example.com"; Timeout Attributes ------------------ These attributes control how patient `varnishd` is during backend fetches:: .connect_timeout = 1.4s; .first_byte_timeout = 20s; .between_bytes_timeout = 10s; The default values comes parameters with the same names, see :ref:`varnishd(1)`. Attribute ``.max_connections`` ------------------------------ Limit how many simultaneous connections varnish can open to the backend:: .max_connections = 1000; Attribute ``.proxy_header`` --------------------------- Send a PROXY protocol header to the backend with the ``client.ip`` and ``server.ip`` values:: .proxy_header = 2; Legal values are one and two, depending which version of the PROXY protocol you want. *Notice* this setting will lead to backend connections being used for a single request only (subject to future improvements). Thus, extra care should be taken to avoid running into failing backend connections with EADDRNOTAVAIL due to no local ports being available. Possible options are: * Use additional backend connections to extra IP addresses or TCP ports * Increase the number of available ports (Linux sysctl ``net.ipv4.ip_local_port_range``) * Reuse backend connection ports early (Linux sysctl ``net.ipv4.tcp_tw_reuse``) Attribute ``.preamble`` ----------------------- Send a BLOB on all newly opened connections to the backend:: .preamble = :SGVsbG8gV29ybGRcbgo=:; .. _backend_definition_via: Attribute ``.via`` ------------------ .. _PROXY2: https://raw.githubusercontent.com/haproxy/haproxy/master/doc/proxy-protocol.txt Name of another *proxy* backend through which to make the connection to the *destination* backend using the `PROXY2`_ protocol, for example:: backend proxy { .path = "/path/to/proxy2_endpoint"; } backend destination { .host = "1.2.3.4"; .via = proxy; } The *proxy* backend can also use a ``.host``\ /\ ``.port`` definition rather than ``.path``. Use of the ``.path`` attribute for the *destination* backend is not supported. The ``.via`` attribute is unrelated to ``.proxy_header``. If both are used, a second header is sent as per ``.proxy_header`` specification. As of this release, the *proxy* backend used with ``.via`` can not be a director, it can not itself use ``.via`` (error: *Can not stack .via backends*) and the protocol is fixed to `PROXY2`_. 
Implementation detail: If ``.via = `` is used, a `PROXY2`_ preamble is created with the *destination* backend's address information as ``dst_addr``\ /\ ``dst_port`` and, optionally, other TLV attributes. The connection is then made to the *proxy* backend's endpoint (``path`` or ``host``\ /\ ``port``). This is technically equivalent to specifying a ``backend destination_via_proxy`` with a ``.preamble`` attribute containing the appropriate `PROXY2`_ preamble for the *destination* backend. Attribute ``.authority`` ------------------------ The HTTP authority to use when connecting to this backend. If unset, ``.host_header`` or ``.host`` are used. ``.authority = ""`` disables sending an authority. As of this release, the attribute is only used by ``.via`` connections as a ``PP2_TYPE_AUTHORITY`` Type-Length-Value (TLV) in the `PROXY2`_ preamble. Attribute ``.probe`` -------------------- Please see :ref:`vcl-probe(7)`. SEE ALSO -------- * :ref:`varnishd(1)` * :ref:`vcl(7)` * :ref:`vcl-probe(7)` * :ref:`vmod_directors(3)` * :ref:`vmod_std(3)` HISTORY ------- VCL was developed by Poul-Henning Kamp in cooperation with Verdens Gang AS, Redpill Linpro and Varnish Software. This manual page is written by Per Buer, Poul-Henning Kamp, Martin Blix Grydeland, Kristian Lyngstøl, Lasse Karstensen and others. COPYRIGHT --------- This document is licensed under the same license as Varnish itself. See LICENSE for details. * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2021 Varnish Software AS varnish-7.5.0/doc/sphinx/reference/vcl-probe.rst000066400000000000000000000123751457605730600216610ustar00rootroot00000000000000.. Copyright (c) 2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _vcl-probe(7): ========= VCL-probe ========= --------------------------------- Configuring Backend Health Probes --------------------------------- :Manual section: 7 .. _reference-vcl_probes: Backend health probes --------------------- Varnish can be configured to periodically send a request to test if a backend is answering and thus "healthy". Probes can be configured per backend:: backend foo { [...] .probe = { [...] } } They can be named and shared between backends:: probe light { [...] } backend foo { .probe = light; } backend bar { .probe = light; } Or a ``default`` probe can be defined, which applies to all backends without a specific ``.probe`` configured:: probe default { [...] } The basic syntax is the same as for backends:: probe name { .attribute1 = value; .attribute2 = "value"; [...] } There are no mandatory attributes, they all have defaults. Attribute ``.url`` ------------------ The URL to query. Defaults to ``/``:: .url = "/health-probe"; Attribute ``.request`` ---------------------- Can be used to specify a full HTTP/1.1 request to be sent:: .request = "GET / HTTP/1.1" "Host: example.com" "X-Magic: We're fine with this." "Connection: close"; Each of the strings will have ``CRNL`` appended and a final HTTP header block terminating ``CRNL`` will be appended as well. Because connection shutdown is part of the health check, ``Connection: close`` is mandatory. Attribute ``.expected_response`` -------------------------------- The expected HTTP status, defaults to ``200``:: .expected_response = 418; Attribute ``.expect_close`` --------------------------- Whether or not to expect the backend to close the underlying connection. 
Accepts ``true`` or ``false``, defaults to ``true``:: .expect_close = false; Warning: when the backend does not close the connection, setting ``expect_close`` to ``false`` makes probe tasks wait until they time out before inspecting the response. Attribute ``.timeout`` ---------------------- How fast the probe must succeed, default is two seconds:: .timeout = 10s; Attribute ``.interval`` ----------------------- Time between probes, default is five seconds:: .interval = 1m; The backend health shift register --------------------------------- Backend health probes uses a 64 stage shift register to remember the most recent health probes and to evaluate the total health of the backend. In the CLI, a good backend health status looks like this: .. code-block:: text varnish> backend.list -p boot.backend Backend name Admin Probe Health Last change boot.backend probe 5/5 healthy Wed, 13 Jan 2021 10:31:50 GMT Current states good: 5 threshold: 4 window: 5 Average response time of good probes: 0.000793 Oldest ================================================== Newest 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit RRRRRRRRRRRRRRRRRRRRRRR----RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv HHHHHHHHHHHHHHHHHHHHHHH--------HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy Starting from the bottom, the last line shows that this backend has been declared "Happy" for most the 64 health probes, but there were some trouble some while ago. However, in this case the ``.window`` is configured to five, and the ``.threshold`` is set to four, so at this point in time, the backend is considered fully healthy. An additional ``.initial`` fills that many "happy" entries in the shift register when the VCL is loaded, so that backends can quickly become healthy, even if their health is normally considered over many samples:: .interval = 1s; .window = 60; .threshold = 45; .initial = 43; This backend will be considered healthy if three out of four health probes in the last minute were good, but it becomes healthy as soon as two good probes have happened after the VCL was loaded. The default values are: * ``.window`` = 8 * ``.threshold`` = 3 * ``.initial`` = one less than ``.threshold`` Note that the default ``.initial`` means that the backend will be marked unhealthy until the first probe response come back successful. This means that for backends created on demand (by vmods) cannot use the default value for ``.initial``, as the freshly created backend would very likely still be unhealthy when the backend request happens. SEE ALSO ======== * :ref:`varnishd(1)` * :ref:`vcl(7)` * :ref:`vcl-backend(7)` * :ref:`vmod_directors(3)` * :ref:`vmod_std(3)` HISTORY ======= VCL was developed by Poul-Henning Kamp in cooperation with Verdens Gang AS, Redpill Linpro and Varnish Software. This manual page is written by Per Buer, Poul-Henning Kamp, Martin Blix Grydeland, Kristian Lyngstøl, Lasse Karstensen and others. COPYRIGHT ========= This document is licensed under the same license as Varnish itself. See LICENSE for details. * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2021 Varnish Software AS varnish-7.5.0/doc/sphinx/reference/vcl-step.rst000066400000000000000000000071171457605730600215230ustar00rootroot00000000000000.. Copyright (c) 2013-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. 
_vcl-step(7): ========= VCL-steps ========= -------------------- Built-in subroutines -------------------- :Manual section: 7 DESCRIPTION =========== Various built-in subroutines are called during processing of client and backend requests as well as upon ``vcl.load`` and ``vcl.discard``. See :ref:`reference-states` for a detailed graphical overview of the states and how they relate to core code functions and VCL subroutines. Built-in subroutines always terminate with a ``return ()``, where ```` determines how processing continues in the request processing state machine. The behaviour of actions is identical or at least similar across subroutines, so differences are only documented where relevant. Common actions are documented in :ref:`vcl_actions` in the next section. Actions specific to only one or some subroutines are documented in :ref:`vcl_steps`. A default behavior is provided for all :ref:`reference-states` in the :ref:`vcl-built-in-code` code. .. _vcl_actions: VCL Actions =========== Actions are used with the ``return()`` keyword, which returns control from subroutines back to varnish. The action determines how processing in varnish continues as shown in :ref:`reference-states`. Common actions are documented here, while additional actions specific to only one or some subroutines are documented in the next section :ref:`vcl_steps` as well as which action can be used from which built in subroutine. Common actions for the client and backend side ############################################## .. _fail: ``fail`` ~~~~~~~~ Transition to :ref:`vcl_synth` on the client side as for ``return(synth(503, "VCL Failed"))``, but with any request state changes undone as if ``std.rollback()`` was called and forcing a connection close. Intended for fatal errors, for which only minimal error handling is possible. Common actions for the client side ################################## .. _synth: ``synth(status code, reason)`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Transition to :ref:`vcl_synth` with ``resp.status`` and ``resp.reason`` being preset to the arguments of ``synth()``. .. _pass: ``pass`` ~~~~~~~~ Switch to pass mode, making the current request not use the cache and not putting its response into it. Control will eventually pass to :ref:`vcl_pass`. .. _pipe: ``pipe`` ~~~~~~~~ Switch to pipe mode. Control will eventually pass to :ref:`vcl_pipe`. .. _restart: ``restart`` ~~~~~~~~~~~ Restart the transaction. Increases the ``req.restarts`` counter. If the number of restarts is higher than the *max_restarts* parameter, control is passed to :ref:`vcl_synth` as for ``return(synth(503, "Too many restarts"))`` For a restart, all modifications to ``req`` attributes are preserved except for ``req.restarts`` and ``req.xid``, which need to change by design. Common actions for the backend side ################################### .. _abandon: ``abandon`` ~~~~~~~~~~~ Abandon the backend request. Unless the backend request was a background fetch, control is passed to :ref:`vcl_synth` on the client side with ``resp.status`` preset to 503. .. include:: vcl_step.rst SEE ALSO ======== * :ref:`varnishd(1)` * :ref:`vcl(7)` COPYRIGHT ========= This document is licensed under the same license as Varnish itself. See LICENSE for details. * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2021 Varnish Software AS varnish-7.5.0/doc/sphinx/reference/vcl-var.rst000066400000000000000000000125361457605730600213410ustar00rootroot00000000000000.. 
Copyright (c) 2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _vcl-var(7): ============= VCL-Variables ============= ------------------ The complete album ------------------ :Manual section: 7 DESCRIPTION =========== This is a list of all variables in the VCL language. Variable names take the form ``scope.variable[.index]``, for instance:: req.url beresp.http.date client.ip Which operations are possible on each variable is described below, often with the shorthand "backend" which covers the ``vcl_backend_* {}`` subroutines and "client" which covers the rest, except ``vcl_init {}`` and ``vcl_fini {}``. .. include:: vcl_var.rst .. _protected_headers: Protected header fields ----------------------- The ``content-length`` and ``transfer-encoding`` headers are read-only. They must be preserved to ensure HTTP/1 framing remains consistent and maintain a proper request and response synchronization with both clients and backends. VMODs can still update these headers, when there is a reason to change the framing, such as a transformation of a request or response body. HTTP response status -------------------- A HTTP status code has 3 digits XYZ where X must be between 1 and 5 included. Since it is not uncommon to see HTTP clients or servers relying on non-standard or even invalid status codes, Varnish can work with any status between 100 and 999. Within VCL code it is even possible to use status codes in the form VWXYZ as long as the overall value is lower than 65536, but only the XYZ part will be sent to the client, by which time the X must also have become non-zero. The VWXYZ form of status codes can be communicate extra information in ``resp.status`` and ``beresp.status`` to ``return(synth(...))`` and ``return(error(...))``, to indicate which synthetic content to produce:: sub vcl_recv { if ([...]) { return synth(12404); } } sub vcl_synth { if (resp.status == 12404) { [...] // this specific 404 } else if (resp.status % 1000 == 404) { [...] // all other 404's } } The ``obj.status`` variable will inherit the VWXYZ form, but in a ban expression only the XYZ part will be available. The VWXYZ form is strictly limited to VCL execution. Assigning an HTTP standardized code to ``resp.status`` or ``beresp.status`` will also set ``resp.reason`` or ``beresp.reason`` to the corresponding status message. 304 handling ~~~~~~~~~~~~ For a 304 response, Varnish core code amends ``beresp`` before calling `vcl_backend_response`: * If the gzip status changed, ``Content-Encoding`` is unset and any ``Etag`` is weakened * Any headers not present in the 304 response are copied from the existing cache object. ``Content-Length`` is copied if present in the existing cache object and discarded otherwise. * The status gets set to 200. `beresp.was_304` marks that this conditional response processing has happened. Note: Backend conditional requests are independent of client conditional requests, so clients may receive 304 responses no matter if a backend request was conditional. beresp.ttl / beresp.grace / beresp.keep ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Before calling `vcl_backend_response`, core code sets ``beresp.ttl`` based on the response status and the response headers ``Age``, ``Cache-Control`` or ``Expires`` and ``Date`` as follows: * If present and valid, the value of the ``Age`` header is effectively deduced from all ttl calculations. 
* For status codes 200, 203, 204, 300, 301, 304, 404, 410 and 414: * If ``Cache-Control`` contains an ``s-maxage`` or ``max-age`` field (in that order of preference), the ttl is set to the respective non-negative value or 0 if negative. * Otherwise, if no ``Expires`` header exists, the default ttl is used. * Otherwise, if ``Expires`` contains a time stamp before ``Date``, the ttl is set to 0. * Otherwise, if no ``Date`` header is present or the ``Date`` header timestamp differs from the local clock by no more than the `clock_skew` parameter, the ttl is set to * 0 if ``Expires`` denotes a past timestamp or * the difference between the local clock and the ``Expires`` header otherwise. * Otherwise, the ttl is set to the difference between ``Expires`` and ``Date`` * For status codes 302 and 307, the calculation is identical except that the default ttl is not used and -1 is returned if neither ``Cache-Control`` nor ``Expires`` exists. * For all other status codes, ttl -1 is returned. ``beresp.grace`` defaults to the `default_grace` parameter. For a non-negative ttl, if ``Cache-Control`` contains a ``stale-while-revalidate`` field value, ``beresp.grace`` is set to that value if non-negative or 0 otherwise. ``beresp.keep`` defaults to the `default_keep` parameter. SEE ALSO ======== * :ref:`varnishd(1)` * :ref:`vcl(7)` HISTORY ======= VCL was developed by Poul-Henning Kamp in cooperation with Verdens Gang AS, Redpill Linpro and Varnish Software. This manual page is written by Per Buer, Poul-Henning Kamp, Martin Blix Grydeland, Kristian Lyngstøl, Lasse Karstensen and others. COPYRIGHT ========= This document is licensed under the same license as Varnish itself. See LICENSE for details. * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2021 Varnish Software AS varnish-7.5.0/doc/sphinx/reference/vcl.rst000066400000000000000000000316761457605730600205610ustar00rootroot00000000000000.. Copyright (c) 2010-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _vcl(7): === VCL === ------------------------------ Varnish Configuration Language ------------------------------ :Manual section: 7 DESCRIPTION =========== The VCL language is a small domain-specific language designed to be used to describe request handling and document caching policies for Varnish Cache. When a new configuration is loaded, the varnishd management process translates the VCL code to C and compiles it to a shared object which is then loaded into the server process. This document focuses on the syntax of the VCL language. For a full description of syntax and semantics, with ample examples, please see the online documentation at https://www.varnish-cache.org/docs/ . Starting with Varnish 4.0, each VCL file must start by declaring its version with ``vcl`` *.*\ ``;`` marker at the top of the file. See more about this under Versioning below. Operators --------- The following operators are available in VCL: ``=`` Assignment operator. ``+``, ``-``, ``*``, ``/``, ``%`` Basic math on numerical values. ``+=``, ``-=``, ``*=``, ``/=`` Assign and increment/decrement/multiply/divide operator. For strings, ``+=`` appends. ``(``, ``)`` Evaluate separately. ``==``, ``!=``, ``<``, ``>``, ``<=``, ``>=`` Comparisons ``~``, ``!~`` Match / non-match. Can either be used with regular expressions or ACLs. ``!`` Negation. ``&&`` / ``||`` Logical and/or. Conditionals ------------ VCL has ``if`` and ``else`` statements. 
Nested logic can be implemented with the ``elseif`` statement (``elsif``\ /\ ``elif``\ /\ ``else if`` are equivalent). Note that there are no loops or iterators of any kind in VCL. Variables --------- VCL does most of the work by examining, ``set``'ing and ``unset``'ing variables:: if (req.url == "/mistyped_url.html") { set req.url = "/correct_url.html"; unset req.http.cookie; } There are obvious limitations to what can be done, for instance it makes no sense to ``unset req.url;`` - a request must have some kind of URL to be valid, and likewise trying to manipulate a backend response when there is none (yet) makes no sense. The VCL compiler will detect such errors. Variables have types. Most of them a STRINGS, and anything in VCL can be turned into a STRING, but some variables have types like ``DURATION``, ``IP`` etc. When setting a such variables, the right hand side of the equal sign must have the correct variables type, you cannot assign a STRING to a variable of type NUMBER, even if the string is ``"42"``. Explicit conversion functions are available in :ref:`vmod_std(3)`. For the complete album of VCL variables see: :ref:`vcl-var(7)`. Strings ~~~~~~~ Basic strings are enclosed in double quotes ``"``\ *...*\ ``"``, and may not contain newlines. Long strings are enclosed in ``{"``\ *...*\ ``"}`` or ``"""``\ *...*\ ``"""``. They may contain any character including single double quotes ``"``, newline and other control characters except for the *NUL* (0x00) character. Booleans ~~~~~~~~ Booleans can be either ``true`` or ``false``. In addition, in a boolean context some data types will evaluate to ``true`` or ``false`` depending on their value. String types will evaluate to ``false`` if they are unset. This allows checks of the type ``if (req.http.opthdr) {}`` to test if a header exists, even if it is empty, whereas ``if (req.http.opthdr == "") {}`` does not distinguish if the header does not exist or if it is empty. Backend types will evaluate to ``false`` if they don't have a backend assigned; integer types will evaluate to ``false`` if their value is zero; duration types will evaluate to ``false`` if their value is equal or less than zero. Time ~~~~ VCL has time. A duration can be added to a time to make another time. In string context they return a formatted string in RFC1123 format, e.g. ``Sun, 06 Nov 1994 08:49:37 GMT``. The keyword ``now`` returns a notion of the current time, which is kept consistent during VCL subroutine invocations, so during the execution of a VCL state subroutine (``vcl_* {}``), including all user-defined subroutines being called, ``now`` always returns the same value. .. _vcl(7)_durations: Durations ~~~~~~~~~ Durations are defined by a number followed by a unit. The number can include a fractional part, e.g. ``1.5s``. The supported units are: ``ms`` milliseconds ``s`` seconds ``m`` minutes ``h`` hours ``d`` days ``w`` weeks ``y`` years In string context they return a string with their value rounded to 3 decimal places and excluding the unit, e.g. ``1.500``. Integers ~~~~~~~~ Certain fields are integers, used as expected. In string context they return a string, e.g. ``1234``. Real numbers ~~~~~~~~~~~~ VCL understands real numbers. In string context they return a string with their value rounded to 3 decimal places, e.g. ``3.142``. Regular Expressions ------------------- Varnish uses Perl-compatible regular expressions (PCRE). For a complete description please see the pcre(3) man page. 
To send flags to the PCRE engine, such as to do case insensitive matching, add the flag within parens following a question mark, like this:: # If host is NOT example dot com.. if (req.http.host !~ "(?i)example\.com$") { ... } .. _vcl-include: Include statement ----------------- To include a VCL file in another file use the ``include`` keyword:: include "foo.vcl"; Optionally, the ``include`` keyword can take a ``+glob`` flag to include all files matching a glob pattern:: include +glob "example.org/*.vcl"; Import statement ---------------- The ``import`` statement is used to load Varnish Modules (VMODs.) Example:: import std; sub vcl_recv { std.log("foo"); } Comments -------- Single lines of VCL can be commented out using ``//`` or ``#``. Multi-line blocks can be commented out with ``/*``\ *block*\ ``*/``. Example:: sub vcl_recv { // Single line of out-commented VCL. # Another way of commenting out a single line. /* Multi-line block of commented-out VCL. */ } Backends and health probes -------------------------- Please see :ref:`vcl-backend(7)` and :ref:`vcl-probe(7)` .. _vcl-acl: Access Control List (ACL) ------------------------- An Access Control List (ACL) declaration creates and initialises a named access control list which can later be used to match client addresses:: acl localnetwork { "localhost"; # myself "192.0.2.0"/24; # and everyone on the local network ! "192.0.2.23"; # except for the dial-in router } If an ACL entry specifies a host name which Varnish is unable to resolve, it will match any address it is compared to. Consequently, if it is preceded by a negation mark, it will reject any address it is compared to, which may not be what you intended. If the entry is enclosed in parentheses, however, it will simply be ignored if the host name cannot be resolved. To match an IP address against an ACL, simply use the match operator:: if (client.ip ~ localnetwork) { return (pipe); } ACLs have feature flags which can be set or cleared for each ACL individually: * `+log` - Emit a `Acl` record in VSL to tell if a match was found or not. * `+table` - Implement the ACL with a table instead of compiled code. This runs a little bit slower, but compiles large ACLs much faster. * `-pedantic` - Allow masks to cover non-zero host-bits. This allows the following to work:: acl foo -pedantic +log { "firewall.example.com" / 24; } However, if the name resolves to both IPv4 and IPv6 you will still get an error. * `+fold` - Fold ACL supernets and adjacent networks. With this parameter set to on, ACLs are optimized in that subnets contained in other entries are skipped (e.g. if 1.2.3.0/24 is part of the ACL, an entry for 1.2.3.128/25 will not be added) and adjacent entries get folded (e.g. if both 1.2.3.0/25 and 1.2.3.128/25 are added, they will be folded to 1.2.3.0/24). Skip and fold operations on VCL entries are output as warnings during VCL compilation as entries from the VCL are processed in order. Logging under the ``VCL_acl`` tag can change with this parameter enabled: Matches on skipped subnet entries are now logged as matches on the respective supernet entry. Matches on folded entries are logged with a shorter netmask which might not be contained in the original ACL as defined in VCL. Such log entries are marked by ``fixed: folded``. Negated ACL entries are never folded. VCL objects ----------- A VCL object can be instantiated with the ``new`` keyword:: sub vcl_init { new b = directors.round_robin() b.add_backend(node1); } This is only available in ``vcl_init``. 
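A slightly fuller sketch of the same pattern, including how the instantiated object is typically used afterwards; the backend names and addresses are assumptions for the example::

    import directors;

    backend node1 { .host = "192.0.2.10"; }
    backend node2 { .host = "192.0.2.11"; }

    sub vcl_init {
        new b = directors.round_robin();
        b.add_backend(node1);
        b.add_backend(node2);
    }

    sub vcl_recv {
        # pick a backend from the director for this request
        set req.backend_hint = b.backend();
    }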
Subroutines ----------- A subroutine is used to group code for legibility or reusability:: sub pipe_if_local { if (client.ip ~ localnetwork) { return (pipe); } } Subroutines in VCL do not take arguments, nor do they return values. The built in subroutines all have names beginning with ``vcl_``, which is reserved. To call a subroutine, use the ``call`` keyword followed by the subroutine's name:: sub vcl_recv { call pipe_if_local; } Return statements ~~~~~~~~~~~~~~~~~ The ongoing ``vcl_*`` subroutine execution ends when a ``return(``\ **\ ``)`` statement is made. The ** specifies how execution should proceed. The context defines which actions are available. It is possible to exit a subroutine that is not part of the built-in ones using a simple ``return`` statement without specifying an action. It exits the subroutine without transitioning to a different state:: sub filter_cookies { if (!req.http.cookie) { return; } # complex cookie filtering } Multiple subroutines ~~~~~~~~~~~~~~~~~~~~ If multiple subroutines with the name of one of the built-in ones are defined, they are concatenated in the order in which they appear in the source. The built-in VCL distributed with Varnish will be implicitly concatenated when the VCL is compiled. Functions --------- The following built-in functions are available: .. _vcl(7)_ban: ban(STRING) ~~~~~~~~~~~ Deprecated. See :ref:`std.ban()`. The ``ban()`` function is identical to :ref:`std.ban()`, but does not provide error reporting. hash_data(input) ~~~~~~~~~~~~~~~~ Adds an input to the hash input. In the built-in VCL ``hash_data()`` is called on the host and URL of the request. Available in ``vcl_hash``. synthetic(STRING) ~~~~~~~~~~~~~~~~~ Prepare a synthetic response body containing the *STRING*. Available in ``vcl_synth`` and ``vcl_backend_error``. Identical to ``set resp.body`` / ``set beresp.body``. .. list above comes from struct action_table[] in vcc_action.c. regsub(str, regex, sub) ~~~~~~~~~~~~~~~~~~~~~~~ Returns a copy of *str* with the first occurrence of the regular expression *regex* replaced with *sub*. Within *sub*, ``\0`` (which can also be spelled ``\&``) is replaced with the entire matched string, and ``\``\ *n* is replaced with the contents of subgroup *n* in the matched string. regsuball(str, regex, sub) ~~~~~~~~~~~~~~~~~~~~~~~~~~ As ``regsub()``, but this replaces all occurrences. .. regsub* is in vcc_expr.c For converting or casting VCL values between data types use the functions available in the std VMOD. Versioning ========== Multiple versions of the VCL syntax can coexist within certain constraints. The VCL syntax version at the start of VCL file specified with ``-f`` sets the hard limit that cannot be exceeded anywhere, and it selects the appropriate version of the builtin VCL. That means that you can never include ``vcl 9.1;`` from ``vcl 8.7;``, but the opposite *may* be possible, to the extent the compiler supports it. Files pulled in via ``include`` do not need to have a ``vcl`` *X.Y*\ ``;`` but it may be a good idea to do it anyway, to not have surprises in the future. The syntax version set in an included file only applies to that file and any files it includes - unless these set their own VCL syntax version. The version of Varnish this file belongs to supports syntax 4.0 and 4.1. EXAMPLES ======== For examples, please see the online documentation. 
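As a short, self-contained illustration of several constructs described in this manual (the URL pattern and backend address are placeholders)::

    vcl 4.1;

    backend default {
        .host = "127.0.0.1";
        .port = "8080";
    }

    sub vcl_recv {
        # normalize a trailing "index.html" away
        set req.url = regsub(req.url, "/index\.html$", "/");
        if (req.method != "GET" && req.method != "HEAD") {
            return (pass);
        }
    }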
SEE ALSO ======== * :ref:`varnishd(1)` * :ref:`vcl-backend(7)` * :ref:`vcl-probe(7)` * :ref:`vcl-step(7)` * :ref:`vcl-var(7)` * :ref:`vmod_directors(3)` * :ref:`vmod_std(3)` HISTORY ======= VCL was developed by Poul-Henning Kamp in cooperation with Verdens Gang AS, Redpill Linpro and Varnish Software. This manual page is written by Per Buer, Poul-Henning Kamp, Martin Blix Grydeland, Kristian Lyngstøl, Lasse Karstensen and others. COPYRIGHT ========= This document is licensed under the same license as Varnish itself. See LICENSE for details. * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS varnish-7.5.0/doc/sphinx/reference/vcl_step.rst000066400000000000000000000274171457605730600216120ustar00rootroot00000000000000 .. _vcl_steps: VCL Steps ========= Client side ########### .. _vcl_recv: vcl_recv ~~~~~~~~ Called at the beginning of a request, after the complete request has been received and parsed, after a `restart` or as the result of an ESI include. Its purpose is to decide whether or not to serve the request, possibly modify it and decide on how to process it further. A backend hint may be set as a default for the backend processing side. The `vcl_recv` subroutine may terminate with calling ``return()`` on one of the following keywords: | | ``fail`` | see :ref:`fail` section above | | ``synth(status code, reason)`` | see :ref:`synth` section above | | ``restart`` | see :ref:`restart` section above | | ``pass`` | see :ref:`pass` section above | | ``pipe`` | see :ref:`pipe` section above | | ``hash`` | Continue processing the object as a potential candidate for | caching. Passes the control over to :ref:`vcl_hash`. | | ``purge`` | Purge the object and it's variants. Control passes through | :ref:`vcl_hash` to :ref:`vcl_purge`. | | ``vcl(label)`` | Switch to vcl labelled *label*. | | This will roll back the request as if ``std.rollback(req)`` was | called and continue vcl processing in :ref:`vcl_recv` of the vcl | labelled *label* as if it was the active vcl. | | The ``vcl(label)`` return is only valid while the ``req.restarts`` | count is zero and if used from the active vcl. | | See the :ref:`ref_cli_vcl_label` command in :ref:`varnish-cli(7)`. .. _vcl_pipe: vcl_pipe ~~~~~~~~ Called upon entering pipe mode. In this mode, the request is passed on to the backend, and any further data from both the client and backend is passed on unaltered until either end closes the connection. Basically, Varnish will degrade into a simple TCP proxy, shuffling bytes back and forth. For a connection in pipe mode, no other VCL subroutine will ever get called after `vcl_pipe`. The `vcl_pipe` subroutine may terminate with calling ``return()`` with one of the following keywords: | | ``fail`` | see :ref:`fail` section above | | ``synth(status code, reason)`` | see :ref:`synth` section above | | ``pipe`` | Proceed with pipe mode. .. _vcl_pass: vcl_pass ~~~~~~~~ Called upon entering pass mode. In this mode, the request is passed on to the backend, and the backend's response is passed on to the client, but is not entered into the cache. Subsequent requests submitted over the same client connection are handled normally. The `vcl_pass` subroutine may terminate with calling ``return()`` with one of the following keywords: | | ``fail`` | see :ref:`fail` section above | | ``synth(status code, reason)`` | see :ref:`synth` section above | | ``restart`` | see :ref:`restart` section above | | ``fetch`` | Proceed with pass mode - initiate a backend request. .. 
_vcl_hash: vcl_hash ~~~~~~~~ Called after `vcl_recv` to create a hash value for the request. This is used as a key to look up the object in Varnish. The `vcl_hash` subroutine may terminate with calling ``return()`` with one of the following keywords: | | ``fail`` | see :ref:`fail` section above | | ``lookup`` | Look up the object in cache. | | Control passes to :ref:`vcl_purge` when coming from a ``purge`` | return in `vcl_recv`. | | Otherwise control passes to the next subroutine depending on the | result of the cache lookup: | | * a hit: pass to :ref:`vcl_hit` | | * a miss or a hit on a hit-for-miss object (an object with | ``obj.uncacheable == true``): pass to :ref:`vcl_miss` | | * a hit on a hit-for-pass object (for which ``pass(DURATION)`` had been | previously returned from ``vcl_backend_response``): pass to | :ref:`vcl_pass` .. _vcl_purge: vcl_purge ~~~~~~~~~ Called after the purge has been executed and all its variants have been evicted. The `vcl_purge` subroutine may terminate with calling ``return()`` with one of the following keywords: | | ``fail`` | see :ref:`fail` section above | | ``synth(status code, reason)`` | see :ref:`synth` section above | | ``restart`` | see :ref:`restart` section above .. _vcl_miss: vcl_miss ~~~~~~~~ Called after a cache lookup if the requested document was not found in the cache or if :ref:`vcl_hit` returned ``fetch``. Its purpose is to decide whether or not to attempt to retrieve the document from the backend. A backend hint may be set as a default for the backend processing side. The `vcl_miss` subroutine may terminate with calling ``return()`` with one of the following keywords: | | ``fail`` | see :ref:`fail` section above | | ``synth(status code, reason)`` | see :ref:`synth` section above | | ``restart`` | see :ref:`restart` section above | | ``pass`` | see :ref:`pass` section above | | ``fetch`` | Retrieve the requested object from the backend. Control will | eventually pass to `vcl_backend_fetch`. .. _vcl_hit: vcl_hit ~~~~~~~ Called when a cache lookup is successful. The object being hit may be stale: It can have a zero or negative `ttl` with only `grace` or `keep` time left. The `vcl_hit` subroutine may terminate with calling ``return()`` with one of the following keywords: | ``fail`` | see :ref:`fail` section above | | ``synth(status code, reason)`` | see :ref:`synth` section above | | ``restart`` | see :ref:`restart` section above | | ``pass`` | see :ref:`pass` section above | | ``deliver`` | Deliver the object. If it is stale, a background fetch to refresh | it is triggered. .. _vcl_deliver: vcl_deliver ~~~~~~~~~~~ Called before any object except a `vcl_synth` result is delivered to the client. The `vcl_deliver` subroutine may terminate with calling ``return()`` with one of the following keywords: | | ``fail`` | see :ref:`fail` section above | | ``synth(status code, reason)`` | see :ref:`synth` section above | | ``restart`` | see :ref:`restart` section above | | ``deliver`` | Deliver the object to the client. .. _vcl_synth: vcl_synth ~~~~~~~~~ Called to deliver a synthetic object. A synthetic object is generated in VCL, not fetched from the backend. Its body may be constructed using the ``synthetic()`` function. A `vcl_synth` defined object never enters the cache, contrary to a :ref:`vcl_backend_error` defined object, which may end up in cache. 
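For illustration, a typical synthetic response is constructed along these lines before returning with one of the keywords listed below; the body markup is a placeholder::

    sub vcl_synth {
        set resp.http.Content-Type = "text/html; charset=utf-8";
        synthetic("<html><body>Error " + resp.status + ": " + resp.reason + "</body></html>");
        return (deliver);
    }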
The subroutine may terminate with calling ``return()`` with one of the following keywords: | | ``fail`` | see :ref:`fail` section above | | ``restart`` | see :ref:`restart` section above | | ``deliver`` | Directly deliver the object defined by `vcl_synth` to the client | without calling `vcl_deliver`. Backend Side ############ .. _vcl_backend_fetch: vcl_backend_fetch ~~~~~~~~~~~~~~~~~ Called before sending the backend request. In this subroutine you typically alter the request before it gets to the backend. The `vcl_backend_fetch` subroutine may terminate with calling ``return()`` with one of the following keywords: | | ``fail`` | see :ref:`fail` section above | | ``abandon`` | see :ref:`abandon` section above | | ``fetch`` | Fetch the object from the backend. | | ``error(status code, reason)`` | Transition to :ref:`vcl_backend_error` with ``beresp.status`` and | ``beresp.reason`` being preset to the arguments of ``error()`` if | arguments are provided. Before calling `vcl_backend_fetch`, Varnish core prepares the `bereq` backend request as follows: * Unless the request is a `pass`, * set ``bereq.method`` to ``GET`` and ``bereq.proto`` to ``HTTP/1.1`` and * set ``bereq.http.Accept_Encoding`` to ``gzip`` if :ref:`ref_param_http_gzip_support` is enabled. * If there is an existing cache object to be revalidated, set ``bereq.http.If-Modified-Since`` from its ``Last-Modified`` header and/or set ``bereq.http.If-None-Match`` from its ``Etag`` header * Set ``bereq.http.X-Varnish`` to the current transaction id (`vxid`) These changes can be undone or modified in `vcl_backend_fetch` before the backend request is issued. In particular, to cache non-GET requests, ``req.method`` needs to be saved to a header or variable in :ref:`vcl_recv` and restored to ``bereq.method``. Notice that caching non-GET requests typically also requires changing the cache key in :ref:`vcl_hash` e.g. by also hashing the request method and/or request body. HEAD request can be satisfied from cached GET responses. .. _vcl_backend_response: vcl_backend_response ~~~~~~~~~~~~~~~~~~~~ Called after the response headers have been successfully retrieved from the backend. The `vcl_backend_response` subroutine may terminate with calling ``return()`` with one of the following keywords: | | ``fail`` | see :ref:`fail` section above | | ``abandon`` | see :ref:`abandon` section above | | ``deliver`` | For a 304 response, create an updated cache object. | Otherwise, fetch the object body from the backend and initiate | delivery to any waiting client requests, possibly in parallel | (streaming). | | ``retry`` | Retry the backend transaction. Increases the `retries` counter. | If the number of retries is higher than *max_retries*, | control will be passed to :ref:`vcl_backend_error`. | | ``pass(duration)`` | Mark the object as a hit-for-pass for the given duration. Subsequent | lookups hitting this object will be turned into passed transactions, | as if ``vcl_recv`` had returned ``pass``. | | ``error(status code, reason)`` | Transition to :ref:`vcl_backend_error` with ``beresp.status`` and | ``beresp.reason`` being preset to the arguments of ``error()`` if | arguments are provided. .. _vcl_backend_error: vcl_backend_error ~~~~~~~~~~~~~~~~~ This subroutine is called if we fail the backend fetch or if *max_retries* has been exceeded. Returning with :ref:`abandon` does not leave a cache object. 
If returning with ``deliver`` and a ``beresp.ttl > 0s``, a synthetic cache object is generated in VCL, whose body may be constructed using the ``synthetic()`` function. When there is a waiting list on the object, the default ``ttl`` will be positive (currently one second), set before entering ``vcl_backend_error``. This is to avoid request serialization and hammering on a potentially failing backend. Since these synthetic objects are cached in these special circumstances, be cautious with putting private information there. If you really must, then you need to explicitly set ``beresp.ttl`` to zero in ``vcl_backend_error``. The `vcl_backend_error` subroutine may terminate with calling ``return()`` with one of the following keywords: | | ``fail`` | see :ref:`fail` section above | | ``abandon`` | see :ref:`abandon` section above | | ``deliver`` | Deliver and possibly cache the object defined in | `vcl_backend_error` **as if it was fetched from the backend**, also | referred to as a "backend synth". | | ``retry`` | Retry the backend transaction. Increases the `retries` counter. | If the number of retries is higher than *max_retries*, | :ref:`vcl_synth` on the client side is called with ``resp.status`` | preset to 503. During vcl.load / vcl.discard ############################# .. _vcl_init: vcl_init ~~~~~~~~ Called when VCL is loaded, before any requests pass through it. Typically used to initialize VMODs. The `vcl_init` subroutine may terminate with calling ``return()`` with one of the following keywords: | | ``ok`` | Normal return, VCL continues loading. | | ``fail`` | Abort loading of this VCL. .. _vcl_fini: vcl_fini ~~~~~~~~ Called when VCL is discarded only after all requests have exited the VCL. Typically used to clean up VMODs. The `vcl_fini` subroutine may terminate with calling ``return()`` with one of the following keywords: | | ``ok`` | Normal return, VCL will be discarded. varnish-7.5.0/doc/sphinx/reference/vcl_var.rst000066400000000000000000001011001457605730600214050ustar00rootroot00000000000000.. Copyright (c) 2018-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. NOTE: please maintain lexicographic order of foo.* variable names .. _vcl_variables: local, server, remote and client -------------------------------- These variables describe the network connection between the client and varnishd. Without PROXY protocol:: client server remote local v v CLIENT ------------ VARNISHD With PROXY protocol:: client server remote local v v v v CLIENT ------------ PROXY ------------ VARNISHD .. _client.identity: client.identity Type: STRING Readable from: client, backend Writable from: client Identification of the client, used to load balance in the client director. Defaults to ``client.ip`` This variable can be overwritten with more precise information, for instance extracted from a ``Cookie:`` header. .. _client.ip: client.ip Type: IP Readable from: client, backend The client's IP address, either the same as ``remote.ip`` or what the PROXY protocol told us. .. _server.hostname: server.hostname Type: STRING Readable from: all The host name of the server, as returned by the `gethostname(3)` system function. .. _server.identity: server.identity Type: STRING Readable from: all The identity of the server, as set by the ``-i`` parameter. If an ``-i`` parameter is not passed to varnishd, the return value from `gethostname(3)` system function will be used. .. 
_server.ip: server.ip Type: IP Readable from: client, backend The IP address of the socket on which the client connection was received, either the same as ``local.ip`` or what the PROXY protocol told us. .. _remote.ip: remote.ip Type: IP Readable from: client, backend The IP address of the other end of the TCP connection. This can either be the client's IP, or the outgoing IP of a proxy server. If the connection is a UNIX domain socket, the value will be ``0.0.0.0:0`` .. _local.endpoint: local.endpoint ``VCL >= 4.1`` Type: STRING Readable from: client, backend The address of the '-a' socket the session was accepted on. If the argument was ``-a foo=:81`` this would be ":81" .. _local.ip: local.ip Type: IP Readable from: client, backend The IP address (and port number) of the local end of the TCP connection, for instance ``192.168.1.1:81`` If the connection is a UNIX domain socket, the value will be ``0.0.0.0:0`` .. _local.socket: local.socket ``VCL >= 4.1`` Type: STRING Readable from: client, backend The name of the '-a' socket the session was accepted on. If the argument was ``-a foo=:81`` this would be "foo". Note that every '-a' socket gets a default name of the form ``a%d`` if no name is provided. req and req_top --------------- These variables describe the present request, and when ESI:include requests are being processed, req_top points to the request received from the client. .. _req: req Type: HTTP Readable from: client The entire request HTTP data structure. Mostly useful for passing to VMODs. .. _req.backend_hint: req.backend_hint Type: BACKEND Readable from: client Writable from: client Set bereq.backend to this if we attempt to fetch. When set to a director, reading this variable returns an actual backend if the director has resolved immediately, or the director otherwise. When used in string context, returns the name of the director or backend, respectively. .. _req.can_gzip: req.can_gzip Type: BOOL Readable from: client True if the client provided ``gzip`` or ``x-gzip`` in the ``Accept-Encoding`` header. req.esi ``VCL <= 4.0`` Type: BOOL Readable from: client Writable from: client Set to ``false`` to disable ESI processing regardless of any value in beresp.do_esi. Defaults to ``true``. This variable is replaced by ``resp.do_esi`` in VCL 4.1. .. _req.esi_level: req.esi_level Type: INT Readable from: client A count of how many levels of ESI requests we're currently at. .. _req.grace: req.grace Type: DURATION Readable from: client Writable from: client Upper limit on the object grace. During lookup the minimum of req.grace and the object's stored grace value will be used as the object's grace. .. _req.hash: req.hash Type: BLOB Readable from: vcl_hit, vcl_miss, vcl_pass, vcl_purge, vcl_deliver The hash key of this request. Mostly useful for passing to VMODs, but can also be useful for debugging hit/miss status. .. _req.hash_always_miss: req.hash_always_miss Type: BOOL Readable from: client Writable from: client Default: ``false``. Force a cache miss for this request, even if perfectly good matching objects are in the cache. This is useful to force-update the cache without invalidating existing entries in case the fetch fails. .. _req.hash_ignore_busy: req.hash_ignore_busy Type: BOOL Readable from: client Writable from: client Default: ``false``. Ignore any busy object during cache lookup. You only want to do this when you have two servers looking up content sideways from each other to avoid deadlocks. ..
_req.hash_ignore_vary: req.hash_ignore_vary Type: BOOL Readable from: client Writable from: client Default: ``false``. Ignore objects vary headers during cache lookup. This returns the very first match regardless of the object compatibility with the client request. This is useful when variants are irrelevant to certain clients, and differences in the way the resouce is presented don't change how the client will interpret it. Use with caution. .. _req.http: req.http.* Type: HEADER Readable from: client Writable from: client Unsetable from: client The headers of request, things like ``req.http.date``. The RFCs allow multiple headers with the same name, and both ``set`` and ``unset`` will remove *all* headers with the name given. The header name ``*`` is a VCL symbol and as such cannot, for example, start with a numeral. To work with valid header that can't be represented as VCL symbols it is possible to quote the name, like ``req.http."grammatically.valid"``. None of the HTTP headers present in IANA registries need to be quoted, so the quoted syntax is discouraged but available for interoperability. Some headers that cannot be tampered with for proper HTTP fetch or delivery are read-only. req.http.content-length Type: HEADER Readable from: client The content-length header field is protected, see protected_headers_. req.http.transfer-encoding Type: HEADER Readable from: client The transfer-encoding header field is protected, see protected_headers_. .. _req.is_hitmiss: req.is_hitmiss Type: BOOL Readable from: client If this request resulted in a hitmiss .. _req.is_hitpass: req.is_hitpass Type: BOOL Readable from: client If this request resulted in a hitpass .. _req.method: req.method Type: STRING Readable from: client Writable from: client The request method (e.g. "GET", "HEAD", ...) req.proto ``VCL <= 4.0`` Type: STRING Readable from: client Writable from: client The HTTP protocol version used by the client, usually "HTTP/1.1" or "HTTP/2.0". .. _req.proto: req.proto ``VCL >= 4.1`` Type: STRING Readable from: client The HTTP protocol version used by the client, usually "HTTP/1.1" or "HTTP/2.0". .. _req.restarts: req.restarts Type: INT Readable from: client A count of how many times this request has been restarted. .. _req.storage: req.storage Type: STEVEDORE Readable from: client Writable from: client The storage backend to use to save this request body. .. _req.time: req.time Type: TIME Readable from: client The time when the request was fully received, remains constant across restarts. .. _req.trace: req.trace Type: BOOL Readable from: client Writable from: client Controls if ``VCL_trace`` VSL records are emitted for the current request, see :ref:`vsl(7)`. Defaults to the setting of the ``feature trace`` parameter, see :ref:`varnishd(1)`. Does not get reset by a rollback. .. _req.transport: req.transport Type: STRING Readable from: client The transport protocol which brought this request. .. _req.ttl: req.ttl Type: DURATION Readable from: client Writable from: client Upper limit on the object age for cache lookups to return hit. .. _req.url: req.url Type: STRING Readable from: client Writable from: client The requested URL, for instance "/robots.txt". .. _req.xid: req.xid Type: INT Readable from: client Unique ID of this request. .. _req_top.http: req_top.http.* Type: HEADER Readable from: client HTTP headers of the top-level request in a tree of ESI requests. Identical to req.http. in non-ESI requests. See req.http_ for general notes. .. 
_req_top.method: req_top.method Type: STRING Readable from: client The request method of the top-level request in a tree of ESI requests. (e.g. "GET", "HEAD"). Identical to req.method in non-ESI requests. .. _req_top.proto: req_top.proto Type: STRING Readable from: client HTTP protocol version of the top-level request in a tree of ESI requests. Identical to req.proto in non-ESI requests. .. _req_top.time: req_top.time Type: TIME Readable from: client The time when the top-level request was fully received, remains constant across restarts. .. _req_top.url: req_top.url Type: STRING Readable from: client The requested URL of the top-level request in a tree of ESI requests. Identical to req.url in non-ESI requests. bereq ----- This is the request we send to the backend, it is built from the clients ``req.*`` fields by filtering out "per-hop" fields which should not be passed along (``Connection:``, ``Range:`` and similar). Slightly more fields are allowed through for ``pass` fetches than for `miss` fetches, for instance ``Range``. bereq Type: HTTP Readable from: backend The entire backend request HTTP data structure. Mostly useful as argument to VMODs. .. _bereq.backend: bereq.backend Type: BACKEND Readable from: vcl_pipe, backend Writable from: vcl_pipe, backend This is the backend or director we attempt to fetch from. When set to a director, reading this variable returns an actual backend if the director has resolved immediately, or the director otherwise. When used in string context, returns the name of the director or backend, respectively. .. _bereq.between_bytes_timeout: bereq.between_bytes_timeout Type: DURATION Readable from: backend Writable from: backend Unsetable from: vcl_pipe, backend Default: ``.between_bytes_timeout`` attribute from the :ref:`backend_definition`, which defaults to the ``between_bytes_timeout`` parameter, see :ref:`varnishd(1)`. The time in seconds to wait between each received byte from the backend. Not available in pipe mode. .. _bereq.body: bereq.body Type: BODY Unsetable from: vcl_backend_fetch The request body. Unset will also remove bereq.http.content-length_. .. _bereq.connect_timeout: bereq.connect_timeout Type: DURATION Readable from: vcl_pipe, backend Writable from: vcl_pipe, backend Unsetable from: vcl_pipe, backend Default: ``.connect_timeout`` attribute from the :ref:`backend_definition`, which defaults to the ``connect_timeout`` parameter, see :ref:`varnishd(1)`. The time in seconds to wait for a backend connection to be established. .. _bereq.first_byte_timeout: bereq.first_byte_timeout Type: DURATION Readable from: backend Writable from: backend Unsetable from: vcl_pipe, backend Default: ``.first_byte_timeout`` attribute from the :ref:`backend_definition`, which defaults to the ``first_byte_timeout`` parameter, see :ref:`varnishd(1)`. The time in seconds to wait getting the first byte back from the backend. Not available in pipe mode. .. _bereq.hash: bereq.hash Type: BLOB Readable from: vcl_pipe, backend The hash key of this request, a copy of ``req.hash``. .. _bereq.http: bereq.http.* Type: HEADER Readable from: vcl_pipe, backend Writable from: vcl_pipe, backend Unsetable from: vcl_pipe, backend The headers to be sent to the backend. See req.http_ for general notes. .. _bereq.http.content-length: bereq.http.content-length Type: HEADER Readable from: backend The content-length header field is protected, see protected_headers_. 
bereq.http.transfer-encoding Type: HEADER Readable from: backend The transfer-encoding header field is protected, see protected_headers_. .. _bereq.is_bgfetch: bereq.is_bgfetch Type: BOOL Readable from: backend True for fetches where the client got a hit on an object in grace, and this fetch was kicked off in the background to get a fresh copy. .. _bereq.is_hitmiss: bereq.is_hitmiss Type: BOOL Readable from: backend If this backend request was caused by a hitmiss. .. _bereq.is_hitpass: bereq.is_hitpass Type: BOOL Readable from: backend If this backend request was caused by a hitpass. .. _bereq.method: bereq.method Type: STRING Readable from: vcl_pipe, backend Writable from: vcl_pipe, backend The request method (e.g. "GET", "HEAD"). Regular (non-pipe, non-pass) fetches are always "GET" bereq.proto ``VCL <= 4.0`` Type: STRING Readable from: vcl_pipe, backend Writable from: vcl_pipe, backend The HTTP protocol version, "HTTP/1.1" unless a pass or pipe request has "HTTP/1.0" in ``req.proto`` .. _bereq.proto: bereq.proto ``VCL >= 4.1`` Type: STRING Readable from: vcl_pipe, backend The HTTP protocol version, "HTTP/1.1" unless a pass or pipe request has "HTTP/1.0" in ``req.proto`` .. _bereq.retries: bereq.retries Type: INT Readable from: backend A count of how many times this request has been retried. .. _bereq.task_deadline: bereq.task_deadline Type: DURATION Readable from: vcl_pipe Writable from: vcl_pipe Unsetable from: vcl_pipe Deadline for pipe sessions, defaults to ``0s``, which falls back to the ``pipe_task_deadline`` parameter, see :ref:`varnishd(1)` .. _bereq.time: bereq.time Type: TIME Readable from: vcl_pipe, backend The time when we started preparing the first backend request, remains constant across retries. .. _bereq.trace: bereq.trace Type: BOOL Readable from: backend Writable from: backend Controls if ``VCL_trace`` VSL records are emitted for the current request, see :ref:`vsl(7)`. Inherits the value of ``req.trace`` when the backend request is created. Does not get reset by a rollback. .. _bereq.uncacheable: bereq.uncacheable Type: BOOL Readable from: backend Indicates whether this request is uncacheable due to a `pass` on the client side or a hit on a hit-for-pass object. .. _bereq.url: bereq.url Type: STRING Readable from: vcl_pipe, backend Writable from: vcl_pipe, backend The requested URL, copied from ``req.url`` .. _bereq.xid: bereq.xid Type: INT Readable from: vcl_pipe, backend Unique ID of this request. beresp ------ The response received from the backend. On cache misses, the stored object is built from ``beresp``. beresp Type: HTTP Readable from: vcl_backend_response, vcl_backend_error The entire backend response HTTP data structure, useful as argument to VMOD functions. .. _beresp.age: beresp.age Type: DURATION Readable from: vcl_backend_response, vcl_backend_error Default: Age header, or zero. The age of the object. .. _beresp.backend: beresp.backend Type: BACKEND Readable from: vcl_backend_response, vcl_backend_error This is the backend we fetched from. If bereq.backend was set to a director, this will be the backend selected by the director. When used in string context, returns its name. beresp.backend.ip ``VCL <= 4.0`` Type: IP Readable from: vcl_backend_response IP of the backend this response was fetched from. .. _beresp.backend.name: beresp.backend.name Type: STRING Readable from: vcl_backend_response, vcl_backend_error Name of the backend this response was fetched from. Same as beresp.backend. ..
_beresp.body: beresp.body Type: BODY Writable from: vcl_backend_error For producing a synthetic body. .. _beresp.do_esi: beresp.do_esi Type: BOOL Readable from: vcl_backend_response, vcl_backend_error Writable from: vcl_backend_response, vcl_backend_error Default: ``false``. Set it to true to parse the object for ESI directives. This is necessary for later ESI processing on the client side. If beresp.do_esi is false when an object enters the cache, client side ESI processing will not be possible (obj.can_esi will be false). It is a VCL error to use beresp.do_esi after setting beresp.filters. .. _beresp.do_gunzip: beresp.do_gunzip Type: BOOL Readable from: vcl_backend_response, vcl_backend_error Writable from: vcl_backend_response, vcl_backend_error Default: ``false``. Set to ``true`` to gunzip the object while storing it in the cache. If ``http_gzip_support`` is disabled, setting this variable has no effect. It is a VCL error to use beresp.do_gunzip after setting beresp.filters. .. _beresp.do_gzip: beresp.do_gzip Type: BOOL Readable from: vcl_backend_response, vcl_backend_error Writable from: vcl_backend_response, vcl_backend_error Default: ``false``. Set to ``true`` to gzip the object while storing it. If ``http_gzip_support`` is disabled, setting this variable has no effect. It is a VCL error to use beresp.do_gzip after setting beresp.filters. .. _beresp.do_stream: beresp.do_stream Type: BOOL Readable from: vcl_backend_response, vcl_backend_error Writable from: vcl_backend_response, vcl_backend_error Default: ``true``. Deliver the object to the client while fetching the whole object into varnish. For uncacheable objects, storage for parts of the body which have been sent to the client may get freed early, depending on the storage engine used. This variable has no effect if beresp.do_esi is true or when the response body is empty. .. _beresp.filters: beresp.filters Type: STRING Readable from: vcl_backend_response Writable from: vcl_backend_response List of Varnish Fetch Processor (VFP) filters the beresp.body will be pulled through. The order left to right signifies processing from backend to cache, iow the leftmost filter is run first on the body as received from the backend after decoding of any transfer encodings. VFP Filters change the body before going into the cache and/or being handed to the client side, where it may get processed again by resp.filters. The following VFP filters exist in varnish-cache: * ``gzip``: compress a body using gzip * ``testgunzip``: Test if a body is valid gzip and refuse it otherwise * ``gunzip``: Uncompress gzip content * ``esi``: ESI-process plain text content * ``esi_gzip``: Save gzipped snippets for efficient ESI-processing This filter enables stitching together ESI from individually gzipped fragments, saving processing power for re-compression on the client side at the expense of some compression efficiency. Additional VFP filters are available from VMODs. By default, beresp.filters is constructed as follows: * ``gunzip`` gets added for gzipped content if ``beresp.do_gunzip`` or ``beresp.do_esi`` are true. * ``esi_gzip`` gets added if ``beresp.do_esi`` is true together with ``beresp.do_gzip`` or content is already compressed. * ``esi`` gets added if ``beresp.do_esi`` is true * ``gzip`` gets added for uncompressed content if ``beresp.do_gzip`` is true * ``testgunzip`` gets added for compressed content if ``beresp.do_gunzip`` is false. After beresp.filters is set, using any of the beforementioned ``beresp.do_*`` switches is a VCL error. .. 
_beresp.grace: beresp.grace Type: DURATION Readable from: vcl_backend_response, vcl_backend_error Writable from: vcl_backend_response, vcl_backend_error Default: Cache-Control ``stale-while-revalidate`` directive, or ``default_grace`` parameter. Set to a period to enable grace. .. _beresp.http: beresp.http.* Type: HEADER Readable from: vcl_backend_response, vcl_backend_error Writable from: vcl_backend_response, vcl_backend_error Unsetable from: vcl_backend_response, vcl_backend_error The HTTP headers returned from the server. See req.http_ for general notes. beresp.http.content-length Type: HEADER Readable from: vcl_backend_response, vcl_backend_error The content-length header field is protected, see protected_headers_. beresp.http.transfer-encoding Type: HEADER Readable from: vcl_backend_response, vcl_backend_error The transfer-encoding header field is protected, see protected_headers_. .. _beresp.keep: beresp.keep Type: DURATION Readable from: vcl_backend_response, vcl_backend_error Writable from: vcl_backend_response, vcl_backend_error Default: ``default_keep`` parameter. Set to a period to enable conditional backend requests. The keep time is cache lifetime in addition to the ttl. Objects with ttl expired but with keep time left may be used to issue conditional (If-Modified-Since / If-None-Match) requests to the backend to refresh them. beresp.proto ``VCL <= 4.0`` Type: STRING Readable from: vcl_backend_response, vcl_backend_error Writable from: vcl_backend_response, vcl_backend_error The HTTP protocol version the backend replied with. .. _beresp.proto: beresp.proto ``VCL >= 4.1`` Type: STRING Readable from: vcl_backend_response, vcl_backend_error The HTTP protocol version the backend replied with. .. _beresp.reason: beresp.reason Type: STRING Readable from: vcl_backend_response, vcl_backend_error Writable from: vcl_backend_response, vcl_backend_error The HTTP status message returned by the server. .. _beresp.status: beresp.status Type: INT Readable from: vcl_backend_response, vcl_backend_error Writable from: vcl_backend_response, vcl_backend_error The HTTP status code returned by the server. More information in the `HTTP response status`_ section. .. _beresp.storage: beresp.storage Type: STEVEDORE Readable from: vcl_backend_response, vcl_backend_error Writable from: vcl_backend_response, vcl_backend_error The storage backend to use to save this object. beresp.storage_hint ``VCL <= 4.0`` Type: STRING Readable from: vcl_backend_response, vcl_backend_error Writable from: vcl_backend_response, vcl_backend_error Deprecated since varnish 5.1 and discontinued since VCL 4.1 (varnish 6.0). Use beresp.storage instead. Hint to Varnish that you want to save this object to a particular storage backend. .. _beresp.time: beresp.time Type: TIME Readable from: vcl_backend_response, vcl_backend_error When the backend headers were fully received just before ``vcl_backend_response {}`` was entered, or when ``vcl_backend_error {}`` was entered. .. _beresp.transit_buffer: beresp.transit_buffer Type: BYTES Readable from: vcl_backend_response Writable from: vcl_backend_response Default: ``transit_buffer`` parameter, see :ref:`varnishd(1)`. The maximum number of bytes the client can be ahead of the backend during a streaming pass if ``beresp`` is uncacheable. See also ``transit_buffer`` parameter documentation in :ref:`varnishd(1)`. .. 
_beresp.ttl: beresp.ttl Type: DURATION Readable from: vcl_backend_response, vcl_backend_error Writable from: vcl_backend_response, vcl_backend_error Default: Cache-Control ``s-maxage`` or ``max-age`` directives, or a value computed from the Expires header's deadline, or the ``default_ttl`` parameter. The object's remaining time to live, in seconds. .. _beresp.uncacheable: beresp.uncacheable Type: BOOL Readable from: vcl_backend_response, vcl_backend_error Writable from: vcl_backend_response, vcl_backend_error Inherited from bereq.uncacheable, see there. Setting this variable makes the object uncacheable. This may produce a hit-for-miss object in the cache. Clearing the variable has no effect and will log the warning "Ignoring attempt to reset beresp.uncacheable". .. _beresp.was_304: beresp.was_304 Type: BOOL Readable from: vcl_backend_response, vcl_backend_error When ``true`` this indicates that we got a 304 response to our conditional fetch from the backend and turned that into ``beresp.status = 200`` obj --- This is the object we found in cache. It cannot be modified. .. _obj.age: obj.age Type: DURATION Readable from: vcl_hit, vcl_deliver The age of the object. .. _obj.can_esi: obj.can_esi Type: BOOL Readable from: vcl_hit, vcl_deliver If the object can be ESI processed, that is if setting ``resp.do_esi`` or adding ``esi`` to ``resp.filters`` in ``vcl_deliver {}`` would cause the response body to be ESI processed. .. _obj.grace: obj.grace Type: DURATION Readable from: vcl_hit, vcl_deliver The object's grace period in seconds. .. _obj.hits: obj.hits Type: INT Readable from: vcl_hit, vcl_deliver The count of cache-hits on this object. In `vcl_deliver` a value of 0 indicates a cache miss. .. _obj.http: obj.http.* Type: HEADER Readable from: vcl_hit The HTTP headers stored in the object. See req.http_ for general notes. .. _obj.keep: obj.keep Type: DURATION Readable from: vcl_hit, vcl_deliver The object's keep period in seconds. .. _obj.proto: obj.proto Type: STRING Readable from: vcl_hit The HTTP protocol version stored in the object. .. _obj.reason: obj.reason Type: STRING Readable from: vcl_hit The HTTP reason phrase stored in the object. .. _obj.status: obj.status Type: INT Readable from: vcl_hit The HTTP status code stored in the object. More information in the `HTTP response status`_ section. .. _obj.storage: obj.storage Type: STEVEDORE Readable from: vcl_hit, vcl_deliver The storage backend where this object is stored. .. _obj.time: obj.time Type: TIME Readable from: vcl_hit, vcl_deliver The time the object was created from the perspective of the server which generated it. This will roughly be equivalent to ``now`` - ``obj.age``. .. _obj.ttl: obj.ttl Type: DURATION Readable from: vcl_hit, vcl_deliver The object's remaining time to live, in seconds. .. _obj.uncacheable: obj.uncacheable Type: BOOL Readable from: vcl_deliver Whether the object is uncacheable (pass, hit-for-pass or hit-for-miss). resp ---- This is the response we send to the client; it is built from either ``beresp`` (pass/miss), ``obj`` (hits) or created from whole cloth (synth). With the exception of ``resp.body``, all ``resp.*`` variables are available in both ``vcl_deliver{}`` and ``vcl_synth{}`` as a matter of symmetry. resp Type: HTTP Readable from: vcl_deliver, vcl_synth The entire response HTTP data structure, useful as argument to VMODs. .. _resp.body: resp.body Type: BODY Writable from: vcl_synth To produce a synthetic response body, for instance for errors. ..
_resp.do_esi: resp.do_esi ``VCL >= 4.1`` Type: BOOL Readable from: vcl_deliver, vcl_synth Writable from: vcl_deliver, vcl_synth Default: obj.can_esi This can be used to selectively disable ESI processing, even though ESI parsing happened during fetch (see beresp.do_esi). This is useful when Varnish caches peer with each other. It is a VCL error to use resp.do_esi after setting resp.filters. .. _resp.filters: resp.filters Type: STRING Readable from: vcl_deliver, vcl_synth Writable from: vcl_deliver, vcl_synth List of VDP filters the resp.body will be pushed through. Before resp.filters is set, the value read will be the default filter list as determined by varnish based on resp.do_esi and request headers. After resp.filters is set, changing any of the conditions which otherwise determine the filter selection will have no effiect. Using resp.do_esi is an error once resp.filters is set. .. _resp.http: resp.http.* Type: HEADER Readable from: vcl_deliver, vcl_synth Writable from: vcl_deliver, vcl_synth Unsetable from: vcl_deliver, vcl_synth The HTTP headers that will be returned. See req.http_ for general notes. resp.http.content-length Type: HEADER Readable from: vcl_deliver, vcl_synth The content-length header field is protected, see protected_headers_. resp.http.transfer-encoding Type: HEADER Readable from: vcl_deliver, vcl_synth The transfer-encoding header field is protected, see protected_headers_. .. _resp.is_streaming: resp.is_streaming Type: BOOL Readable from: vcl_deliver, vcl_synth Returns true when the response will be streamed while being fetched from the backend. resp.proto ``VCL <= 4.0`` Type: STRING Readable from: vcl_deliver, vcl_synth Writable from: vcl_deliver, vcl_synth The HTTP protocol version to use for the response. .. _resp.proto: resp.proto ``VCL >= 4.1`` Type: STRING Readable from: vcl_deliver, vcl_synth The HTTP protocol version to use for the response. .. _resp.reason: resp.reason Type: STRING Readable from: vcl_deliver, vcl_synth Writable from: vcl_deliver, vcl_synth The HTTP status message that will be returned. .. _resp.status: resp.status Type: INT Readable from: vcl_deliver, vcl_synth Writable from: vcl_deliver, vcl_synth The HTTP status code that will be returned. More information in the `HTTP response status`_ section. resp.status 200 will get changed into 304 by core code after a return(deliver) from vcl_deliver for conditional requests to cached content if validation succeeds. For the validation, first ``req.http.If-None-Match`` is compared against ``resp.http.Etag``. If they compare equal according to the rules for weak validation (see RFC7232), a 304 is sent. Secondly, ``req.http.If-Modified-Since`` is compared against ``resp.http.Last-Modified`` or, if it is unset or weak, against the point in time when the object was last modified based on the ``Date`` and ``Age`` headers received with the backend response which created the object. If the object has not been modified based on that comparison, a 304 is sent. .. _resp.time: resp.time Type: TIME Readable from: vcl_deliver, vcl_synth The time when we started preparing the response, just before entering ``vcl_synth {}`` or ``vcl_deliver {}``. Special variables ----------------- .. _now: now Type: TIME Readable from: all The current time, in seconds since the UNIX epoch. 
When converted to STRING in expressions it returns a formatted timestamp like ``Tue, 20 Feb 2018 09:30:31 GMT`` ``now`` remains stable for the duration of any built-in VCL subroutine to make time-based calculations predictable and avoid edge cases. In other words, even if considerable amounts of time are spent in VCL, ``now`` will always represent the point in time when the respective built-in VCL subroutine was entered. ``now`` is thus not suitable for any kind of time measurements. See :ref:`std.timestamp()`, :ref:`std.now()` and :ref:`std.timed_call()` in :ref:`vmod_std(3)`. sess ---- A session corresponds to the "conversation" that Varnish has with a single client connection, over which one or more request/response transactions may take place. It may comprise the traffic over an HTTP/1 keep-alive connection, or the multiplexed traffic over an HTTP/2 connection. .. _sess.idle_send_timeout: sess.idle_send_timeout Type: DURATION Readable from: client Writable from: client Unsetable from: client Send timeout for individual pieces of data on client connections, defaults to the ``idle_send_timeout`` parameter, see :ref:`varnishd(1)` .. _sess.send_timeout: sess.send_timeout Type: DURATION Readable from: client Writable from: client Unsetable from: client Total timeout for ordinary HTTP1 responses, defaults to the ``send_timeout`` parameter, see :ref:`varnishd(1)` .. _sess.timeout_idle: sess.timeout_idle Type: DURATION Readable from: client Writable from: client Unsetable from: client Idle timeout for this session, defaults to the ``timeout_idle`` parameter, see :ref:`varnishd(1)` .. _sess.timeout_linger: sess.timeout_linger Type: DURATION Readable from: client Writable from: client Unsetable from: client Linger timeout for this session, defaults to the ``timeout_linger`` parameter, see :ref:`varnishd(1)` .. _sess.xid: sess.xid ``VCL >= 4.1`` Type: INT Readable from: client, backend Unique ID of this session. storage ------- .. XXX all of these are actually defined in generate.py .. _storage.free_space: storage..free_space Type: BYTES Readable from: client, backend Default: 0 Free space available in the named stevedore. Only available for the malloc stevedore. .. _storage.happy: storage..happy Type: BOOL Readable from: client, backend Default: true Health status for the named stevedore. Not available in any of the current stevedores. .. _storage.used_space: storage..used_space Type: BYTES Readable from: client, backend Default: 0 Used space in the named stevedore. Only available for the malloc stevedore. varnish-7.5.0/doc/sphinx/reference/vext.rst000066400000000000000000000017621457605730600207540ustar00rootroot00000000000000.. Copyright (c) 2010-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _ref-vext: %%%%%%%%%%%%%%%%%%%%%%%%% VEXT - Varnish Extensions %%%%%%%%%%%%%%%%%%%%%%%%% A Varnish Extension is a shared library, loaded into the worker process during startup, before privilges are dropped for good. This allows a VEXT to do pretty much anything it wants to do in the worker process. A VEXT can (also) contain a VMOD, and it will work just like any other VMOD, which also means that VMODs can be loaded as VEXTs. The VEXTs are loaded in the child process, in the order they are specified on the commandline, after the ``heritage`` has been established, but before stevedores are initialized and jail privileges are dropped. 
There is currently no ``init`` entrypoint defined, but a VEXT can use a static initializer to get activated on loading. If those static initializers want to bail out, ``stderr`` and ``exit(3)`` can be used to convey diagnostics. varnish-7.5.0/doc/sphinx/reference/vmod.rst000066400000000000000000000661521457605730600207370ustar00rootroot00000000000000.. Copyright (c) 2010-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _ref-vmod: %%%%%%%%%%%%%%%%%%%%%% VMOD - Varnish Modules %%%%%%%%%%%%%%%%%%%%%% For all you can do in VCL, there are things you can not do. Look an IP number up in a database file for instance. VCL provides for inline C code, and there you can do everything, but it is not a convenient or even readable way to solve such problems. This is where VMODs come into the picture: A VMOD is a shared library with some C functions which can be called from VCL code. For instance:: import std; sub vcl_deliver { set resp.http.foo = std.toupper(req.url); } The "std" vmod is one you get with Varnish, it will always be there and we will put "boutique" functions in it, such as the "toupper" function shown above. The full contents of the "std" module is documented in vmod_std(3). This part of the manual is about how you go about writing your own VMOD, how the language interface between C and VCC works, where you can find contributed VMODs etc. This explanation will use the "std" VMOD as example, having a Varnish source tree handy may be a good idea. VMOD Directory ============== The VMOD directory is an up-to-date compilation of maintained extensions written for Varnish Cache: https://www.varnish-cache.org/vmods The vmod.vcc file ================= The interface between your VMOD and the VCL compiler ("VCC") and the VCL runtime ("VRT") is defined in the vmod.vcc file which a python script called "vmodtool.py" turns into thaumaturgically challenged C data structures that do all the hard work. The std VMODs vmod.vcc file looks somewhat like this:: $ABI strict $Module std 3 "Varnish Standard Module" $Event event_function $Function STRING toupper(STRANDS s) $Function STRING tolower(STRANDS s) $Function VOID set_ip_tos(INT) The ``$ABI`` line is optional. Possible values are ``strict`` (default) and ``vrt``. It allows to specify that a vmod is integrating with the blessed ``vrt`` interface provided by ``varnishd`` or go deeper in the stack. As a rule of thumb you, if the VMOD uses more than the VRT (Varnish RunTime), in which case it needs to be built for the exact Varnish version, use ``strict``. If it complies to the VRT and only needs to be rebuilt when breaking changes are introduced to the VRT API, use ``vrt``. The ``$Module`` line gives the name of the module, the manual section where the documentation will reside, and the description. The ``$Event`` line specifies an optional "Event" function, which will be called whenever a VCL program which imports this VMOD is loaded or transitions to any of the warm, active, cold or discarded states. More on this below. The ``$Function`` lines define three functions in the VMOD, along with the types of the arguments, and that is probably where the hardest bit of writing a VMOD is to be found, so we will talk about that at length in a moment. Notice that the third function returns VOID, that makes it a "procedure" in VCL lingo, meaning that it cannot be used in expressions, right side of assignments and such. 
Instead it can be used as a primary action, something functions which return a value can not:: sub vcl_recv { std.set_ip_tos(32); } Running vmodtool.py on the vmod.vcc file, produces a "vcc_if.c" and "vcc_if.h" files, which you must use to build your shared library file. Forget about vcc_if.c everywhere but your Makefile, you will never need to care about its contents, and you should certainly never modify it, that voids your warranty instantly. But vcc_if.h is important for you, it contains the prototypes for the functions you want to export to VCL. For the std VMOD, the compiled vcc_if.h file looks like this:: VCL_STRING vmod_toupper(VRT_CTX, VCL_STRANDS); VCL_STRING vmod_tolower(VRT_CTX, VCL_STRANDS); VCL_VOID vmod_set_ip_tos(VRT_CTX, VCL_INT); vmod_event_f event_function; Those are your C prototypes. Notice the ``vmod_`` prefix on the function names. Named arguments and default values ---------------------------------- The basic vmod.vcc function declaration syntax introduced above makes all arguments mandatory for calls from vcl - which implies that they need to be given in order. Naming the arguments as in:: $Function BOOL match_acl(ACL acl, IP ip) allows calls from VCL with named arguments in any order, for example:: if (debug.match_acl(ip=client.ip, acl=local)) { # ... Named arguments also take default values, so for this example from the debug vmod:: $Function STRING argtest(STRING one, REAL two=2, STRING three="3", STRING comma=",", INT four=4) only argument `one` is required, so that all of the following are valid invocations from vcl:: debug.argtest("1", 2.1, "3a") debug.argtest("1", two=2.2, three="3b") debug.argtest("1", three="3c", two=2.3) debug.argtest("1", 2.4, three="3d") debug.argtest("1", 2.5) debug.argtest("1", four=6); The C interface does not change with named arguments and default values, arguments remain positional and default values appear no different to user specified values. `Note` that default values have to be given in the native C-type syntax, see below. As a special case, ``NULL`` has to be given as ``0``. Optional arguments ------------------ The vmod.vcc declaration also allows for optional arguments in square brackets like so:: $Function VOID opt(PRIV_TASK priv, INT four = 4, [STRING opt]) With any optional argument present, the C function prototype looks completely different: * Only the ``VRT_CTX`` and object pointer arguments (only for methods) remain positional * All other arguments get passed in a struct as the last argument of the C function. The argument struct is simple, vmod authors should check the `vmodtool`-generated ``vcc_if.c`` file for the function and struct declarations: * for each optional argument, a ``valid_``\ `argument` member is used to signal the presence of the respective optional argument. ``valid_`` argstruct members should only be used as truth values, irrespective of their actual data type. * named arguments are passed in argument struct members by the same name and with the same data type. * unnamed (positional) arguments are passed as ``arg``\ `n` with `n` starting at 1 and incrementing with the argument's position. .. _ref-vmod-vcl-c-objects: Objects and methods ------------------- Varnish also supports a simple object model for vmods. Objects and methods are declared in the vcc file as:: $Object class(...) $Method .method(...) 
For declared object classes of a vmod, object instances can then be created in ``vcl_init { }`` using the ``new`` statement:: sub vcl_init { new foo = vmod.class(...); } and have their methods called anywhere (including in ``vcl_init {}`` after the instantiation):: sub somewhere { foo.method(...); } Nothing prevents a method to be named like the constructor and the meaning of such a method is up to the vmod author:: $Object foo(...) $Method .bar(...) $Method .foo(...) Object instances are represented as pointers to vmod-implemented C structs. Varnish only provides space to store the address of object instances and ensures that the right object address gets passed to C functions implementing methods. * Objects' scope and lifetime are the vcl * Objects can only be created in ``vcl_init {}`` and have their destructors called by varnish after ``vcl_fini {}`` has completed. vmod authors are advised to understand the prototypes in the `vmodtool`\ -generated ``vcc_if.c`` file: * For ``$Object`` declarations, a constructor and destructor function must be implemented * The constructor is named by the suffix ``__init``, always is of ``VOID`` return type and has the following arguments before the vcc-declared parameters: * ``VRT_CTX`` as usual * a pointer-pointer to return the address of the created oject * a string containing the vcl name of the object instance * The destructor is named by the suffix ``__fini``, always is of ``VOID`` return type and has a single argument, the pointer-pointer to the address of the object. The destructor is expected clear the address of the object stored in that pointer-pointer. * Methods gain the pointer to the object as an argument after the ``VRT_CTX``. As varnish is in no way involved in managing object instances other than passing their addresses, vmods need to implement all aspects of managing instances, in particular their memory management. As the lifetime of object instances is the vcl, they will usually be allocated from the heap. Functions and Methods scope restriction --------------------------------------- The ``$Restrict`` stanza offers a way to limit the scope of the preceding vmod function or method, so that they can only be called from restricted vcl call sites. It must only appear after a ``$Method`` or ``$Function`` and has the following syntax:: $Restrict scope1 [scope2 ...] Possible scope values are: ``backend, client, housekeeping, vcl_recv, vcl_pipe, vcl_pass, vcl_hash, vcl_purge, vcl_miss, vcl_hit, vcl_deliver, vcl_synth, vcl_backend_fetch, vcl_backend_response, vcl_backend_error, vcl_init, vcl_fini`` Deprecated Aliases ------------------ The ``$Alias`` stanza offers a mechanism to rename a function or an object's method without removing the previous name. This allows name changes to maintain compatibility until the alias is dropped. The syntax for a function is:: $Alias deprecated_function original_function [description] The syntax for a method is:: $Alias .deprecated_method object.original_method [description] The ``$Alias`` stanza can appear anywhere, this allows grouping them in a dedicated "deprecated" section of their manual. The optional description can be used to explain why a function was renamed. .. _ref-vmod-vcl-c-types: VCL and C data types ==================== VCL data types are targeted at the job, so for instance, we have data types like "DURATION" and "HEADER", but they all have some kind of C language representation. Here is a description of them. All but the PRIV types have typedefs: VCL_INT, VCL_REAL, etc. 
Notice that most of the non-native (C pointer) types are ``const``, which, if returned by a vmod function/method, are assumed to be immutable. In other words, a vmod `must not` modify any data which was previously returned. When returning non-native values, the producing function is responsible for arranging memory management. Either by freeing the structure later by whatever means available or by using storage allocated from the client or backend workspaces. ACL C-type: ``const struct vrt_acl *`` A type for named ACLs declared in VCL. BACKEND C-type: ``const struct director *`` A type for backend and director implementations. See :ref:`ref-writing-a-director`. BLOB C-type: ``const struct vmod_priv *`` An opaque type to pass random bits of memory between VMOD functions. BODY C-type: ``const void *`` A type only used on the LHS of an assignment that can take either a blob or an expression that can be converted to a string. BOOL C-type: ``unsigned`` Zero means false, anything else means true. BYTES C-type: ``double`` Unit: bytes. A storage space, as in 1024 bytes. DURATION C-type: ``double`` Unit: seconds. A time interval, as in 25 seconds. ENUM vcc syntax: ENUM { val1, val2, ... } vcc example: ``ENUM { one, two, three } number="one"`` C-type: ``const char *`` Allows values from a set of constant strings. `Note` that the C-type is a string, not a C enum. Enums will be passed as fixed pointers, so instead of string comparisons, also pointer comparisons with ``VENUM(name)`` are possible. HEADER C-type: ``const struct gethdr_s *`` These are VCL compiler generated constants referencing a particular header in a particular HTTP entity, for instance ``req.http.cookie`` or ``beresp.http.last-modified``. By passing a reference to the header, the VMOD code can both read and write the header in question. If the header was passed as STRING, the VMOD code only sees the value, but not where it came from. HTTP C-type: ``struct http *`` A reference to a header object as ``req.http`` or ``bereq.http``. INT C-type: ``long`` A (long) integer as we know and love them. IP C-type: ``const struct suckaddr *`` This is an opaque type, see the ``include/vsa.h`` file for which primitives we support on this type. PRIV_CALL See :ref:`ref-vmod-private-pointers` below. PRIV_TASK See :ref:`ref-vmod-private-pointers` below. PRIV_TOP See :ref:`ref-vmod-private-pointers` below. PRIV_VCL See :ref:`ref-vmod-private-pointers` below. PROBE C-type: ``const struct vrt_backend_probe *`` A named standalone backend probe definition. REAL C-type: ``double`` A floating point value. REGEX C-type: ``const struct vre *`` This is an opaque type for regular expressions with a VCL scope. The REGEX type is only meant for regular expression literals managed by the VCL compiler. For dynamic regular expressions or complex usage see the API from the ``include/vre.h`` file. STRING C-type: ``const char *`` A NUL-terminated text-string. Can be NULL to indicate a nonexistent string, for instance in:: mymod.foo(req.http.foobar); If there were no "foobar" HTTP header, the vmod_foo() function would be passed a NULL pointer as argument. STEVEDORE C-type: ``const struct stevedore *`` A storage backend. STRANDS C-Type: ``const struct strands *`` Strands are a list of strings that gets passed in a struct with the following members: * ``int n``: the number of strings * ``const char **p``: the array of strings with `n` elements A VMOD should never hold onto strands beyond a function or method execution. See ``include/vrt.h`` for the details. 
TIME C-type: ``double`` Unit: seconds since UNIX epoch. An absolute time, as in 1284401161. VCL_SUB C-type: ``const struct vcl_sub *`` Opaque handle on a VCL subroutine. References to subroutines can be passed into VMODs as arguments and called later through ``VRT_call()``. The scope strictly is the VCL: vmods must ensure that ``VCL_SUB`` references never be called from a different VCL. ``VRT_call()`` fails the VCL for recursive calls and when the ``VCL_SUB`` can not be called from the current context (e.g. calling a subroutine accessing ``req`` from the backend side). For more than one invocation of ``VRT_call()``, VMODs *must* check if ``VRT_handled()`` returns non-zero inbetween calls: The called SUB may have returned with an action (any ``return(x)`` other than plain ``return``) or may have failed the VCL, and in both cases the calling VMOD *must* return also, possibly after having conducted some cleanup. Note that undoing the handling through ``VRT_handling()`` is a bug. ``VRT_check_call()`` can be used to check if a ``VRT_call()`` would succeed in order to avoid the potential VCL failure. It returns ``NULL`` if ``VRT_call()`` would make the call or an error string why not. VOID C-type: ``void`` Can only be used for return-value, which makes the function a VCL procedure. .. _ref-vmod-private-pointers: Private Pointers ================ It is often useful for library functions to maintain local state, this can be anything from a precompiled regexp to open file descriptors and vast data structures. The VCL compiler supports the following private pointers: * ``PRIV_CALL`` "per call" private pointers are useful to cache/store state relative to the specific call or its arguments, for instance a compiled regular expression specific to a regsub() statement or simply caching the most recent output of some expensive operation. These private pointers live for the duration of the loaded VCL. * ``PRIV_TASK`` "per task" private pointers are useful for state that applies to calls for either a specific request or a backend request. For instance this can be the result of a parsed cookie specific to a client. Note that ``PRIV_TASK`` contexts are separate for the client side and the backend side, so use in ``vcl_backend_*`` will yield a different private pointer from the one used on the client side. These private pointers live only for the duration of their task. * ``PRIV_TOP`` "per top-request" private pointers live for the duration of one request and all its ESI-includes. They are only defined for the client side. When used from backend VCL subs, a NULL pointer will potentially be passed and a VCL failure triggered. These private pointers live only for the duration of their top level request .. PRIV_TOP see #3498 * ``PRIV_VCL`` "per vcl" private pointers are useful for such global state that applies to all calls in this VCL, for instance flags that determine if regular expressions are case-sensitive in this vmod or similar. The ``PRIV_VCL`` object is the same object that is passed to the VMOD's event function. This private pointer lives for the duration of the loaded VCL. The ``PRIV_CALL`` vmod_privs are finalized before ``PRIV_VCL``. The way it works in the vmod code, is that a ``struct vmod_priv *`` is passed to the functions where one of the ``PRIV_*`` argument types is specified. 
This structure contains three members:: struct vmod_priv { void *priv; long len; const struct vmod_priv_methods *methods; }; The ``.priv`` and ``.len`` elements can be used for whatever the vmod code wants to use them for. ``.methods`` can be an optional pointer to a struct of callbacks:: typedef void vmod_priv_fini_f(VRT_CTX, void *); struct vmod_priv_methods { unsigned magic; const char *type; vmod_priv_fini_f *fini; }; ``.magic`` has to be initialized to ``VMOD_PRIV_METHODS_MAGIC``. ``.type`` should be a descriptive name to help debugging. ``.fini`` will be called for a non-NULL ``.priv`` of the ``struct vmod_priv`` when the scope ends with that ``.priv`` pointer as its second argument besides a ``VRT_CTX``. The common case where a private data structure is allocated with malloc(3) would look like this:: static void myfree(VRT_CTX, void *p) { CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC); free (p); } static const struct vmod_priv_methods mymethods[1] = {{ .magic = VMOD_PRIV_METHODS_MAGIC, .type = "mystate", .fini = myfree }}; // .... if (priv->priv == NULL) { priv->priv = calloc(1, sizeof(struct myfoo)); AN(priv->priv); priv->methods = mymethods; mystate = priv->priv; mystate->foo = 21; ... } else { mystate = priv->priv; } if (foo > 25) { ... } Private Pointers Memory Management ---------------------------------- The generic malloc(3) / free(3) approach documented above works for all private pointers. It is the simplest and less error prone (as long as allocated memory is properly freed though the fini callback), but comes at the cost of calling into the heap memory allocator. Per-vmod constant data structures can be assigned to any private pointer type, but, obviously, free(3) must not be used on them. Dynamic data stored in ``PRIV_TASK`` and ``PRIV_TOP`` pointers can also come from the workspace: * For ``PRIV_TASK``, any allocation from ``ctx->ws`` works, like so:: if (priv->priv == NULL) { priv->priv = WS_Alloc(ctx->ws, sizeof(struct myfoo)); if (priv->priv == NULL) { VRT_fail(ctx, "WS_Alloc failed"); return (...); } priv->methods = mymethods; mystate = priv->priv; mystate->foo = 21; ... * For ``PRIV_TOP``, first of all keep in mind that it must only be used from the client context, so vmod code should error out for ``ctx->req == NULL``. For dynamic data, the *top request's* workspace must be used, which complicates things a bit:: if (priv->priv == NULL) { struct ws *ws; CHECK_OBJ_NOTNULL(ctx->req, REQ_MAGIC); CHECK_OBJ_NOTNULL(ctx->req->top, REQTOP_MAGIC); CHECK_OBJ_NOTNULL(ctx->req->top->topreq, REQ_MAGIC); ws = ctx->req->top->topreq->ws; priv->priv = WS_Alloc(ws, sizeof(struct myfoo)); // ... same as above for PRIV_TASK Notice that allocations on the workspace do not need to be freed, their lifetime is the respective task. Private Pointers and Objects ---------------------------- ``PRIV_TASK`` and ``PRIV_TOP`` arguments to methods are not per object instance, but per vmod as for ordinary vmod functions. Thus, vmods requiring per-task / per top-request state for object instances need to implement other means to associate storage with object instances. This is what ``VRT_priv_task()`` / ``VRT_priv_task_get()`` and ``VRT_priv_top()`` / ``VRT_priv_top_get()`` are for: The non-get functions either return an existing ``PRIV_TASK`` / ``PRIV_TOP`` for a given ``void *`` argument or create one. They return ``NULL`` in case of an allocation failure. The ``_get()`` functions do not create a ``PRIV_*``, but return either an existing one or ``NULL``. 
By convention, private pointers for object instance are created on the address of the object, as in this example for a ``PRIV_TASK``:: VCL_VOID myvmod_obj_method(VRT_CTX, struct myvmod_obj *o) { struct vmod_priv *p; p = VRT_priv_task(ctx, o); // ... see above The ``PRIV_TOP`` case looks identical except for calling ``VRT_priv_top(ctx, o)`` in place of ``VRT_priv_task(ctx, o)``, but be reminded that the ``VRT_priv_top*()`` functions must only be called from client context (if ``ctx->req != NULL``). .. _ref-vmod-event-functions: Event functions =============== VMODs can have an "event" function which is called when a VCL which imports the VMOD is loaded or discarded. This corresponds to the ``VCL_EVENT_LOAD`` and ``VCL_EVENT_DISCARD`` events, respectively. In addition, this function will be called when the VCL temperature is changed to cold or warm, corresponding to the ``VCL_EVENT_COLD`` and ``VCL_EVENT_WARM`` events. The first argument to the event function is a VRT context. The second argument is the vmod_priv specific to this particular VCL, and if necessary, a VCL specific VMOD "fini" function can be attached to its "free" hook. The third argument is the event. If the VMOD has private global state, which includes any sockets or files opened, any memory allocated to global or private variables in the C-code etc, it is the VMODs own responsibility to track how many VCLs were loaded or discarded and free this global state when the count reaches zero. VMOD writers are *strongly* encouraged to release all per-VCL resources for a given VCL when it emits a ``VCL_EVENT_COLD`` event. You will get a chance to reacquire the resources before the VCL becomes active again and be notified first with a ``VCL_EVENT_WARM`` event. Unless a user decides that a given VCL should always be warm, an inactive VMOD will eventually become cold and should manage resources accordingly. An event function must return zero upon success. It is only possible to fail an initialization with the ``VCL_EVENT_LOAD`` or ``VCL_EVENT_WARM`` events. Should such a failure happen, a ``VCL_EVENT_DISCARD`` or ``VCL_EVENT_COLD`` event will be sent to the VMODs that succeeded to put them back in a cold state. The VMOD that failed will not receive this event, and therefore must not be left half-initialized should a failure occur. If your VMOD is running an asynchronous background job you can hold a reference to the VCL to prevent it from going cold too soon and get the same guarantees as backends with ongoing requests for instance. For that, you must acquire the reference by calling ``VRT_VCL_Prevent_Discard`` when you receive a ``VCL_EVENT_WARM`` and later calling ``VRT_VCL_Allow_Discard`` once the background job is over. Receiving a ``VCL_EVENT_COLD`` is your cue to terminate any background job bound to a VCL. You can find an example of VCL references in vmod-debug:: priv_vcl->vclref = VRT_VCL_Prevent_Discard(ctx, "vmod-debug"); ... VRT_VCL_Allow_Discard(&ctx, &priv_vcl->vclref); In this simplified version, you can see that you need at least a VCL-bound data structure like a ``PRIV_VCL`` or a VMOD object to keep track of the reference and later release it. 
You also have to provide a description, it will be printed to the user if they try to warm up a cooling VCL:: $ varnishadm vcl.list available auto/cooling 0 vcl1 active auto/warm 0 vcl2 $ varnishadm vcl.state vcl1 warm Command failed with error code 300 Failed Message: VCL vcl1 is waiting for: - vmod-debug In the case where properly releasing resources may take some time, you can opt for an asynchronous worker, either by spawning a thread and tracking it, or by using Varnish's worker pools. When to lock, and when not to lock ================================== Varnish is heavily multithreaded, so by default VMODs must implement their own locking to protect shared resources. When a VCL is loaded or unloaded, the event and priv->free are run sequentially all in a single thread, and there is guaranteed to be no other activity related to this particular VCL, nor are there init/fini activity in any other VCL or VMOD at this time. That means that the VMOD init, and any object init/fini functions are already serialized in sensible order, and won't need any locking, unless they access VMOD specific global state, shared with other VCLs. Traffic in other VCLs which also import this VMOD, will be happening while housekeeping is going on. Statistics Counters =================== Starting in Varnish 6.0, VMODs can define their own counters that appear in *varnishstat*. If you're using autotools, see the ``VARNISH_COUNTERS`` macro in varnish.m4 for documentation on getting your build set up. Counters are defined in a .vsc file. The ``VARNISH_COUNTERS`` macro calls *vsctool.py* to turn a *foo.vsc* file into *VSC_foo.c* and *VSC_foo.h* files, just like *vmodtool.py* turns *foo.vcc* into *vcc_foo_if.c* and *vcc_foo_if.h* files. Similarly to the VCC files, the generated VSC files give you a structure and functions that you can use in your VMOD's code to create and destroy the counters your defined. The *vsctool.py* tool also generates a *VSC_foo.rst* file that you can include in your documentation to describe the counters your VMOD has. The .vsc file looks like this: .. code-block:: none .. varnish_vsc_begin:: xkey :oneliner: xkey Counters :order: 70 Metrics from vmod_xkey .. varnish_vsc:: g_keys :type: gauge :oneliner: Number of surrogate keys Number of surrogate keys in use. Increases after a request that includes a new key in the xkey header. Decreases when a key is purged or when all cache objects associated with a key expire. .. varnish_vsc_end:: xkey Counters can have the following parameters: type The type of metric this is. Can be one of ``counter``, ``gauge``, or ``bitmap``. ctype The type that this counter will have in the C code. This can only be ``uint64_t`` and does not need to be specified. level The verbosity level of this counter. *varnishstat* will only show counters with a higher verbosity level than the one currently configured. Can be one of ``info``, ``diag``, or ``debug``. oneliner A short, one line description of the counter. group I don't know what this does. format Can be one of ``integer``, ``bytes``, ``bitmap``, or ``duration``. After these parameters, a counter can have a longer description, though this description has to be all on one line in the .vsc file. You should call ``VSC_*_New()`` when your VMOD is loaded and ``VSC_*_Destroy()`` when it is unloaded. See the generated ``VSC_*.h`` file for the full details about the structure that contains your counters. 
varnish-7.5.0/doc/sphinx/reference/vmod_blob.rst000066400000000000000000000002571457605730600217270ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. include:: ../include/vmod_blob.generated.rst varnish-7.5.0/doc/sphinx/reference/vmod_cookie.rst000066400000000000000000000002611457605730600222550ustar00rootroot00000000000000.. Copyright (c) 2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. include:: ../include/vmod_cookie.generated.rst varnish-7.5.0/doc/sphinx/reference/vmod_directors.rst000066400000000000000000000002641457605730600230050ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. include:: ../include/vmod_directors.generated.rst varnish-7.5.0/doc/sphinx/reference/vmod_h2.rst000066400000000000000000000002551457605730600213200ustar00rootroot00000000000000.. Copyright (c) 2024 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. include:: ../include/vmod_h2.generated.rst varnish-7.5.0/doc/sphinx/reference/vmod_proxy.rst000066400000000000000000000002611457605730600221650ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. include:: ../include/vmod_proxy.generated.rst varnish-7.5.0/doc/sphinx/reference/vmod_purge.rst000066400000000000000000000002611457605730600221260ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. include:: ../include/vmod_purge.generated.rst varnish-7.5.0/doc/sphinx/reference/vmod_std.rst000066400000000000000000000002631457605730600216000ustar00rootroot00000000000000.. Copyright (c) 2011-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. include:: ../include/vmod_std.generated.rst varnish-7.5.0/doc/sphinx/reference/vmod_unix.rst000066400000000000000000000002571457605730600217740ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. include:: ../include/vmod_unix.generated.rst varnish-7.5.0/doc/sphinx/reference/vmod_vtc.rst000066400000000000000000000002561457605730600216040ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. include:: ../include/vmod_vtc.generated.rst varnish-7.5.0/doc/sphinx/reference/vsl-query.rst000066400000000000000000000245771457605730600217460ustar00rootroot00000000000000.. Copyright (c) 2013-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _vsl-query(7): ========= vsl-query ========= ----------------------------- Varnish VSL Query Expressions ----------------------------- :Manual section: 7 OVERVIEW ======== The Varnish VSL Query Expressions extracts transactions from the Varnish shared memory log, and perform queries on the transactions before reporting matches. A transaction is a set of log lines that belongs together, e.g. a client request or a backend request. The API monitors the log, and collects all log records that make up a transaction before reporting on that transaction. 
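In practice, queries are passed to the log utilities together with a grouping mode, both of which are described below. For example, to report entire request groups where the request URL is exactly "/foo"::

    varnishlog -g request -q 'ReqURL eq "/foo"'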
Transactions can also be grouped, meaning backend transactions are reported together with the client transaction that initiated it. A query is run on a group of transactions. A query expression is true if there is a log record within the group that satisfies the condition. It is false only if none of the log records satisfies the condition. Query expressions can be combined using boolean functions. In addition to log records, it is possible to query transaction ids (vxid) in query. GROUPING ======== When grouping transactions, there is a hierarchy structure showing which transaction initiated what. The level increases by one on an 'initiated by' relation, so for example a backend transaction will have one higher level than the client transaction that initiated it on a cache miss. Request restart transactions don't get their level increased to make it predictable. Levels start counting at 1, except when using raw where it will always be 0. The grouping modes are: * ``session`` All transactions initiated by a client connection are reported together. Client connections are open ended when using HTTP keep-alives, so it is undefined when the session will be reported. If the transaction timeout period is exceeded an incomplete session will be reported. Non-transactional data (vxid == 0) is not reported. * ``request`` Transactions are grouped by request, where the set will include the request itself as well as any backend requests or ESI-subrequests. Session data and non-transactional data (vxid == 0) is not reported. * ``vxid`` Transactions are not grouped, so each vxid is reported in its entirety. Sessions, requests, ESI-requests and backend requests are all reported individually. Non-transactional data is not reported (vxid == 0). This is the default. * ``raw`` Every log record will make up a transaction of its own. All data, including non-transactional data will be reported. Transaction Hierarchy --------------------- Example transaction hierarchy using request grouping mode :: Lvl 1: Client request (cache miss) Lvl 2: Backend request Lvl 2: ESI subrequest (cache miss) Lvl 3: Backend request Lvl 3: Backend request (VCL restart) Lvl 3: ESI subrequest (cache miss) Lvl 4: Backend request Lvl 2: ESI subrequest (cache hit) MEMORY USAGE ============ The API will use pointers to shared memory log data as long as possible to keep memory usage at a minimum. But as the shared memory log is a ring buffer, data will get overwritten eventually, so the API creates local copies of referenced log data when varnishd comes close to overwriting still unreported content. This process avoids loss of log data in many scenarios, but it is not failsafe: Overruns where varnishd "overtakes" the log reader process in the ring buffer can still happen when API clients cannot keep up reading and/or copying, for instance due to output blocking. Though being unrelated to grouping in principle, copying of log data is particularly relevant for session grouping together with long lasting client connections - for this grouping, the logging API client process is likely to consume relevant amounts of memory. As the vxid grouping also logs (potentially long lasting) sessions, it is also likely to require memory for copies of log entries, but far less than session grouping. QUERY LANGUAGE ============== A query expression consists of record selection criteria, and optionally an operator and a value to match against the selected records. 
:: Additionally, a query expression can occur on the transaction itself rather than log records belonging to the transaction. :: vxid A ``vxid`` query allows you to directly target a specific transacion, whose id can be obtained from an ``X-Varnish`` HTTP header, the default "guru meditation" error page, or ``Begin`` and ``Link`` log records. A query must fit on a single line, but it is possible to pass multiple queries at once, one query per line. Empty lines are ignored, and the list of queries is treated as if the 'or' operator was used to combine them. For example this list of queries:: # catch varnish errors *Error # catch backend errors BerespStatus >= 500 is identical to this query:: (*Error) or (BerespStatus >= 500) Comments can be used and will be ignored, they start with the ``'#'`` character, which may be more useful when the query is read from a file. For very long queries that couldn't easily be split into multiple queries it is possible to break them into multiple lines with a backslash preceding an end of line. For example this query:: BerespStatus >= 500 is identical to this query:: BerespStatus \ >= \ 500 A backslash-newline sequence doesn't continue a comment on the next line and isn't allowed in a quoted string. Record selection criteria ------------------------- The record selection criteria determines what kind records from the transaction group the expression applies to. Syntax: :: {level}taglist:record-prefix[field] Taglist is mandatory, the other components are optional. The level limits the expression to a transaction at that level. If left unspecified, the expression is applied to transactions at all levels. Level is a positive integer or zero. If level is followed by a '+' character, it expresses greater than or equal. If level is followed by a '-', it expresses less than or equal. The taglist is a comma-separated list of VSL record tags that this expression should be checked against. Each list element can be a tag name or a tag glob. Globs allow a '*' either in the beginning of the name or at the end, and will select all tags that match either the prefix or subscript. A single '*' will select all tags. The record prefix will further limit the matches to those records that has this prefix as their first part of the record content followed by a colon. The part of the log record matched against will then be limited to what follows the prefix and colon. This is useful when matching against specific HTTP headers. The record prefix matching is done case insensitive. The field will, if present, treat the log record as a white space separated list of fields, and only the nth part of the record will be matched against. Fields start counting at 1. An expression using only a record selection criteria will be true if there is any record in the transaction group that is selected by the criteria. Operators --------- The following matching operators are available: * == != < <= > >= Numerical comparison. The record contents will be converted to either an integer or a float before comparison, depending on the type of the operand. * eq ne String comparison. 'eq' tests string equality, 'ne' tests for not equality. * ~ !~ Regular expression matching. '~' is a positive match, '!~' is a non-match. Operand ------- The operand is the value the selected records will be matched against. An operand can be quoted or unquoted. Quotes can be either single or double quotes, and for quoted operands a backslash can be used to escape the quotes. 
Unquoted operands can only consist of the following characters: :: a-z A-Z 0-9 + - _ . * The following types of operands are available: * Integer A number without any fractional part, valid for the numerical comparison operators. The integer type is used when the operand does not contain any period (.) nor exponent (e) characters. However if the record evaluates as a float, only its integral part is used for the comparison. * Float A number with a fractional part, valid for the numerical comparison operators. The float type is used when the operand does contain a period (.) or exponent (e) character. * String A sequence of characters, valid for the string equality operators. * Regular expression A PCRE2 regular expression. Valid for the regular expression operators. Boolean functions ----------------- Query expressions can be linked together using boolean functions. The following are available, in decreasing precedence: * not Inverts the result of * and True only if both expr1 and expr2 are true * or True if either of expr1 or expr2 is true Expressions can be grouped using parenthesis. QUERY EXPRESSION EXAMPLES ========================= * Transaction group contains a request URL that equals to "/foo" :: ReqURL eq "/foo" * Transaction group contains a request cookie header :: ReqHeader:cookie * Transaction group doesn't contain a request cookie header :: not ReqHeader:cookie * Client request where internal handling took more than 800ms.:: Timestamp:Process[2] > 0.8 * Transaction group contains a request user-agent header that contains "iPod" and the request delivery time exceeds 1 second :: ReqHeader:user-agent ~ "iPod" and Timestamp:Resp[2] > 1. * Transaction group contains a backend response status larger than or equal to 500 :: BerespStatus >= 500 * Transaction group contains a request response status of 304, but where the request did not contain an if-modified-since header :: RespStatus == 304 and not ReqHeader:if-modified-since * Transactions that have had backend failures or long delivery time on their ESI subrequests. (Assumes request grouping mode). :: BerespStatus >= 500 or {2+}Timestamp:Process[2] > 1. * Log non-transactional errors. (Assumes raw grouping mode). :: vxid == 0 and Error HISTORY ======= This document was initially written by Martin Blix Grydeland and amended by others. COPYRIGHT ========= This document is licensed under the same licence as Varnish itself. See LICENCE for details. * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS varnish-7.5.0/doc/sphinx/reference/vsl.rst000066400000000000000000000122501457605730600205640ustar00rootroot00000000000000.. Copyright (c) 2011-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _vsl(7): === VSL === ----------------------------- Varnish Shared Memory Logging ----------------------------- :Manual section: 7 OVERVIEW ======== This document describes the format and content of all the Varnish shared memory logging tags. These tags are used by the varnishlog(1), varnishtop(1), etc. logging tools supplied with Varnish. VSL tags ~~~~~~~~ .. include:: ../include/vsl-tags.rst TIMESTAMPS ========== Timestamps are inserted in the log on completing certain events during the worker thread's task handling. The timestamps has a label showing which event was completed. The reported fields show the absolute time of the event, the time spent since the start of the task and the time spent since the last timestamp was logged. 
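For example, a delivery timestamp record might look like this in the log (the numbers are purely illustrative)::

    Timestamp      Resp: 1612787907.391394 0.169463 0.000063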
The timestamps logged automatically by Varnish are inserted after completing events that are expected to have delays (e.g. network IO or spending time on a waitinglist). Timestamps can also be inserted from VCL using the std.timestamp() function. If one is doing time consuming tasks in the VCL configuration, it's a good idea to log a timestamp after completing that task. This keeps the timing information in subsequent timestamps from including the time spent on the VCL event. Request handling timestamps ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Start The start of request processing (first byte received or restart). Req Complete client request received. ReqBody Client request body processed (discarded, cached or passed to the backend). Waitinglist Came off waitinglist. Fetch Fetch processing finished (completely fetched or ready for streaming). Process Processing finished, ready to deliver the client response. Resp Delivery of response to the client finished. Restart Client request is being restarted. Reset The client closed its connection, reset its stream or caused a stream error that forced Varnish to reset the stream. Request processing is interrupted and considered failed, with a 408 "Request Timeout" status code. Pipe handling timestamps ~~~~~~~~~~~~~~~~~~~~~~~~ The following timestamps are client timestamps specific to pipe transactions: Pipe Opened a pipe to the backend and forwarded the request. PipeSess The pipe session has finished. The following timestamps change meaning in a pipe transaction: Process Processing finished, ready to begin the pipe delivery. Backend fetch timestamps ~~~~~~~~~~~~~~~~~~~~~~~~ Start Start of the backend fetch processing. Fetch Came off vcl_backend_fetch ready to send the backend request. Connected Successfully established a backend connection, or selected a recycled connection from the pool. Bereq Backend request sent. Beresp Backend response headers received. Process Processing finished, ready to fetch the response body. BerespBody Backend response body received. Retry Backend request is being retried. Error Backend request failed to vcl_backend_error. NOTICE MESSAGES =============== Notice messages contain informational messages about the handling of a request. These can be exceptional circumstances encountered that causes deviation from the normal handling. The messages are prefixed with ``vsl:`` for core Varnish generated messages, and VMOD authors are encouraged to use ``vmod_:`` for their own notice messages. This matches the name of the manual page where detailed descriptions of notice messages are expected. The core messages are described below. Conditional fetch wait for streaming object ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The backend answered 304 Not Modified on a conditional fetch using an object that has not yet been fully fetched as the stale template object. This can only happen when the TTL of the object is less than the time it takes to fetch it. The fetch is halted until the stale object is fully fetched, upon which the new object is created as normal. While waiting, any grace time on the stale object will be in effect. High number of variants ~~~~~~~~~~~~~~~~~~~~~~~ Objects are primarily looked up from an index using the hash key computed from the ``hash_data()`` VCL function. When variants are involved, that is to say when a backend response was stored with a ``Vary`` header, a secondary lookup is performed but it is not indexed. 
As the number of variants for a given key increases, this can slow cache lookups down, and since this happens under a lock, this can greatly increase lock contention, even more so for frequently requested objects. Variants should therefore be used sparingly on cacheable responses, but since they can originate from either VCL or origin servers, this notice should help identify problematic resources. HISTORY ======= This document was initially written by Poul-Henning Kamp, and later updated by Martin Blix Grydeland. SEE ALSO ======== * :ref:`varnishhist(1)` * :ref:`varnishlog(1)` * :ref:`varnishncsa(1)` * :ref:`varnishtop(1)` COPYRIGHT ========= This document is licensed under the same licence as Varnish itself. See LICENCE for details. * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2015 Varnish Software AS varnish-7.5.0/doc/sphinx/reference/vsm.rst000066400000000000000000000077271457605730600206020ustar00rootroot00000000000000.. Copyright (c) 2011-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% VSM: Shared Memory Logging and Statistics %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Varnish uses shared memory to export parameters, logging and statistics, because it is faster and much more efficient than regular files. "Varnish Shared Memory" or VSM, is the overall mechanism maintaining sets of shared memory files, each consisting a chunk of memory identified by a two-part name (class, ident). The Class indicates what type of data is stored in the chunk, for instance "Arg" for command line arguments useful for establishing an CLI connection to the varnishd, "Stat" for statistics counters (VSC) and "Log" for log records (VSL). The ident name part is mostly used with stats counters, where they identify dynamic counters, such as: SMA.Transient.c_bytes Shared memory trickery ---------------------- Shared memory is faster than regular files, but it is also slightly tricky in ways a regular logfile is not. When you open a file in "append" mode, the operating system guarantees that whatever you write will not overwrite existing data in the file. The neat result of this is that multiple processes or threads writing to the same file does not even need to know about each other, it all works just as you would expect. With a shared memory log, we get no such help from the kernel, the writers need to make sure they do not stomp on each other, and they need to make it possible and safe for the readers to access the data. The "CS101" way to deal with that, is to introduce locks, and much time is spent examining the relative merits of the many kinds of locks available. Inside the varnishd (worker) process, we use mutexes to guarantee consistency, both with respect to allocations, log entries and stats counters. We do not want a varnishncsa trying to push data through a stalled ssh connection to stall the delivery of content, so readers like that are purely read-only, they do not get to affect the varnishd process and that means no locks for them. Instead we use "stable storage" concepts, to make sure the view seen by the readers is consistent at all times. As long as you only add stuff, that is trivial, but taking away stuff, such as when a backend is taken out of the configuration, we need to give the readers a chance to discover this, a "cooling off" period. The Varnish way: ---------------- .. 
XXX: not yet up to date with VSM new world order When varnishd starts, it opens locked shared memory files, advising to use different -n arguments if an attempt is made to run multiple varnishd instances with the same name. Child processes each use their own shared memory files, since a worker process restart marks a clean break in operation anyway. To the extent possible, old shared memory files are marked as abandoned by setting the alloc_seq field to zero, which should be monitored by all readers of the VSM. Processes subscribing to VSM files for a long time, should notice if the VSM file goes "silent" and check that the file has not been renamed due to a child restart. Chunks inside the shared memory file form a linked list, and whenever that list changes, the alloc_seq field changes. The linked list and other metadata in the VSM file, works with offsets relative to the start address of where the VSM file is memory mapped, so it need not be mapped at any particular address. When new chunks are allocated, for instance when a new backend is added, they are appended to the list, no matter where they are located in the VSM file. When a chunk is freed, it will be taken out of the linked list of allocations, its length will be set to zero and alloc_seq will be changed to indicate a change of layout. For the next 60 seconds the chunk will not be touched or reused, giving other subscribers a chance to discover the deallocation. The include file provides the supported API for accessing VSM files. varnish-7.5.0/doc/sphinx/reference/vtc.rst000066400000000000000000000073121457605730600205570ustar00rootroot00000000000000.. Copyright (c) 2016-2017 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _vtc(7): === VTC === ------------------------ Varnish Test Case Syntax ------------------------ :Manual section: 7 OVERVIEW ======== This document describes the syntax used by Varnish Test Cases files (.vtc). A vtc file describe a scenario with different scripted HTTP-talking entities, and generally one or more Varnish instances to test. PARSING ======= A vtc file will be read word after word, with very little tokenization, meaning a syntax error won't be detected until the test actually reach the relevant action in the test. A parsing error will most of the time result in an assert being triggered. If this happens, please refer yourself to the related source file and line number. However, this guide should help you avoid the most common mistakes. Words and strings ----------------- The parser splits words by detecting whitespace characters and a string is a word, or a series of words on the same line enclosed by double-quotes ("..."), or, for multi-line strings, enclosed in curly brackets ({...}). Comments -------- The leading whitespaces of lines are ignored. Empty lines (or ones consisting only of whitespaces) are ignored too, as are the lines starting with "#" that are comments. Lines and commands ------------------ Test files take at most one command per line, with the first word of the line being the command and the following ones being its arguments. To continue over to a new line without breaking the argument string, you can escape the newline character (\\n) with a backslash (\\). .. _vtc-macros: MACROS ====== When a string is processed, macro expansion is performed. Macros are in the form ``${[,...]}``, they have a name followed by an optional comma- or space-separated list of arguments. 
Leading and trailing spaces are ignored. The macros ``${foo,bar,baz}`` and ``${ foo bar baz }`` are equivalent. If an argument contains a space or a comma, arguments can be quoted. For example the macro ``${foo,"bar,baz"}`` gives one argument ``bar,baz`` to the macro called ``foo``. Unless documented otherwise, all macros are simple macros that don't take arguments. Built-in macros --------------- ``${bad_backend}`` A socket address that will reliably never accept connections. ``${bad_ip}`` An unlikely IPv4 address. ``${date}`` The current date and time formatted for HTTP. ``${listen_addr}`` The default listen address various components use, by default a random port on localhost. ``${localhost}`` The first IP address that resolves to "localhost". ``${pwd}`` The working directory from which ``varnishtest`` was executed. ``${string,[,...]}`` The ``string`` macro is the entry point for text generation, it takes a specialized action with each its own set of arguments. ``${string,repeat,,}`` Repeat ``uint`` times the string ``str``. ``${testdir}`` The directory containing the VTC script of the ongoing test case execution. ``${tmpdir}`` The dedicated working directory for the ongoing test case execution, which happens to also be the current working directory. Useful when an absolute path to the working directory is needed. ``${topbuild}`` Only present when the ``-i`` option is used, to work on Varnish itself instead of a regular installation. SYNTAX ====== .. include:: ../include/vtc-syntax.rst HISTORY ======= This document has been written by Guillaume Quintard. SEE ALSO ======== * :ref:`varnishtest(1)` * :ref:`vmod_vtc(3)` COPYRIGHT ========= This document is licensed under the same licence as Varnish itself. See LICENCE for details. * Copyright (c) 2006-2016 Varnish Software AS varnish-7.5.0/doc/sphinx/reference/vtla.rst000066400000000000000000000071471457605730600207370ustar00rootroot00000000000000.. Copyright (c) 2019-2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. role:: ref(emphasis) .. _vtla: ==================================== VTLA - Varnish Three Letter Acronyms ==================================== Very early in the project, we made a fortunate bargain on eBay, buying up a batch of 676 three letter acronyms, all starting with 'V'. This page tells you what we use them for, if & when we remember to add them... VAV Varnish Arg Vector -- Argv parsing. VBE Varnish Back End -- Code for contacting backends (bin/varnishd/cache_backend.c) VBP Varnish Backend Polling -- Health checks of backends (bin/varnishd/cache_backend_poll.c) VCA Varnish Connection Acceptor -- The code that receives/accepts the TCP connections (bin/varnishd/cache_acceptor.c) VCC VCL to C Compiler -- The code that compiles VCL to C code. (lib/libvcl) VCF Varnish CatFlap VCL Varnish Configuration Language -- The domain-specific programming language used for configuring a varnishd. VCT Varnish CType(3) -- Character classification for RFC2616 and XML parsing. VDD Varnish (Core) Developer Day -- Quarterly invite-only meeting strictly for Varnish core (C) developers, packagers and VMOD hackers. VENC Varnish ENCoding -- base64 functions VEND Varnish ENDianess -- functions to marshall data in specified endianess VEV Varnish EVent -- library functions to implement a simple event-dispatcher. VEXT Varnish Extension -- Shared library loaded into the child process. VGB Varnish Governing Board -- May or may not exist. If you need to ask, you are not on it. 
VGC Varnish Generated Code -- Code generated by VCC from VCL. VIN Varnish Instance Naming -- Resolution of -n arguments. VLU Varnish Line Up -- library functions to collect stream of bytes into lines for processing. (lib/libvarnish/vlu.c) VPI VCC Private Interface -- functions in varnishd which only VCC is allowed to call. VRE Varnish Regular Expression -- library functions for regular expression based matching and substring replacement. (lib/libvarnish/vre.c) VRT Varnish Run Time -- functions called from compiled code. (bin/varnishd/cache_vrt.c) VRY VaRY -- Related to processing of Vary: HTTP headers. (bin/varnishd/cache_vary.c) VSL Varnish Shared memory Log -- The log written into the shared memory segment for varnish{log,ncsa,top,hist} to see. VSB Varnish string Buffer -- a copy of the FreeBSD "sbuf" library, for safe string handling. VSC Varnish Statistics Counter -- counters for various stats, exposed via varnishapi. VSS Varnish Session Stuff -- library functions to wrap DNS/TCP. (lib/libvarnish/vss.c) VTC Varnish Test Code -- a test-specification for the varnishtest program. VTE Varnish Turbo Encabulator VTLA Varnish Three Letter Acronym -- No rule without an exception. VUG Varnish User Group meeting -- Half-yearly event where the users and developers of Varnish Cache gather to share experiences and plan future development. VWx Varnish Waiter 'x' -- A code module to monitor idle sessions. VWE Varnish Waiter Epoll -- epoll(2) (linux) based waiter module. VWK Varnish Waiter Kqueue -- kqueue(2) (freebsd) based waiter module. VWP Varnish Waiter Poll -- poll(2) based waiter module. VWS Varnish Waiter Solaris -- Solaris ports(2) based waiter module. COPYRIGHT ========= This document is licensed under the same licence as Varnish itself. See LICENCE for details. * Copyright (c) 2019 Varnish Software AS varnish-7.5.0/doc/sphinx/tutorial/000077500000000000000000000000001457605730600171335ustar00rootroot00000000000000varnish-7.5.0/doc/sphinx/tutorial/backend_servers.rst000066400000000000000000000032011457605730600230210ustar00rootroot00000000000000.. Copyright (c) 2010-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _tutorial-backend_servers: Backend servers --------------- Varnish has a concept of `backend` or origin servers. A backend server is the server providing the content Varnish will accelerate via the cache. Our first task is to tell Varnish where it can find its content. Start your favorite text editor and open the Varnish default configuration file. If you installed from source this is `/usr/local/etc/varnish/default.vcl`, if you installed from a package it is probably `/etc/varnish/default.vcl`. If you've been following the tutorial there is probably a section of the configuration that looks like this:: vcl 4.0; backend default { .host = "www.varnish-cache.org"; .port = "80"; } This means we set up a backend in Varnish that fetches content from the host www.varnish-cache.org on port 80. Since you probably don't want to be mirroring varnish-cache.org we need to get Varnish to fetch content from your own origin server. We've already bound Varnish to the public port 80 on the server so now we need to tie it to the origin. 
For this example, let's pretend the origin server is running on localhost, port 8080.:: vcl 4.0; backend default { .host = "127.0.0.1"; .port = "8080"; } Varnish can have several backends defined and can even join several backends together into clusters of backends for load balancing purposes, having Varnish pick one backend based on different algorithms. Next, let's have a look at some of what makes Varnish unique and what you can do with it. varnish-7.5.0/doc/sphinx/tutorial/index.rst000066400000000000000000000013171457605730600207760ustar00rootroot00000000000000.. Copyright (c) 2010-2014 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _tutorial-index: The Varnish Tutorial ==================== This section covers the Varnish basics in a tutorial form. It will cover what Varnish is and how it works. It also covers how to get Varnish up and running. After this section you probably would want to continue with the users guide (:ref:`users-guide-index`). If you're reading this on the web note the "Next topic" and "Previous topic" links on the right side of each page. .. toctree:: :maxdepth: 1 introduction starting_varnish putting_varnish_on_port_80 backend_servers peculiarities.rst now_what varnish-7.5.0/doc/sphinx/tutorial/introduction.rst000066400000000000000000000161251457605730600224130ustar00rootroot00000000000000.. Copyright (c) 2012-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _tutorial-intro: Varnish: The beef in the sandwich --------------------------------- You may have heard the term "web-delivery-sandwich" used in relation to Varnish, and it is a pretty apt metafor:: ┌─────────┠│ browser │ └─────────┘ ┌─────────┠\ ┌─────────â”│ ┌─────┠╔â•â•â•â•â•â•â•â•â•â•— ┌─────┠┌─────────┠┌─────────â”│┘ │ app │ --- â•‘ Network â•‘ -- │ TLS │ -- │ Varnish │ -- │ Backend │┘ └─────┘ ╚â•â•â•â•â•â•â•â•â•╠└─────┘ └─────────┘ └─────────┘ / ┌────────────┠│ API-client │ └────────────┘ The top layer of the sandwich, 'TLS' is responsible for handling the TLS ("https") encryption, which means it must have access to the cryptographic certificate which authenticates your website. The bottom layer of the sandwich are your webservers, CDNs, API-servers, business backend systems and all the other sources for your web-content. Varnish goes in the middle, where it provides caching, policy, analytics, visibility and mitigation for your webtraffic. How Varnish works ----------------- For each and every request, Varnish runs through the 'VCL' program to decide what should happen: Which backend has this content, how long time can we cache it, is it accessible for this request, should it be redirected elsewhere and so on. If that particular backend is down, varnish can find another or substitute different content until it comes back up. Your first VCL program will probably be trivial, for instance just splitting the traffic between two different backend servers:: sub vcl_recv { if (req.url ~ "^/wiki") { set req.backend_hint = wiki_server; } else { set req.backend_hint = wordpress_server; } } When you load the VCL program into Varnish, it is compiled into a C-program which is compiled into a shared library, which varnish then loads and calls into, therefore VCL code is *fast*. 
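Loading and switching VCL programs happens at runtime through the CLI; a minimal example, with a made-up label and file name, could be::

    varnishadm vcl.load mypolicy /etc/varnish/mypolicy.vcl
    varnishadm vcl.use mypolicy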
Everything Varnish does is recorded in 'VSL' log records which can be examined and monitored in real time or recorded for later use in native or NCSA format, and when we say 'everything' we mean *everything*:: * << Request >> 318737 - Begin req 318736 rxreq - Timestamp Start: 1612787907.221931 0.000000 0.000000 - Timestamp Req: 1612787907.221931 0.000000 0.000000 - VCL_use boot - ReqStart 192.0.2.24 39698 a1 - ReqMethod GET - ReqURL /vmods/ - ReqProtocol HTTP/1.1 - ReqHeader Host: varnish-cache.org - ReqHeader Accept: text/html, application/rss+xml, […] - ReqHeader Accept-Encoding: gzip,deflate - ReqHeader Connection: close - ReqHeader User-Agent: Mozilla/5.0 […] - ReqHeader X-Forwarded-For: 192.0.2.24 - VCL_call RECV - VCL_acl NO_MATCH bad_guys - VCL_return hash […] These `VSL` log records are written to a circular buffer in shared memory, from where other programs can subscribe to them via a supported API. One such program is `varnishncsa` which produces NCSA-style log records:: 192.0.2.24 - - [08/Feb/2021:12:42:35 +0000] "GET http://vmods/ HTTP/1.1" 200 0 […] Varnish is also engineered for uptime, it is not necessary to restart varnish to change the VCL program, in fact, multiple VCL programs can be loaded at the same time and you can switch between them instantly. Caching with Varnish -------------------- When Varnish receives a request, VCL can decide to look for a reusable answer in the cache, if there is one, that becomes one less request to put load on your backend applications database. Cache-hits take less than a millisecond, often mere microseconds, to deliver. If there is nothing usable in the cache, the answer from the backend can, again under VCL control, be put in the cache for some amount of time, so future requests for the same object can find it there. Varnish understands the `Cache-Control` HTTP header if your backend server sends one, but ultimately the VCL program makes the decision to cache and how long, and if you want to send a different `Cache-Control` header to the clients, VCL can do that too. Content Composition with Varnish -------------------------------- Varnish supports `ESI - Edge Side Includes` which makes it possible to send responses to clients which are composed of different bits from different backends, with the very important footnote that the different bits can have very different caching policies. With ESI a backend can tell varnish to edit the content of another object into a HTML page::
    <HTML>
    <BODY>
    Todays Top News
    <ESI:include src="/topnews"/>
    </BODY>
    </HTML>
The `/topnews` request will be handled like every other request in Varnish, VCL will decide if it can be cached, which backend should supply it and so on, so even if the whole object in the example can not be cached, for instance if the page is dynamic content for a logged-in user, the `/topnews` object can be cached and can be shared from the cache, between all users. Content Policy with Varnish --------------------------- Because VCL is in full control of every request, and because VCL can be changed instantly on the fly, Varnish is a great tool to implement both reactive and prescriptive content-policies. Prescriptive content-policies can be everything from complying with UN sanctions using IP number access lists over delivering native language content to different clients to closing access to employee web-mail in compliance with "Right to disconnect" laws. Varnish, and VCL is particular, are well suited to sort requests and collect metrics for real-time A/B testing or during migrations to a new backend system. Reactive content-policies can be anything from blocking access to an infected backend or fixing the URL from the QR code on the new product, to extending caching times while the backend rebuilds the database. Varnish is general purpose -------------------------- Varnish is written to run on modern UNIX-like operating systems: Linux, FreeBSD, OS/X, OpenBSD, NetBSD, Solaris, OmniOs, SmartOS etc. Varnish runs on any CPU architecture: i386, amd64, arm32, arm64, mips, power, riscV, s390 - you name it. Varnish can be deployed on dedicated hardware, in VMs, jails, Containers, Cloud, as a service or any other way you may care for. Unfortunately the `sudo make me a sandwich`_ feature is not ready yet, so you will have to do that yourself but click on "Next topic" in the navigation menu on the left and we'll tell you the recipe... .. _sudo make me a sandwich: https://xkcd.com/149/ varnish-7.5.0/doc/sphinx/tutorial/now_what.rst000066400000000000000000000006731457605730600215210ustar00rootroot00000000000000.. Copyright (c) 2012-2014 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license ========= Now what? ========= You've read through the tutorial. You should have Varnish up and running. You should know about the logs and you should have a rough idea of what VCL is. Next, you might want to have a look at :ref:`users-guide-index`, where we go through the features of Varnish in more detail. varnish-7.5.0/doc/sphinx/tutorial/peculiarities.rst000066400000000000000000000032271457605730600225330ustar00rootroot00000000000000.. Copyright (c) 2013-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Peculiarities ------------- There are a couple of things that are different with Varnish Cache, as opposed to other programs. One thing you've already seen - VCL. In this section we provide a very quick tour of other peculiarities you need to know about to get the most out of Varnish. Configuration ~~~~~~~~~~~~~ The Varnish Configuration is written in VCL. When Varnish is ran this configuration is transformed into C code and then fed into a C compiler, loaded and executed. .. XXX:Ran sounds strange above, maybe "is running" "is started" "executes"? benc So, as opposed to switching various settings on or off, you write polices on how the incoming traffic should be handled. varnishadm ~~~~~~~~~~ Varnish Cache has an admin console. You can connect it through the :ref:`varnishadm(1)` command. 
In order to connect the user needs to be able to read `/etc/varnish/secret` in order to authenticate. Once you've started the console you can do quite a few operations on Varnish, like stopping and starting the cache process, load VCL, adjust the built in load balancer and invalidate cached content. It has a built in command "help" which will give you some hints on what it does. .. XXX:sample of the command here. benc varnishlog ~~~~~~~~~~ Varnish does not log to disk. Instead it logs to a chunk of memory. It is actually streaming the logs. At any time you'll be able to connect to the stream and see what is going on. Varnish logs quite a bit of information. You can have a look at the logstream with the command :ref:`varnishlog(1)`. varnish-7.5.0/doc/sphinx/tutorial/putting_varnish_on_port_80.rst000066400000000000000000000036601457605730600251650ustar00rootroot00000000000000.. Copyright (c) 2010-2017 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Put Varnish on port 80 ---------------------- Until now we've been running with Varnish on a high port which is great for testing purposes. Let's now put Varnish on the default HTTP port 80. First we stop varnish: ``service varnish stop`` Now we need to edit the configuration file that starts Varnish. Debian/Ubuntu (legacy) ~~~~~~~~~~~~~~~~~~~~~~ On older Debian/Ubuntu this is `/etc/default/varnish`. In the file you'll find some text that looks like this:: DAEMON_OPTS="-a :6081 \ -T localhost:6082 \ -f /etc/varnish/default.vcl \ -S /etc/varnish/secret \ -s default,256m" Change it to:: DAEMON_OPTS="-a :80 \ -T localhost:6082 \ -f /etc/varnish/default.vcl \ -S /etc/varnish/secret \ -s default,256m" Debian (v8+) / Ubuntu (v15.04+) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ On more recent Debian and Ubuntu systems this is configured in the systemd service file. Applying changes to the default service is best done by creating a new file `/etc/systemd/system/varnish.service.d/customexec.conf`:: [Service] ExecStart= ExecStart=/usr/sbin/varnishd -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s default,256m This will override the ExecStart part of the default configuration shipped with Varnish Cache. Run ``systemctl daemon-reload`` to make sure systemd picks up the new configuration before restarting Varnish. Red Hat Enterprise Linux / CentOS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ On Red Hat/CentOS you can find a similar configuration file in `/etc/sysconfig/varnish`. Restarting Varnish again ------------------------ Once the change is done, restart Varnish: ``service varnish start``. Now everyone accessing your site will be accessing through Varnish. varnish-7.5.0/doc/sphinx/tutorial/starting_varnish.rst000066400000000000000000000050271457605730600232560ustar00rootroot00000000000000.. Copyright (c) 2010-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _tutorial-starting_varnish: Starting Varnish ---------------- This tutorial will assume that you are running Varnish on Ubuntu, Debian, Red Hat Enterprise Linux or CentOS. Those of you running on other platforms might have to do some mental translation exercises in order to follow this. Since you're on a "weird" platform you're probably used to it. :-) Make sure you have Varnish successfully installed (following one of the procedures described in "Installing Varnish" above. When properly installed you start Varnish with ``service varnish start``. 
This will start Varnish if it isn't already running. .. XXX:What does it do if it is already running? benc Now you have Varnish running. Let us make sure that it works properly. Use your browser to go to http://127.0.0.1:6081/ (Replace the IP address with the IP for the machine that runs Varnish) The default configuration will try to forward requests to a web application running on the same machine as Varnish was installed on. Varnish expects the web application to be exposed over http on port 8080. If there is no web application being served up on that location Varnish will issue an error. Varnish Cache is very conservative about telling the world what is wrong so whenever something is amiss it will issue the same generic "Error 503 Service Unavailable". You might have a web application running on some other port or some other machine. Let's edit the configuration and make it point to something that actually works. Fire up your favorite editor and edit `/etc/varnish/default.vcl`. Most of it is commented out but there is some text that is not. It will probably look like this:: vcl 4.0; backend default { .host = "127.0.0.1"; .port = "8080"; } We'll change it and make it point to something that works. Hopefully http://www.varnish-cache.org/ is up. Let's use that. Replace the text with:: vcl 4.0; backend default { .host = "www.varnish-cache.org"; .port = "80"; } Now issue ``service varnish reload`` to make Varnish reload it's configuration. If that succeeded visit http://127.0.0.1:6081/ in your browser and you should see some directory listing. It works! The reason you're not seeing the Varnish official website is because your client isn't sending the appropriate `Host` header in the request and it ends up showing a listing of the default webfolder on the machine usually serving up http://www.varnish-cache.org/ . varnish-7.5.0/doc/sphinx/users-guide/000077500000000000000000000000001457605730600175245ustar00rootroot00000000000000varnish-7.5.0/doc/sphinx/users-guide/command-line.rst000066400000000000000000000043761457605730600226330ustar00rootroot00000000000000.. Copyright (c) 2012-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _users-guide-command-line: Required command line arguments ------------------------------- There a two command line arguments you have to set when starting Varnish, these are: * what TCP port to serve HTTP from, and * where the backend server can be contacted. If you have installed Varnish through using a provided operating system bound package, you will find the startup options here: * Debian, Ubuntu: `/etc/default/varnish` * Red Hat, Centos: `/etc/sysconfig/varnish` * FreeBSD: `/etc/rc.conf` (See also: /usr/local/etc/rc.d/varnishd) '-a' *listen_address* ^^^^^^^^^^^^^^^^^^^^^ The '-a' argument defines what address Varnish should listen to, and service HTTP requests from. You will most likely want to set this to ":80" which is the Well Known Port for HTTP. You can specify multiple addresses separated by a comma, and you can use numeric or host/service names if you like, Varnish will try to open and service as many of them as possible, but if none of them can be opened, `varnishd` will not start. Here are some examples:: -a :80 -a localhost:80 -a 192.168.1.100:8080 -a '[fe80::1]:80' -a '0.0.0.0:8080,[::]:8081' .. XXX:brief explanation of some of the more complex examples perhaps? benc If your webserver runs on the same machine, you will have to move it to another port number first. 
'-f' *VCL-file* or '-b' *backend* ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Varnish needs to know where to find the HTTP server it is caching for. You can either specify it with the '-b' argument, or you can put it in your own VCL file, specified with the '-f' argument. Using '-b' is a quick way to get started:: -b localhost:81 -b thatotherserver.example.com:80 -b 192.168.1.2:80 Notice that if you specify a name, it can at most resolve to one IPv4 *and* one IPv6 address. For more advanced use, you will want to specify a VCL program with ``-f``, but you can start with as little as just:: backend default { .host = "localhost:81"; } which is, by the way, *precisely* what '-b' does. Optional arguments ^^^^^^^^^^^^^^^^^^ For a complete list of the command line arguments please see :ref:`varnishd(1) options `. varnish-7.5.0/doc/sphinx/users-guide/compression.rst000066400000000000000000000100101457605730600226070ustar00rootroot00000000000000.. Copyright (c) 2012-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _users-guide-compression: Compression ----------- In Varnish 3.0 we introduced native support for compression, using gzip encoding. *Before* 3.0, Varnish would never compress objects. In Varnish 4.0 compression defaults to "on", meaning that it tries to be smart and do the sensible thing. If you don't want Varnish tampering with the encoding you can disable compression all together by setting the parameter `http_gzip_support` to false. Please see man :ref:`varnishd(1)` for details. Default behaviour ~~~~~~~~~~~~~~~~~ The default behaviour is active when the `http_gzip_support` parameter is set to "on" and neither `beresp.do_gzip` nor `beresp.do_gunzip` are used in VCL. Unless returning from `vcl_recv` with `pipe` or `pass`, Varnish modifies `req.http.Accept-Encoding`: if the client supports gzip `req.http.Accept-Encoding` is set to "gzip", otherwise the header is removed. Unless the request is a `pass`, Varnish sets `bereq.http.Accept-Encoding` to "gzip" before `vcl_backend_fetch` runs, so the header can be changed in VCL. If the server responds with gzip'ed content it will be stored in memory in its compressed form and `Accept-Encoding` will be added to the `Vary` header. To clients supporting gzip, compressed content is delivered unmodified. For clients not supporting gzip, compressed content gets decompressed on the fly while delivering it. The `Content-Encoding` response header gets removed and any `Etag` gets weakened (by prepending "W/"). For Vary Lookups, `Accept-Encoding` is ignored. Compressing content if backends don't ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can tell Varnish to compress content before storing it in cache in `vcl_backend_response` by setting `beresp.do_gzip` to "true", like this:: sub vcl_backend_response { if (beresp.http.content-type ~ "text") { set beresp.do_gzip = true; } } With `beresp.do_gzip` set to "true", Varnish will make the following changes to the headers of the resulting object before inserting it in the cache: * set `obj.http.Content-Encoding` to "gzip" * add "Accept-Encoding" to `obj.http.Vary`, unless already present * weaken any `Etag` (by prepending "W/") Generally, Varnish doesn't use much CPU so it might make more sense to have Varnish spend CPU cycles compressing content than doing it in your web- or application servers, which are more likely to be CPU-bound. Please make sure that you don't try to compress content that is uncompressable, like JPG, GIF and MP3 files. 
You'll only waste CPU cycles. Uncompressing content before entering the cache ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can also uncompress content before storing it in cache by setting `beresp.do_gunzip` to "true". One use case for this feature is to work around badly configured backends uselessly compressing already compressed content like JPG images (but fixing the misbehaving backend is always the better option). With `beresp.do_gunzip` set to "true", Varnish will make the following changes to the headers of the resulting object before inserting it in the cache: * remove `obj.http.Content-Encoding` * weaken any `Etag` (by prepending "W/") .. XXX pending closing #940: remove any "Accept-Encoding" from `obj.http.Vary` GZIP and ESI ~~~~~~~~~~~~ If you are using Edge Side Includes (ESI) you'll be happy to note that ESI and GZIP work together really well. Varnish will magically decompress the content to do the ESI-processing, then recompress it for efficient storage and delivery. Turning off gzip support ~~~~~~~~~~~~~~~~~~~~~~~~ When the `http_gzip_support` parameter is set to "off", Varnish does not do any of the header alterations documented above, handles `Vary: Accept-Encoding` like it would for any other `Vary` value and ignores `beresp.do_gzip` and `beresp.do_gunzip`. A random outburst ~~~~~~~~~~~~~~~~~ Poul-Henning Kamp has written :ref:`phk_gzip` which talks a bit more about how the implementation works. varnish-7.5.0/doc/sphinx/users-guide/devicedetection.rst000066400000000000000000000220031457605730600234110ustar00rootroot00000000000000.. Copyright (c) 2012-2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _users-guide-devicedetect: Device detection ~~~~~~~~~~~~~~~~ Device detection is figuring out what kind of content to serve to a client based on the User-Agent string supplied in a request. Use cases for this are for example to send size reduced files to mobile clients with small screens and on high latency networks, or to provide a streaming video codec that the client understands. There are a couple of typical strategies to use for this type of scenario: 1) Redirect to another URL. 2) Use a different backend for the special client. 3) Change the backend request so that the backend sends tailored content. To perhaps make the strategies easier to understand, we, in this context, assume that the `req.http.X-UA-Device` header is present and unique per client class. Setting this header can be as simple as:: sub vcl_recv { if (req.http.User-Agent ~ "(?i)iphone" { set req.http.X-UA-Device = "mobile-iphone"; } } There are different commercial and free offerings in doing grouping and identifying clients in further detail. For a basic and community based regular expression set, see https://github.com/varnishcache/varnish-devicedetect/. Serve the different content on the same URL ------------------------------------------- The tricks involved are: 1. Detect the client (pretty simple, just include `devicedetect.vcl` and call it). 2. Figure out how to signal the backend the client class. This includes for example setting a header, changing a header or even changing the backend request URL. 3. Modify any response from the backend to add missing 'Vary' headers, so Varnish' internal handling of this kicks in. 4. Modify output sent to the client so any caches outside our control don't serve the wrong content. All this needs to be done while still making sure that we only get one cached object per URL per device class. 
Example 1: Send HTTP header to backend '''''''''''''''''''''''''''''''''''''' The basic case is that Varnish adds the 'X-UA-Device' HTTP header on the backend requests, and the backend mentions in the response 'Vary' header that the content is dependent on this header. Everything works out of the box from Varnish' perspective. .. 071-example1-start VCL:: sub vcl_recv { # call some detection engine that set req.http.X-UA-Device } # req.http.X-UA-Device is copied by Varnish into bereq.http.X-UA-Device # so, this is a bit counterintuitive. The backend creates content based on # the normalized User-Agent, but we use Vary on X-UA-Device so Varnish will # use the same cached object for all U-As that map to the same X-UA-Device. # # If the backend does not mention in Vary that it has crafted special # content based on the User-Agent (==X-UA-Device), add it. # If your backend does set Vary: User-Agent, you may have to remove that here. sub vcl_backend_response { if (bereq.http.X-UA-Device) { if (!beresp.http.Vary) { # no Vary at all set beresp.http.Vary = "X-UA-Device"; } elseif (beresp.http.Vary !~ "X-UA-Device") { # add to existing Vary set beresp.http.Vary = beresp.http.Vary + ", X-UA-Device"; } } # comment this out if you don't want the client to know your # classification set beresp.http.X-UA-Device = bereq.http.X-UA-Device; } # to keep any caches in the wild from serving wrong content to client #2 # behind them, we need to transform the Vary on the way out. sub vcl_deliver { if ((req.http.X-UA-Device) && (resp.http.Vary)) { set resp.http.Vary = regsub(resp.http.Vary, "X-UA-Device", "User-Agent"); } } .. 071-example1-end Example 2: Normalize the User-Agent string '''''''''''''''''''''''''''''''''''''''''' Another way of signalling the device type is to override or normalize the 'User-Agent' header sent to the backend. For example:: User-Agent: Mozilla/5.0 (Linux; U; Android 2.2; nb-no; HTC Desire Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1 becomes:: User-Agent: mobile-android when seen by the backend. This works if you don't need the original header for anything on the backend. A possible use for this is for CGI scripts where only a small set of predefined headers are (by default) available for the script. .. 072-example2-start VCL:: sub vcl_recv { # call some detection engine that set req.http.X-UA-Device } # override the header before it is sent to the backend sub vcl_miss { if (req.http.X-UA-Device) { set req.http.User-Agent = req.http.X-UA-Device; } } sub vcl_pass { if (req.http.X-UA-Device) { set req.http.User-Agent = req.http.X-UA-Device; } } # standard Vary handling code from previous examples. sub vcl_backend_response { if (bereq.http.X-UA-Device) { if (!beresp.http.Vary) { # no Vary at all set beresp.http.Vary = "X-UA-Device"; } elseif (beresp.http.Vary !~ "X-UA-Device") { # add to existing Vary set beresp.http.Vary = beresp.http.Vary + ", X-UA-Device"; } } set beresp.http.X-UA-Device = bereq.http.X-UA-Device; } sub vcl_deliver { if ((req.http.X-UA-Device) && (resp.http.Vary)) { set resp.http.Vary = regsub(resp.http.Vary, "X-UA-Device", "User-Agent"); } } .. 072-example2-end Example 3: Add the device class as a GET query parameter '''''''''''''''''''''''''''''''''''''''''''''''''''''''' If everything else fails, you can add the device type as a GET argument. 
http://example.com/article/1234.html --> http://example.com/article/1234.html?devicetype=mobile-iphone The client itself does not see this classification, only the backend request is changed. .. 073-example3-start VCL:: sub vcl_recv { # call some detection engine that set req.http.X-UA-Device } sub append_ua { if ((req.http.X-UA-Device) && (req.method == "GET")) { # if there are existing GET arguments; if (req.url ~ "\?") { set req.http.X-get-devicetype = "&devicetype=" + req.http.X-UA-Device; } else { set req.http.X-get-devicetype = "?devicetype=" + req.http.X-UA-Device; } set req.url = req.url + req.http.X-get-devicetype; unset req.http.X-get-devicetype; } } # do this after vcl_hash, so all Vary-ants can be purged in one go. (avoid ban()ing) sub vcl_miss { call append_ua; } sub vcl_pass { call append_ua; } # Handle redirects, otherwise standard Vary handling code from previous # examples. sub vcl_backend_response { if (bereq.http.X-UA-Device) { if (!beresp.http.Vary) { # no Vary at all set beresp.http.Vary = "X-UA-Device"; } elseif (beresp.http.Vary !~ "X-UA-Device") { # add to existing Vary set beresp.http.Vary = beresp.http.Vary + ", X-UA-Device"; } # if the backend returns a redirect (think missing trailing slash), # we will potentially show the extra address to the client. we # don't want that. if the backend reorders the get parameters, you # may need to be smarter here. (? and & ordering) if (beresp.status == 301 || beresp.status == 302 || beresp.status == 303) { set beresp.http.location = regsub(beresp.http.location, "[?&]devicetype=.*$", ""); } } set beresp.http.X-UA-Device = bereq.http.X-UA-Device; } sub vcl_deliver { if ((req.http.X-UA-Device) && (resp.http.Vary)) { set resp.http.Vary = regsub(resp.http.Vary, "X-UA-Device", "User-Agent"); } } .. 073-example3-end Different backend for mobile clients ------------------------------------ If you have a different backend that serves pages for mobile clients, or any special needs in VCL, you can use the 'X-UA-Device' header like this:: backend mobile { .host = "10.0.0.1"; .port = "80"; } sub vcl_recv { # call some detection engine if (req.http.X-UA-Device ~ "^mobile" || req.http.X-UA-device ~ "^tablet") { set req.backend_hint = mobile; } } sub vcl_hash { if (req.http.X-UA-Device) { hash_data(req.http.X-UA-Device); } } Redirecting mobile clients -------------------------- If you want to redirect mobile clients you can use the following snippet. .. 065-redir-mobile-start VCL:: sub vcl_recv { # call some detection engine if (req.http.X-UA-Device ~ "^mobile" || req.http.X-UA-device ~ "^tablet") { return(synth(750, "Moved Temporarily")); } } sub vcl_synth { if (obj.status == 750) { set obj.http.Location = "http://m.example.com" + req.url; set obj.status = 302; return(deliver); } } .. 065-redir-mobile-end varnish-7.5.0/doc/sphinx/users-guide/esi.rst000066400000000000000000000171241457605730600210430ustar00rootroot00000000000000.. Copyright (c) 2012-2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _users-guide-esi: Content composition with Edge Side Includes ------------------------------------------- Varnish can create web pages by assembling different pages, called `fragments`, together into one page. These `fragments` can have individual cache policies. If you have a web site with a list showing the five most popular articles on your site, this list can probably be cached as a `fragment` and included in all the other pages. .. XXX:What other pages? 
benc Used properly this strategy can dramatically increase your hit rate and reduce the load on your servers. In Varnish we've only implemented a small subset of ESI, because most of the rest of the ESI specifications facilities are easier and better done with VCL:: esi:include esi:remove Content substitution based on variables and cookies is not implemented. Varnish will not process ESI instructions in HTML comments. Example: esi:include ~~~~~~~~~~~~~~~~~~~~ Lets see an example how this could be used. This simple cgi script outputs the date:: #!/bin/sh echo 'Content-type: text/html' echo '' date "+%Y-%m-%d %H:%M" Now, lets have an HTML file that has an ESI include statement:: The time is: at this very moment. For ESI to work you need to activate ESI processing in VCL, like this:: sub vcl_backend_response { if (bereq.url == "/test.html") { set beresp.do_esi = true; // Do ESI processing set beresp.ttl = 24 h; // Sets the TTL on the HTML above } elseif (bereq.url == "/cgi-bin/date.cgi") { set beresp.ttl = 1m; // Sets a one minute TTL on // the included object } } Note that ``set beresp.do_esi = true;`` is not required, and should be avoided, for the included fragments, unless they also contains ```` instructions. Example: esi:remove and ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The `` and `` constructs can be used to present appropriate content whether or not ESI is available, for example you can include content when ESI is available or link to it when it is not. ESI processors will remove the start ("") when the page is processed, while still processing the contents. If the page is not processed, it will remain intact, becoming a HTML/XML comment tag. ESI processors will remove `` tags and all content contained in them, allowing you to only render the content when the page is not being ESI-processed. For example:: The license What happens when it fails ? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ By default, the fragments must have ``resp.status`` 200 or 204 or their delivery will be considered failed. Likewise, if the fragment is a streaming fetch, and that fetch fails, the fragment delivery is considered failed. If you include synthetic fragments, that is fragments created in ``vcl_backend_error{}`` or ``vcl_synth{}``, you must set ``(be)resp.status`` to 200 before ``return(deliver);``, for example with a ``return (synth(200))`` or ``return (error(200))`` transition. Failure to properly deliver an ESI fragment has no effect on its parent request delivery by default. The parent request can include the ESI fragment with an ``onerror`` attribute:: This attribute is ignored by default. In fact, Varnish will treat failures to deliver ESI fragments as if there was the attribute ``onerror="continue"``. In the absence of this attribute with this specific value, Varnish should normally abort the delivery of the parent request. We say "abort" rather than "fail", because by the time Varnish starts inserting the fragments, the HTTP response header has long since been sent, and it is no longer possible to change the parent requests's ``resp.status`` to a 5xx, so the only way to signal that something is amiss, is to close the connection in the HTTP/1 case or reset the stream for h2 sessions. However, it is possible to allow individual ```) You can SSH into the ``varnishd`` computer and run ``varnishadm``:: ssh $hostname varnishadm help If you give no command arguments, ``varnishadm`` runs in interactive mode with command-completion, command-history and other comforts: .. 
code-block:: text critter phk> ./varnishadm 200 ----------------------------- Varnish Cache CLI 1.0 ----------------------------- FreeBSD,13.0-CURRENT,amd64,-jnone,-sdefault,-sdefault,-hcritbit varnish-trunk revision 2bd5d2adfc407216ebaa653fae882d3c8d47f5e1 Type 'help' for command list. Type 'quit' to close CLI session. Type 'start' to launch worker process. varnish> The CLI always returns a three digit status code to tell how things went. 200 and 201 means *OK*, anything else means that some kind of trouble prevented the execution of the command. (If you get 201, it means that the output was truncated, See the :ref:`ref_param_cli_limit` parameter.) When commands are given as arguments to ``varnishadm``, a status different than 200 or 201 will cause it to exit with status 1 and print the status code on standard error. What can you do with the CLI ---------------------------- From the CLI you can: * load/use/discard VCL programs * ban (invalidate) cache content * change parameters * start/stop worker process We will discuss each of these briefly below. Load, use and discard VCL programs ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ All caching and policy decisions are made by VCL programs. You can have multiple VCL programs loaded, but one of them is designated the "active" VCL program, and this is where all new requests start out. To load new VCL program:: varnish> vcl.load some_name some_filename Loading will read the VCL program from the file, and compile it. If the compilation fails, you will get an error messages: .. code-block:: text .../mask is not numeric. ('input' Line 4 Pos 17) "192.168.2.0/24x", ----------------#################- Running VCC-compiler failed, exit 1 VCL compilation failed If compilation succeeds, the VCL program is loaded, and you can now make it the active VCL, whenever you feel like it:: varnish> vcl.use some_name If you find out that was a really bad idea, you can switch back to the previous VCL program again:: varnish> vcl.use old_name The switch is instantaneous, all new requests will start using the VCL you activated right away. The requests currently being processed complete using whatever VCL they started with. We highly recommend you design an emergency-VCL, and always keep it loaded, so it can be activated with :: vcl.use emergency Ban cache content ^^^^^^^^^^^^^^^^^ Varnish offers "purges" to remove things from cache, but that requires you to know exactly what they are. Sometimes it is useful to be able to throw things out of cache without having an exact list of what to throw out. Imagine for instance that the company logo changed and now you need Varnish to stop serving the old logo out of the cache: .. code-block:: text varnish> ban req.url ~ "logo.*[.]png" should do that, and yes, that is a regular expression. We call this "banning" because the objects are still in the cache, but they are now banned from delivery, while all the rest of the cache is unaffected. Even when you want to throw out *all* the cached content, banning is both faster and less disruptive that a restart:: varnish> ban obj.http.date ~ .* .. In addition to handling such special occasions, banning can be used .. in many creative ways to keep the cache up to date, more about .. that in: (TODO: xref) Change parameters ^^^^^^^^^^^^^^^^^ Parameters can be set on the command line with the '-p' argument, but almost all parameters can be examined and changed on the fly from the CLI: .. 
code-block:: text varnish> param.show prefer_ipv6 200 prefer_ipv6 off [bool] Default is off Prefer IPv6 address when connecting to backends which have both IPv4 and IPv6 addresses. varnish> param.set prefer_ipv6 true 200 In general it is not a good idea to modify parameters unless you have a good reason, such as performance tuning or security configuration. .. XXX: Natural delay of some duration sounds vague. benc Most parameters will take effect instantly, or with a short delay, but a few of them requires you to restart the child process before they take effect. This is always mentioned in the description of the parameter. Starting and stopping the worker process ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ In general you should just leave the worker process running, but if you need to stop and/or start it, the obvious commands work:: varnish> stop and:: varnish> start If you start ``varnishd`` with the '-d' (debugging) argument, you will always need to start the child process explicitly. Should the child process die, the master process will automatically restart it, but you can disable that with the :ref:`ref_param_auto_restart` parameter. varnish-7.5.0/doc/sphinx/users-guide/run_security.rst000066400000000000000000000200441457605730600230110ustar00rootroot00000000000000.. Copyright (c) 2013-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _run_security: Security first ============== If you are the only person involved in running Varnish, or if all the people involved are trusted to the same degree, you can skip this chapter. We have protected Varnish as well as we can from anything which can come in through an HTTP socket. If parts of your web infrastructure are outsourced or otherwise partitioned along administrative lines, you need to think about security. Varnish provides four levels of authority, roughly related to how and where control comes into Varnish: * The command line arguments, * The CLI interface, * VCL programs, and * HTTP requests. Command line arguments ---------------------- The top level security decisions is decided and defined when starting Varnish in the form of command line arguments, we use this strategy in order to make them invulnerable to subsequent manipulation. The important decisions to make are: #. Who should have access to the Command Line Interface? #. Which parameters can they change? #. Will inline-C code be allowed? #. If/how VMODs will be restricted? #. How child processes will be jailed? CLI interface access ^^^^^^^^^^^^^^^^^^^^ The command line interface can be accessed in three ways. :ref:`varnishd(1)` can be told to listen and offer CLI connections on a TCP socket. You can bind the socket to pretty much anything the kernel will accept:: -T 127.0.0.1:631 -T localhost:9999 -T 192.168.1.1:34 -T '[fe80::1]:8082' The default is ``-T localhost:0`` which will pick a random port number, which :ref:`varnishadm(1)` can learn from the shared memory. By using a ``localhost`` address, you restrict CLI access to the local machine. You can also bind the CLI port to an IP address reachable across the net, and let other machines connect directly. This gives you no secrecy, i.e. the CLI commands will go across the network as ASCII text with no encryption, but the ``-S`` / pre shared key (`PSK`_) authentication requires the remote end to know the shared secret. 
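For example, assuming the CLI has been bound to a routable address and both
machines hold a copy of the same secret file (the address and path below are
placeholders, not defaults), a remote CLI session could be established like
this::

	# on the Varnish host
	varnishd [...] -T 192.0.2.10:6082 -S /etc/varnish/secret [...]

	# on the management host
	varnishadm -T 192.0.2.10:6082 -S /etc/varnish/secret ping

The ``ping`` command is just a harmless way to verify that the connection
and authentication work.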
Alternatively you can bind the CLI port to a ``localhost`` address, and give remote users access via a secure connection to the local machine, using ssh/VPN or similar. If you use `ssh(1)` you can restrict which commands each user can execute to just :ref:`varnishadm(1)`, or even use a wrapper scripts around :ref:`varnishadm(1)` to allow specific CLI commands. It is also possible to configure :ref:`varnishd(1)` for "reverse mode", using the ``-M`` argument. In that case :ref:`varnishd(1)` will attempt to open a TCP connection to the specified address, and initiate a CLI connection to your central Varnish management facility. .. XXX:Maybe a sample command here with a brief explanation? benc The connection in this case is also without encryption, but the remote end must still authenticate using ``-S``\ /`PSK`_. Finally, if you run varnishd with the ``-d`` option, you get a CLI command on stdin/stdout, but since you started the process, it would be hard to prevent you getting CLI access, wouldn't it ? .. _PSK: CLI interface authentication ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ By default the CLI interface is protected with a simple, yet powerful "Pre Shared Key" authentication method, which do not provide secrecy (ie: The CLI commands and responses are not encrypted). The way ``-S``\ /PSK works is really simple: During startup a file is created with a random content and the file is only accessible to the user who started :ref:`varnishd(1)` (or the superuser). To authenticate and use a CLI connection, you need to know the contents of that file, in order to answer the cryptographic challenge :ref:`varnishd(1)` issues, see :ref:`ref_psk_auth`. :ref:`varnishadm(1)` uses all of this to restrict access, it will only function, provided it can read the secret file. If you want to allow other users, local or remote, to be able to access CLI connections, you must create your own secret file and make it possible for (only!) these users to read it. A good way to create the secret file is:: dd if=/dev/random of=/etc/varnish_secret count=1 When you start :ref:`varnishd(1)`, you specify the filename with '-S', and it goes without saying that the :ref:`varnishd(1)` master process needs to be able to read the file too. You can change the contents of the secret file while :ref:`varnishd(1)` runs, it is read every time a CLI connection is authenticated. On the local system, :ref:`varnishadm(1)` can retrieve the filename from shared memory, but on remote systems, you need to give :ref:`varnishadm(1)` a copy of the secret file, with the -S argument. If you want to disable ``-S``\ /PSK authentication, use an ``-S none`` argument to varnishd:: varnishd [...] -S none [...] Parameters ^^^^^^^^^^ Parameters can be set from the command line, and made "read-only" (using '-r') so they cannot subsequently be modified from the CLI interface. Pretty much any parameter can be used to totally mess up your HTTP service, but a few can do more damage than others: :ref:`ref_param_cc_command` Execute arbitrary programs :ref:`ref_param_vcc_allow_inline_c` Allow inline C in VCL, which would allow any C code from VCL to be executed by Varnish. Furthermore you may want to look at and lock down: :ref:`ref_param_syslog_cli_traffic` Log all CLI commands to `syslog(8)`, so you know what goes on. :ref:`ref_param_vcc_unsafe_path` Restrict VCL/VMODs to :ref:`ref_param_vcl_path` and :ref:`ref_param_vmod_path` :ref:`ref_param_vmod_path` The directory (or colon separated list of directories) where Varnish will look for modules. 
This could potentially be used to load rogue modules into Varnish. The CLI interface ----------------- The CLI interface in Varnish is very powerful, if you have access to the CLI interface, you can do almost anything to the Varnish process. As described above, some of the damage can be limited by restricting certain parameters, but that will only protect the local filesystem, and operating system, it will not protect your HTTP service. We do not currently have a way to restrict specific CLI commands to specific CLI connections. One way to get such an effect is to "wrap" all CLI access in pre-approved scripts which use :ref:`varnishadm(1)` to submit the sanitized CLI commands, and restrict a remote user to only those scripts, for instance using sshd(8)'s configuration. VCL programs ------------ There are two "dangerous" mechanisms available in VCL code: VMODs and inline-C. Both of these mechanisms allow execution of arbitrary code and will thus allow a person to get access to the machine, with the privileges of the child process. If :ref:`varnishd(1)` is started as root/superuser, we sandbox the child process, using whatever facilities are available on the operating system, but if :ref:`varnishd(1)` is not started as root/superuser, this is not possible. No, don't ask me why you have to be superuser to lower the privilege of a child process... .. XXX the above is not correct for the solaris jail Inline-C is disabled by default since Varnish version 4, so unless you enable it, you don't have to worry about it. The parameters mentioned above can restrict the loading of VMODs to only be loaded from a designated directory, restricting VCL wranglers to a pre-approved subset of VMODs. If you do that, we are confident that your local system cannot be compromised from VCL code. HTTP requests ------------- We have gone to great lengths to make Varnish resistant to anything coming in through the socket where HTTP requests are received, and you should, generally speaking, not need to protect it any further. The caveat is that since VCL is a programming language which lets you decide exactly what to do with HTTP requests, you can also decide to do stupid and potentially dangerous things with them, including opening yourself up to various kinds of attacks and subversive activities. If you have "administrative" HTTP requests, for instance PURGE requests, we strongly recommend that you restrict them to trusted IP numbers/nets using VCL's :ref:`vcl_syntax_acl`. varnish-7.5.0/doc/sphinx/users-guide/running.rst000066400000000000000000000011031457605730600217310ustar00rootroot00000000000000.. Copyright (c) 2013-2014 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _users_running: Starting and running Varnish ============================ This section covers starting, running, and stopping Varnish, command line flags and options, and communicating with the running Varnish processes, configuring storage and sockets and, and about securing and protecting Varnish against attacks. .. toctree:: :maxdepth: 2 run_security command-line run_cli storage-backends params sizing-your-cache varnish-7.5.0/doc/sphinx/users-guide/sizing-your-cache.rst000066400000000000000000000024661457605730600236260ustar00rootroot00000000000000.. Copyright (c) 2012-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Sizing your cache ----------------- Deciding on cache size can be a tricky task. 
A few things to consider: * How big is your *hot* data set. For a portal or news site that would be the size of the front page with all the stuff on it, and the size of all the pages and objects linked from the first page. * How expensive is it to generate an object? Sometimes it makes sense to only cache images a little while or not to cache them at all if they are cheap to serve from the backend and you have a limited amount of memory. * Watch the `n_lru_nuked` counter with :ref:`varnishstat(1)` or some other tool. If you have a lot of LRU activity then your cache is evicting objects due to space constraints and you should consider increasing the size of the cache. Be aware that every object that is stored also carries overhead that is kept outside the actually storage area. So, even if you specify ``-s malloc,16G`` Varnish might actually use **double** that. Varnish has a overhead of about 1KB per object. So, if you have lots of small objects in your cache the overhead might be significant. .. XXX:This seems to contradict the last paragraph in "storage-backends". benc varnish-7.5.0/doc/sphinx/users-guide/storage-backends.rst000066400000000000000000000176501457605730600235030ustar00rootroot00000000000000.. Copyright (c) 2012-2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _guide-storage: Storage backends ---------------- Intro ~~~~~ Varnish has pluggable storage backends. It can store data in various backends which can have different performance characteristics. The default configuration is to use the malloc backend with a limited size. For a serious Varnish deployment you probably would want to adjust the storage settings. default ~~~~~~~ syntax: default[,size] The default storage backend is an alias to umem, where available, or malloc otherwise. malloc ~~~~~~ syntax: malloc[,size] Malloc is a memory based backend. Each object will be allocated from memory. If your system runs low on memory swap will be used. Be aware that the size limitation only limits the actual storage and that the approximately 1k of memory per object, used for various internal structures, is included in the actual storage as well. .. XXX:This seems to contradict the last paragraph in "sizing-your-cache". benc The size parameter specifies the maximum amount of memory `varnishd` will allocate. The size is assumed to be in bytes, unless followed by one of the following suffixes: K, k The size is expressed in kibibytes. M, m The size is expressed in mebibytes. G, g The size is expressed in gibibytes. T, t The size is expressed in tebibytes. The default size is unlimited. malloc's performance is bound to memory speed so it is very fast. If the dataset is bigger than available memory performance will depend on the operating systems ability to page effectively. .. _guide-storage_umem: umem ~~~~ syntax: umem[,size] Umem is a better alternative to the malloc backend where `libumem`_ is available. All other configuration aspects are considered equal to malloc. `libumem`_ implements a slab allocator similar to the kernel memory allocator used in virtually all modern operating systems and is considered more efficient and scalable than classical implementations. In particular, `libumem`_ is included in the family of OpenSolaris descendent operating systems where jemalloc(3) is not commonly available. 
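As a practical reminder, the storage backend and its size are chosen on the
``varnishd`` command line with ``-s``; for example (the sizes here are purely
illustrative)::

	varnishd -a :6081 -f /etc/varnish/default.vcl -s malloc,256m
	varnishd -a :6081 -f /etc/varnish/default.vcl -s umem,1G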
If `libumem`_ is not used otherwise, Varnish will only use it for storage allocations and keep the default libc allocator for all other Varnish memory allocation purposes. If `libumem`_ is already loaded when Varnish initializes, this message is output:: notice: libumem was already found to be loaded and will likely be used for all allocations to indicate that `libumem`_ will not only be used for storage. Likely reasons for this to be the case are: * some library ``varnishd`` is linked against was linked against `libumem`_ (most likely ``libpcre2-8``, check with ``ldd``) * ``LD_PRELOAD_64=/usr/lib/amd64/libumem.so.1``, ``LD_PRELOAD_32=/usr/lib/libumem.so.1`` or ``LD_PRELOAD=/usr/lib/libumem.so.1`` is set Varnish will also output this message to recommend settings for using `libumem`_ for all allocations:: it is recommended to set UMEM_OPTIONS=perthread_cache=0,backend=mmap before starting varnish This recommendation should be followed to achieve an optimal `libumem`_ configuration for Varnish. Setting this environment variable before starting Varnish is required because `libumem`_ cannot be reconfigured once loaded. .. _libumem: http://dtrace.org/blogs/ahl/2004/07/13/number-11-of-20-libumem/ file ~~~~ syntax: file,path[,size[,granularity[,advice]]] The file backend stores objects in virtual memory backed by an unlinked file on disk with `mmap`, relying on the kernel to handle paging as parts of the file are being accessed. This implies that sufficient *virtual* memory needs to be available to accomodate the file size in addition to any memory Varnish requires anyway. Traditionally, the virtual memory limit is configured with ``ulimit -v``, but modern operating systems have other abstractions for this limit like control groups (Linux) or resource controls (Solaris). .. XXX idk about the BSD and MacOS abstractions -- slink The 'path' parameter specifies either the path to the backing file or the path to a directory in which `varnishd` will create the backing file. The size parameter specifies the size of the backing file. The size is assumed to be in bytes, unless followed by one of the following suffixes: K, k The size is expressed in kibibytes. M, m The size is expressed in mebibytes. G, g The size is expressed in gibibytes. T, t The size is expressed in tebibytes. If 'path' points to an existing file and no size is specified, the size of the existing file will be used. If 'path' does not point to an existing file it is an error to not specify the size. If the backing file already exists, it will be truncated or expanded to the specified size. Note that if `varnishd` has to create or expand the file, it will not pre-allocate the added space, leading to fragmentation, which may adversely impact performance on rotating hard drives. Pre-creating the storage file using `dd(1)` will reduce fragmentation to a minimum. .. XXX:1? benc The 'granularity' parameter specifies the granularity of allocation. All allocations are rounded up to this size. The granularity is assumed to be expressed in bytes, unless followed by one of the suffixes described for size. The default granularity is the VM page size. The size should be reduced if you have many small objects. File performance is typically limited to the write speed of the device, and depending on use, the seek time. The 'advice' parameter tells the kernel how `varnishd` expects to use this mapped region so that the kernel can choose the appropriate read-ahead and caching techniques. 
Possible values are ``normal``, ``random`` and ``sequential``, corresponding to MADV_NORMAL, MADV_RANDOM and MADV_SEQUENTIAL madvise() advice argument, respectively. Defaults to ``random``. On Linux, large objects and rotational disk should benefit from "sequential". deprecated_persistent ~~~~~~~~~~~~~~~~~~~~~ syntax: deprecated_persistent,path,size {experimental} *Before using, read* :ref:`phk_persistent`\ *!* Persistent storage. Varnish will store objects in a file in a manner that will secure the survival of *most* of the objects in the event of a planned or unplanned shutdown of Varnish. The 'path' parameter specifies the path to the backing file. If the file doesn't exist Varnish will create it. The 'size' parameter specifies the size of the backing file. The size is expressed in bytes, unless followed by one of the following suffixes: K, k The size is expressed in kibibytes. M, m The size is expressed in mebibytes. G, g The size is expressed in gibibytes. T, t The size is expressed in tebibytes. Varnish will split the file into logical *silos* and write to the silos in the manner of a circular buffer. Only one silo will be kept open at any given point in time. Full silos are *sealed*. When Varnish starts after a shutdown it will discard the content of any silo that isn't sealed. Note that taking persistent silos offline and at the same time using bans can cause problems. This is due to the fact that bans added while the silo was offline will not be applied to the silo when it reenters the cache. Consequently enabling previously banned objects to reappear. Transient Storage ----------------- If you name any of your storage backend "Transient" it will be used for transient (short lived) objects. This includes the temporary objects created when returning a synthetic object. By default Varnish would use an unlimited malloc backend for this. .. XXX: Is this another parameter? In that case handled in the same manner as above? benc Varnish will consider an object short lived if the TTL is below the parameter 'shortlived'. .. XXX: I am generally missing samples of setting all of these parameters, maybe one sample per section or a couple of examples here with a brief explanation to also work as a summary? benc varnish-7.5.0/doc/sphinx/users-guide/troubleshooting.rst000066400000000000000000000210401457605730600235020ustar00rootroot00000000000000.. Copyright (c) 2012-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _users_trouble: Troubleshooting Varnish ======================= Sometimes Varnish misbehaves or rather behaves the way you told it to behave but not necessarily the way you want it to behave. In order for you to understand whats going on there are a couple of places you can check. :ref:`varnishlog(1)`, ``/var/log/syslog``, ``/var/log/messages`` are all good places where Varnish might leave clues of whats going on. This section will guide you through basic troubleshooting in Varnish. When Varnish won't start ------------------------ Sometimes Varnish wont start. There is a plethora of possible reasons why Varnish wont start on your machine. We've seen everything from wrong permissions on ``/dev/null`` to other processes blocking the ports. Starting Varnish in debug mode to see what is going on. Try to start Varnish with the same arguments as otherwise, but ``-d`` added. This will give you some more information on what is going on. 
Let us see how Varnish will react when something else is listening on its port.:: # varnishd -n foo -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000 -a 0.0.0.0:8080 -d storage_malloc: max size 1024 MB. Using old SHMFILE Platform: Linux,2.6.32-21-generic,i686,-smalloc,-hcritbit 200 193 ----------------------------- Varnish Cache CLI. ----------------------------- Type 'help' for command list. Type 'quit' to close CLI session. Type 'start' to launch worker process. Now Varnish is running but only the master process is running, in debug mode the cache does not start. Now you're on the console. You can instruct the master process to start the cache by issuing "start".:: start bind(): Address already in use 300 22 Could not open sockets And here we have our problem. Something else is bound to the HTTP port of Varnish. If this doesn't help try ``strace`` or ``truss`` or come find us on IRC. Varnish is crashing - panics ---------------------------- When Varnish goes bust the child processes crashes. Most of the crashes are caught by one of the many consistency checks we have included in the Varnish source code. When Varnish hits one of these the caching process will crash itself in a controlled manner, leaving a nice stack trace with the mother process. You can inspect any panic messages by typing ``panic.show`` in the CLI.:: panic.show Last panic at: Tue, 15 Mar 2011 13:09:05 GMT Assert error in ESI_Deliver(), cache_esi_deliver.c line 354: Condition(i == Z_OK || i == Z_STREAM_END) not true. thread = (cache-worker) ident = Linux,2.6.32-28-generic,x86_64,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x42cbe8: pan_ic+b8 0x41f778: ESI_Deliver+438 0x42f838: RES_WriteObj+248 0x416a70: cnt_deliver+230 0x4178fd: CNT_Session+31d (..) The crash might be due to misconfiguration or a bug. If you suspect it is a bug you can use the output in a bug report, see the "Trouble Tickets" section in the Introduction chapter above. Varnish is crashing - stack overflows ------------------------------------- Bugs put aside, the most likely cause of crashes are stack overflows, which is why we have added a heuristic to add a note when a crash looks like it was caused by one. In this case, the panic message contains something like this:: Signal 11 (Segmentation fault) received at 0x7f631f1b2f98 si_code 1 THIS PROBABLY IS A STACK OVERFLOW - check thread_pool_stack parameter as a first measure, please follow this advise and check if crashes still occur when you add 128k to whatever the value of the ``thread_pool_stack`` parameter and restart varnish. If varnish stops crashing with a larger ``thread_pool_stack`` parameter, it's not a bug (at least most likely). Varnish is crashing - segfaults ------------------------------- Sometimes a bug escapes the consistency checks and Varnish gets hit with a segmentation error. When this happens with the child process it is logged, the core is dumped and the child process starts up again. A core dumped is usually due to a bug in Varnish. However, in order to debug a segfault the developers need you to provide a fair bit of data. * Make sure you have Varnish installed with debugging symbols. * Check where your operating system writes core files and ensure that you actually get them. For example on linux, learn about ``/proc/sys/kernel/core_pattern`` from the `core(5)` manpage. * Make sure core dumps are allowed in the parent shell from which varnishd is being started. 
In shell, this would be:: ulimit -c unlimited but if varnish is started from an init-script, that would need to be adjusted or in the case of systemd, ``LimitCORE=infinity`` set in the service's ``[Service]]`` section of the unit file. Once you have the core, ``cd`` into varnish's working directory (as given by the ``-n`` parameter, whose default is ``$PREFIX/var/varnish/$HOSTNAME`` with ``$PREFIX`` being the installation prefix, usually ``/usr/local``, open the core with ``gdb`` and issue the command ``bt`` to get a stack trace of the thread that caused the segfault. A basic debug session for varnish installed under ``/usr/local`` could look like this:: $ cd /usr/local/var/varnish/`uname -n`/ $ gdb /usr/local/sbin/varnishd core GNU gdb (Debian 7.12-6) 7.12.0.20161007-git Copyright (C) 2016 Free Software Foundation, Inc. [...] Core was generated by `/usr/local/sbin/varnishd -a 127.0.0.1:8080 -b 127.0.0.1:8080'. Program terminated with signal SIGABRT, Aborted. #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51 51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. [Current thread is 1 (Thread 0x7f7749ea3700 (LWP 31258))] (gdb) bt #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51 #1 0x00007f775132342a in __GI_abort () at abort.c:89 #2 0x000000000045939f in pan_ic (func=0x7f77439fb811 "VCL", file=0x7f77439fb74c "", line=0, cond=0x7f7740098130 "PANIC: deliberately!", kind=VAS_VCL) at cache/cache_panic.c:839 #3 0x0000000000518cb1 in VAS_Fail (func=0x7f77439fb811 "VCL", file=0x7f77439fb74c "", line=0, cond=0x7f7740098130 "PANIC: deliberately!", kind=VAS_VCL) at vas.c:51 #4 0x00007f77439fa6e9 in vmod_panic (ctx=0x7f7749ea2068, str=0x7f7749ea2018) at vmod_vtc.c:109 #5 0x00007f77449fa5b8 in VGC_function_vcl_recv (ctx=0x7f7749ea2068) at vgc.c:1957 #6 0x0000000000491261 in vcl_call_method (wrk=0x7f7749ea2dd0, req=0x7f7740096020, bo=0x0, specific=0x0, method=2, func=0x7f77449fa550 ) at cache/cache_vrt_vcl.c:462 #7 0x0000000000493025 in VCL_recv_method (vcl=0x7f775083f340, wrk=0x7f7749ea2dd0, req=0x7f7740096020, bo=0x0, specific=0x0) at ../../include/tbl/vcl_returns.h:192 #8 0x0000000000462979 in cnt_recv (wrk=0x7f7749ea2dd0, req=0x7f7740096020) at cache/cache_req_fsm.c:880 #9 0x0000000000461553 in CNT_Request (req=0x7f7740096020) at ../../include/tbl/steps.h:36 #10 0x00000000004a7fc6 in HTTP1_Session (wrk=0x7f7749ea2dd0, req=0x7f7740096020) at http1/cache_http1_fsm.c:417 #11 0x00000000004a72c3 in http1_req (wrk=0x7f7749ea2dd0, arg=0x7f7740096020) at http1/cache_http1_fsm.c:86 #12 0x0000000000496bb6 in Pool_Work_Thread (pp=0x7f774980e140, wrk=0x7f7749ea2dd0) at cache/cache_wrk.c:406 #13 0x00000000004963e3 in WRK_Thread (qp=0x7f774980e140, stacksize=57344, thread_workspace=2048) at cache/cache_wrk.c:144 #14 0x000000000049610b in pool_thread (priv=0x7f774880ec80) at cache/cache_wrk.c:439 #15 0x00007f77516954a4 in start_thread (arg=0x7f7749ea3700) at pthread_create.c:456 #16 0x00007f77513d7d0f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97 Varnish gives me Guru meditation -------------------------------- First find the relevant log entries in :ref:`varnishlog(1)`. That will probably give you a clue. Since :ref:`varnishlog(1)` logs a lot of data it might be hard to track the entries down. You can set :ref:`varnishlog(1)` to log all your 503 errors by issuing the following command:: $ varnishlog -q 'RespStatus == 503' -g request If the error happened just a short time ago the transaction might still be in the shared memory log segment. 
To get :ref:`varnishlog(1)` to process the whole shared memory log, just add
the '-d' parameter::

    $ varnishlog -d -q 'RespStatus == 503' -g request

Please see the :ref:`vsl-query(7)` and :ref:`varnishlog(1)` man pages for
more on the filtering capabilities and an explanation of the various options.

Varnish doesn't cache
---------------------

See :ref:`users-guide-increasing_your_hitrate`.
varnish-7.5.0/doc/sphinx/users-guide/vcl-backends.rst000066400000000000000000000301171457605730600226140ustar00rootroot00000000000000
..
	Copyright (c) 2012-2020 Varnish Software AS
	SPDX-License-Identifier: BSD-2-Clause
	See LICENSE file for full text of license

.. _users-guide-backend_servers:

Backend servers
---------------

Varnish has a concept of "backend" or "origin" servers. A backend server is
the server providing the content Varnish will accelerate.

Our first task is to tell Varnish where it can find its backends. Start your
favorite text editor and open the relevant VCL file.

Somewhere near the top there will be a section that looks a bit like this::

    # backend default {
    #     .host = "127.0.0.1";
    #     .port = "8080";
    # }

We remove the comment markers from this stanza, making it look like this::

    backend default {
        .host = "127.0.0.1";
        .port = "8080";
    }

Now, this piece of configuration defines a backend in Varnish called
*default*. When Varnish needs to get content from this backend it will
connect to port 8080 on localhost (127.0.0.1).

Varnish can have several backends defined, and you can even join several
backends together into clusters of backends for load-balancing purposes.

The "none" backend
------------------

Backends can also be declared as ``none`` with the following syntax::

    backend default none;

``none`` backends are special:

* All backends declared ``none`` compare equal::

    backend a none;
    backend b none;

    sub vcl_recv {
        set req.backend_hint = a;
        if (req.backend_hint == b) {
            return (synth(200, "this is true"));
        }
    }

* The ``none`` backend evaluates to ``false`` when used in a boolean
  context::

    backend nil none;

    sub vcl_recv {
        set req.backend_hint = nil;
        if (! req.backend_hint) {
            return (synth(200, "We get here"));
        }
    }

* When directors find no healthy backend, they typically return the ``none``
  backend.

Multiple backends
-----------------

At some point you might need Varnish to cache content from several servers.
You might want Varnish to map all the URLs onto one single host, or not.
There are lots of options.

Let's say we need to introduce a Java application into our PHP web site, and
that the Java application should handle URLs beginning with `/java/`.

We manage to get the thing up and running on port 8000. Now, let's have a
look at the `default.vcl`::

    backend default {
        .host = "127.0.0.1";
        .port = "8080";
    }

We add a new backend::

    backend java {
        .host = "127.0.0.1";
        .port = "8000";
    }

Now we need to tell Varnish where to send the different URLs. Let's look at
`vcl_recv`::

    sub vcl_recv {
        if (req.url ~ "^/java/") {
            set req.backend_hint = java;
        } else {
            set req.backend_hint = default;
        }
    }

It's quite simple, really. Let's stop and think about this for a moment. As
you can see, you can choose backends based on almost arbitrary data. You
want to send mobile devices to a different backend? No problem. Something
like ``if (req.http.User-Agent ~ "mobile")`` should do the trick (see the
sketch below).

Without an explicit backend selection, Varnish will continue using the
`default` backend. If there is no backend named `default`, the first backend
found in the VCL will be used as the default backend.
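For instance, the User-Agent based routing mentioned above could be sketched
like this (the backend address, port and pattern are only examples)::

    backend mobile {
        .host = "127.0.0.1";
        .port = "8001";
    }

    sub vcl_recv {
        if (req.http.User-Agent ~ "(?i)mobile") {
            set req.backend_hint = mobile;
        }
    }

Requests whose User-Agent matches the pattern go to the ``mobile`` backend,
everything else keeps using ``default``.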
Backends and virtual hosts in Varnish ------------------------------------- Varnish fully supports virtual hosts. They might however work in a somewhat counter-intuitive fashion since they are never declared explicitly. You set up the routing of incoming HTTP requests in `vcl_recv`. If you want this routing to be done on the basis of virtual hosts you just need to inspect `req.http.host`. You can have something like this:: sub vcl_recv { if (req.http.host ~ "foo.com") { set req.backend_hint = foo; } elsif (req.http.host ~ "bar.com") { set req.backend_hint = bar; } } Note that the first regular expressions will match "foo.com", "www.foo.com", "zoop.foo.com" and any other host ending in "foo.com". In this example this is intentional but you might want it to be a bit more tight, maybe relying on the ``==`` operator in stead, like this:: sub vcl_recv { if (req.http.host == "foo.com" || req.http.host == "www.foo.com") { set req.backend_hint = foo; } } Connecting Through a Proxy -------------------------- .. _PROXY2: https://raw.githubusercontent.com/haproxy/haproxy/master/doc/proxy-protocol.txt .. _haproxy: http://www.haproxy.org/ .. _SNI: https://en.wikipedia.org/wiki/Server_Name_Indication As of this release, Varnish can connect to an actual *destination* through a *proxy* using the `PROXY2`_ protocol. Other protocols may be added. For now, a typical use case of this feature is to make TLS-encrypted connections through a TLS *onloader*. The *onloader* needs to support dynamic connections with the destination address information taken from a `PROXY2`_ preamble. For example with `haproxy`_ Version 2.2 or higher, this snippet can be used as a basis for configuring an *onloader*:: # to review and adjust: # - maxconn # - bind ... mode ... # - ca-file ... # listen sslon mode tcp maxconn 1000 bind /path/to/sslon accept-proxy mode 777 stick-table type ip size 100 stick on dst server s00 0.0.0.0:0 ssl ca-file /etc/ssl/certs/ca-bundle.crt alpn http/1.1 sni fc_pp_authority server s01 0.0.0.0:0 ssl ca-file /etc/ssl/certs/ca-bundle.crt alpn http/1.1 sni fc_pp_authority server s02 0.0.0.0:0 ssl ca-file /etc/ssl/certs/ca-bundle.crt alpn http/1.1 sni fc_pp_authority # ... # A higher number of servers improves TLS session caching Varnish running on the same server/namespace can then use the *onloader* with the ``.via`` feature (see :ref:`backend_definition_via`):: backend sslon { .path = "/path/to/sslon"; } backend destination { .host = "my.https.service"; .port = "443"; .via = sslon; } The ``.authority`` attribute can be used to specify the `SNI`_ for the connection if it differs from ``.host``. .. _users-guide-advanced_backend_servers-directors: Directors --------- You can also group several backend into a group of backends. These groups are called directors. This will give you increased performance and resilience. You can define several backends and group them together in a director. This requires you to load a VMOD, a Varnish module, and then to call certain actions in `vcl_init`.:: import directors; # load the directors backend server1 { .host = "192.168.0.10"; } backend server2 { .host = "192.168.0.11"; } sub vcl_init { new bar = directors.round_robin(); bar.add_backend(server1); bar.add_backend(server2); } sub vcl_recv { # send all traffic to the bar director: set req.backend_hint = bar.backend(); } This director is a round-robin director. This means the director will distribute the incoming requests on a round-robin basis. 
There is also a *random* director which distributes requests in a, you guessed it, random fashion. If that is not enough, you can also write your own director (see :ref:`ref-writing-a-director`). But what if one of your servers goes down? Can Varnish direct all the requests to the healthy server? Sure it can. This is where the Health Checks come into play. .. _users-guide-advanced_backend_servers-health: Health checks ------------- Lets set up a director with two backends and health checks. First let us define the backends:: backend server1 { .host = "server1.example.com"; .probe = { .url = "/"; .timeout = 1s; .interval = 5s; .window = 5; .threshold = 3; } } backend server2 { .host = "server2.example.com"; .probe = { .url = "/"; .timeout = 1s; .interval = 5s; .window = 5; .threshold = 3; } } What is new here is the ``probe``. In this example Varnish will check the health of each backend every 5 seconds, timing out after 1 second. Each poll will send a GET request to /. If 3 out of the last 5 polls succeeded the backend is considered healthy, otherwise it will be marked as sick. Refer to the :ref:`reference-vcl_probes` section in the :ref:`vcl(7)` documentation for more information. Now we define the 'director':: import directors; sub vcl_init { new vdir = directors.round_robin(); vdir.add_backend(server1); vdir.add_backend(server2); } You use this `vdir` director as a backend_hint for requests, just like you would with a simple backend. Varnish will not send traffic to hosts that are marked as unhealthy. Varnish can also serve stale content if all the backends are down. See :ref:`users-guide-handling_misbehaving_servers` for more information on how to enable this. Please note that Varnish will keep health probes running for all loaded VCLs. Varnish will coalesce probes that seem identical - so be careful not to change the probe config if you do a lot of VCL loading. Unloading the VCL will discard the probes. For more information on how to do this please see ref:`reference-vcl-director`. Layering -------- By default, most directors' ``.backend()`` methods return a reference to the director itself. This allows for layering, like in this example:: import directors; sub vcl_init { new dc1 = directors.round_robin(); dc1.add_backend(server1A); dc1.add_backend(server1B); new dc2 = directors.round_robin(); dc2.add_backend(server2A); dc2.add_backend(server2B); new dcprio = directors.fallback(); dcprio.add_backend(dc1); dcprio.add_backend(dc2); } With this initialization, ``dcprio.backend()`` will resolve to either ``server1A`` or ``server1B`` if both are healthy or one of them if only one is healthy. Only if both are sick will a healthy server from ``dc2`` be returned, if any. Director Resolution ------------------- The actual resolution happens when the backend connection is prepared after a return from ``vcl_backend_fetch {}`` or ``vcl_pipe {}``. In some cases like server sharding the resolution outcome is required already in VCL. For such cases, the ``.resolve()`` method can be used, like in this example:: set req.backend_hint = dcprio.backend().resolve(); When using this statement with the previous example code, ``req.backend_hint`` will be set to one of the ``server*`` backends or the ``none`` backend if they were all sick. ``.resolve()`` works on any object of the ``BACKEND`` type. .. 
_users-guide-advanced_backend_connection-pooling: Connection Pooling ------------------ Opening connections to backends always comes at a cost: Depending on the type of connection and backend infrastructure, the overhead for opening a new connection ranges from pretty low for a local Unix domain socket (see :ref:`backend_definition` ``.path`` attribute) to substantial for establishing possibly multiple TCP and/or TLS connections over possibly multiple hops and long network paths. However relevant the overhead, it certainly always exists. So because re-using existing connections can generally be considered to reduce overhead and latencies, Varnish pools backend connections by default: Whenever a backend task is finished, the used connection is not closed but rather added to a pool for later reuse. To avoid a connection from being reused, the ``Connection: close`` http header can be added in :ref:`vcl_backend_fetch`. While backends are defined per VCL, connection pooling works across VCLs and even across backends: By default, the identifier for pooled connections is constructed from the ``.host``\ /\ ``.port`` or ``.path`` attributes of the :ref:`backend_definition` (VMODs can make use of custom identifiers). So whenever two backends share the same address information, irrespective of which VCLs they are defined in, their connections are taken from a common pool. If not actively closed by the backend, pooled connections are kept open by Varnish until the :ref:`ref_param_backend_idle_timeout` expires. varnish-7.5.0/doc/sphinx/users-guide/vcl-built-in-code.rst000066400000000000000000000060031457605730600234720ustar00rootroot00000000000000.. _vcl-built-in-code: Built-in VCL ============ Whenever a VCL program is loaded, the built-in VCL is appended to it. The vcl built-in subs (:ref:`vcl_steps`) have a special property, they can appear multiple times and the result is concatenation of all built-in subroutines. For example, let's take the following snippet:: sub vcl_recv { # loaded code for vcl_recv } The effective VCL that is supplied to the compiler looks like:: sub vcl_recv { # loaded code for vcl_recv # built-in code for vcl_recv } This is how it is guaranteed that all :ref:`reference-states` have at least one ``return ()``. It is generally recommended not to invariably return from loaded code to let Varnish execute the built-in code, because the built-in code provides essentially a sensible default behavior for an HTTP cache. Built-in subroutines split -------------------------- It might however not always be practical that the built-in VCL rules take effect at the very end of a state, so some subroutines like ``vcl_recv`` are split into multiple calls to other subroutines. By convention, those assistant subroutines are named after the variable they operate on, like ``req`` or ``beresp``. This allows for instance to circumvent default behavior. For example, ``vcl_recv`` in the built-in VCL prevents caching when clients have a cookie. If you can trust your backend to always specify whether a response is cacheable or not regardless of whether the request contained a cookie you can do this:: sub vcl_req_cookie { return; } With this, all other default behaviors from the built-in ``vcl_recv`` are executed and only cookie handling is affected. Another example is how the built-in ``vcl_backend_response`` treats a negative TTL as a signal not to cache. 
It's a historical mechanism to mark a response as uncacheable, but only if the built-in ``vcl_backend_response`` is not circumvented by a ``return ()``. However, in a multi-tier architecture where a backend might be another Varnish server, you might want to cache stale responses to allow the delivery of graced objects and enable revalidation on the next fetch. This can be done with the following snippet:: sub vcl_beresp_stale { if (beresp.ttl + beresp.grace > 0s) { return; } } This granularity, and the general goal of the built-in subroutines split is to allow to circumvent a specific aspect of the default rules without giving the entire logic up. Built-in VCL reference ---------------------- A copy of the ``builtin.vcl`` file might be provided with your Varnish installation but :ref:`varnishd(1)` is the reference to determine the code that is appended to any loaded VCL. The VCL compilation happens in two passes: - the first one compiles the built-in VCL only, - and the second pass compiles the concatenation of the loaded and built-in VCLs. Any VCL subroutine present in the built-in VCL can be extended, in which case the loaded VCL code will be executed before the built-in code. varnish-7.5.0/doc/sphinx/users-guide/vcl-example-acls.rst000066400000000000000000000012411457605730600234110ustar00rootroot00000000000000.. Copyright (c) 2013-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license ACLs ~~~~ You create a named access control list with the ``acl`` keyword. You can match the IP address of the client against an ACL with the match operator.:: # Who is allowed to purge.... acl local { "localhost"; "192.168.1.0"/24; /* and everyone on the local network */ ! "192.168.1.23"; /* except for the dialin router */ } sub vcl_recv { if (req.method == "PURGE") { if (client.ip ~ local) { return(purge); } else { return(synth(403, "Access denied.")); } } } varnish-7.5.0/doc/sphinx/users-guide/vcl-example-manipulating-headers.rst000066400000000000000000000013341457605730600265730ustar00rootroot00000000000000.. Copyright (c) 2013-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Manipulating request headers in VCL ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lets say we want to remove the cookie for all objects in the `/images` directory of our web server:: sub vcl_recv { if (req.url ~ "^/images") { unset req.http.cookie; } } Now, when the request is handled to the backend server there will be no cookie header. The interesting line is the one with the if-statement. It matches the URL, taken from the request object, and matches it against the regular expression. Note the match operator. If it matches the Cookie: header of the request is unset (deleted). varnish-7.5.0/doc/sphinx/users-guide/vcl-example-manipulating-responses.rst000066400000000000000000000010541457605730600272000ustar00rootroot00000000000000.. Copyright (c) 2013-2017 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Altering the backend response ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Here we override the TTL of a object coming from the backend if it matches certain criteria:: sub vcl_backend_response { if (bereq.url ~ "\.(png|gif|jpg)$") { unset beresp.http.set-cookie; set beresp.ttl = 1h; } } We also remove any Set-Cookie headers in order to avoid creation of a `hit-for-miss` object. See :ref:`vcl_actions`. 
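Conversely, when a response genuinely must not be cached, it is often better
to create the `hit-for-miss` object explicitly so that later requests for
the same URL are not needlessly coalesced. A common sketch (the 120 second
window is an arbitrary choice) looks like this::

    sub vcl_backend_response {
        if (beresp.http.Set-Cookie) {
            set beresp.ttl = 120s;
            set beresp.uncacheable = true;
            return (deliver);
        }
    }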
varnish-7.5.0/doc/sphinx/users-guide/vcl-example-websockets.rst000066400000000000000000000012631457605730600246440ustar00rootroot00000000000000.. Copyright (c) 2013-2017 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Adding WebSockets support ------------------------- WebSockets is a technology for creating a bidirectional stream-based channel over HTTP. To run WebSockets through Varnish you need to pipe the request and copy the Upgrade and Connection headers as follows:: sub vcl_recv { if (req.http.upgrade ~ "(?i)websocket") { return (pipe); } } sub vcl_pipe { if (req.http.upgrade) { set bereq.http.upgrade = req.http.upgrade; set bereq.http.connection = req.http.connection; } } varnish-7.5.0/doc/sphinx/users-guide/vcl-examples.rst000066400000000000000000000006111457605730600226540ustar00rootroot00000000000000.. Copyright (c) 2012-2014 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license VCL Examples ------------ These are a short collection of examples that showcase some of the capabilities of the VCL language. .. toctree:: vcl-example-manipulating-headers vcl-example-manipulating-responses vcl-example-acls vcl-example-websockets varnish-7.5.0/doc/sphinx/users-guide/vcl-grace.rst000066400000000000000000000132351457605730600221250ustar00rootroot00000000000000.. Copyright (c) 2014-2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _users-guide-handling_misbehaving_servers: Grace mode and keep ------------------- Sometimes you want Varnish to serve content that is somewhat stale instead of waiting for a fresh object from the backend. For example, if you run a news site, serving a main page that is a few seconds old is not a problem if this gives your site faster load times. In Varnish this is achieved by using `grace mode`. A related idea is `keep`, which is also explained here. Grace mode ~~~~~~~~~~ When several clients are requesting the same page Varnish will send one request to the backend and place the others on hold while fetching one copy from the backend. In some products this is called request coalescing and Varnish does this automatically. If you are serving thousands of hits per second the queue of waiting requests can get huge. There are two potential problems - one is a thundering herd problem - suddenly releasing a thousand threads to serve content might send the load sky high. Secondly - nobody likes to wait. Setting an object's `grace` to a positive value tells Varnish that it should serve the object to clients for some time after the TTL has expired, while Varnish fetches a new version of the object. The default value is controlled by the runtime parameter ``default_grace``. Keep ~~~~ Setting an object's `keep` tells Varnish that it should keep an object in the cache for some additional time. The reasons to set `keep` is to use the object to construct a conditional GET backend request (with If-Modified-Since: and/or Ìf-None-Match: headers), allowing the backend to reply with a 304 Not Modified response, which may be more efficient on the backend and saves re-transmitting the unchanged body. The values are additive, so if grace is 10 seconds and keep is 1 minute, then objects will survive in cache for 70 seconds after the TTL has expired. 
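To illustrate what `keep` buys you, a revalidation fetch for a kept object
might look roughly like this on the wire (the header values are of course
made up)::

    GET /article/1234.html HTTP/1.1
    Host: example.com
    If-Modified-Since: Wed, 01 May 2024 10:00:00 GMT
    If-None-Match: "etag-from-the-kept-object"

    HTTP/1.1 304 Not Modified

The backend then only confirms that the stored body is still valid instead
of retransmitting it.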
Setting grace and keep ~~~~~~~~~~~~~~~~~~~~~~ We can use VCL to make Varnish keep all objects for 10 minutes beyond their TTL with a grace period of 2 minutes:: sub vcl_backend_response { set beresp.grace = 2m; set beresp.keep = 8m; } The effect of grace and keep ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For most users setting the default grace and/or a suitable grace for each object is enough. The default VCL will do the right thing and behave as described above. However, if you want to customize how Varnish behaves, then you should know some of the details on how this works. When ``sub vcl_recv`` ends with ``return (lookup)`` (which is the default behavior), Varnish will look for a matching object in its cache. Then, if it only found an object whose TTL has run out, Varnish will consider the following: * Is there already an ongoing backend request for the object? * Is the object within the `grace period`? Then, Varnish reacts using the following rules: * If the `grace period` has run out and there is no ongoing backend request, then ``sub vcl_miss`` is called immediately, and the object will be used as a 304 candidate. * If the `grace period` has run out and there is an ongoing backend request, then the request will wait until the backend request finishes. * If there is no backend request for the object, one is scheduled. * Assuming the object will be delivered, ``sub vcl_hit`` is called immediately. Note that the backend fetch happens asynchronously, and the moment the new object is in, it will replace the one we've already got. If you do not define your own ``sub vcl_hit``, then the default one is used. It looks like this:: sub vcl_hit { return (deliver); } Note that the condition ``obj.ttl + obj.grace > 0s`` will (in ``sub vcl_hit``) always evaluate to true. In earlier versions (6.0.0 and earlier), this was not the case, and a test in the builtin VCL was necessary to make sure that "keep objects" (objects in the cache where both TTL and grace had run out) would not be delivered to the clients. In the current version, when there are only "keep objects" available, ``sub vcl_miss`` will be called, and a fetch for a new object will be initiated. Misbehaving servers ~~~~~~~~~~~~~~~~~~~ A key feature of Varnish is its ability to shield you from misbehaving web- and application servers. If you have enabled :ref:`users-guide-advanced_backend_servers-health` you can check if the backend is sick and modify the behavior when it comes to grace. This can be done in the following way:: sub vcl_backend_response { set beresp.grace = 24h; // no keep - the grace should be enough for 304 candidates } sub vcl_recv { if (std.healthy(req.backend_hint)) { // change the behavior for healthy backends: Cap grace to 10s set req.grace = 10s; } } In the example above, the special variable ``req.grace`` is set. The effect is that, when the backend is healthy, objects with grace above 10 seconds will have an `effective` grace of 10 seconds. When the backend is sick, the default VCL kicks in, and the long grace is used. Additionally, you might want to stop cache insertion when a backend fetch returns a ``5xx`` error:: sub vcl_backend_response { if (beresp.status >= 500 && bereq.is_bgfetch) { return (abandon); } } Summary ~~~~~~~ Grace mode allows Varnish to deliver slightly stale content to clients while getting a fresh version from the backend. The result is faster load times at lower cost. It is possible to limit the grace during lookup by setting ``req.grace`` and then change the behavior when it comes to grace. 
Often this is done to change the `effective` grace depending on the health of the backend. varnish-7.5.0/doc/sphinx/users-guide/vcl-hashing.rst000066400000000000000000000035401457605730600224630ustar00rootroot00000000000000.. Copyright (c) 2012-2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Hashing ------- Internally, when Varnish stores content in the cache, it is indexed by a hash key that is used to find the object again. In the default setup this key is calculated based on `URL`, the `Host:` header, or if there is none, the IP address of the server:: sub vcl_hash { hash_data(req.url); if (req.http.host) { hash_data(req.http.host); } else { hash_data(server.ip); } return (lookup); } As you can see it first hashes `req.url` and then `req.http.host` if it exists. It is worth pointing out that Varnish doesn't lowercase the hostname or the URL before hashing it so in theory having "Varnish.org/" and "varnish.org/" would result in different cache entries. Browsers, however, tend to lowercase hostnames. You can change what goes into the hash. This way you can make Varnish serve up different content to different clients based on arbitrary criteria. Let's say you want to serve pages in different languages to your users based on where their IP address is located. You would need some VMOD to get a country code and then put it into the hash. It might look like this. In `vcl_recv`:: set req.http.X-Country-Code = geoip.lookup(client.ip); And then add a `vcl_hash`:: sub vcl_hash { hash_data(req.http.X-Country-Code); } Because there is no `return(lookup)`, the builtin VCL will take care of adding the URL, `Host:` or server IP# to the hash as usual. If `vcl_hash` did return, i.e.:: sub vcl_hash { hash_data(req.http.X-Country-Code); return(lookup); } then *only* the country-code would matter, and Varnish would return seemingly random objects, ignoring the URL (but they would always have the correct `X-Country-Code`). varnish-7.5.0/doc/sphinx/users-guide/vcl-inline-c.rst000066400000000000000000000015651457605730600225450ustar00rootroot00000000000000.. Copyright (c) 2012-2015 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Using inline C to extend Varnish --------------------------------- (Here there be dragons. Big and mean ones.) You can use *inline C* to extend Varnish. Please note that you can seriously mess up Varnish this way. The C code runs within the Varnish Cache process so if your code generates a segfault the cache will crash. One of the first uses of inline C was logging to `syslog`:: # The include statements must be outside the subroutines. C{ #include <syslog.h> }C sub vcl_something { C{ syslog(LOG_INFO, "Something happened at VCL line XX."); }C } To use inline C you need to enable it with the ``vcc_allow_inline_c`` parameter. varnish-7.5.0/doc/sphinx/users-guide/vcl-separate.rst000066400000000000000000000052441457605730600226510ustar00rootroot00000000000000.. Copyright (c) 2016-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _users-guide-separate_VCL: Separate VCL files ================== Having multiple different vhosts in the same Varnish is a very typical use-case, and from Varnish 5.0 it is possible to have separate VCL files for separate vhosts or any other distinct subset of requests. Assume that we want to handle ``varnish.org`` with one VCL file and ``varnish-cache.org`` with another VCL file. 
First load the two VCL files:: vcl.load vo_1 /somewhere/vo.vcl vcl.load vc_1 /somewhere/vc.vcl These are 100% normal VCL files, as they would look if you ran only that single domain on your Varnish instance. Next we need to point VCL labels to them:: vcl.label l_vo vo_1 vcl.label l_vc vc_1 Next we write the top-level VCL program, which branches out to the other two, depending on the Host: header in the request:: import std; # We have to have a backend, even if we do not use it backend default { .host = "127.0.0.1"; } sub vcl_recv { # Normalize host header set req.http.host = std.tolower(req.http.host); if (req.http.host ~ "\.?varnish\.org$") { return (vcl(l_vo)); } if (req.http.host ~ "\.?varnish-cache\.org$") { return (vcl(l_vc)); } return (synth(302, "http://varnish-cache.org")); } sub vcl_synth { if (resp.status == 301 || resp.status == 302) { set resp.http.location = resp.reason; set resp.reason = "Moved"; return (deliver); } } Finally, we load the top level VCL and make it the active VCL:: vcl.load top_1 /somewhere/top.vcl vcl.use top_1 If you want to update one of the separated VCLs, you load the new one and change the label to point to it:: vcl.load vo_2 /somewhere/vo.vcl vcl.label l_vo vo_2 If you want to change the top level VCL, do as you always did:: vcl.load top_2 /somewhere/top.vcl vcl.use top_2 Details, details, details: -------------------------- * All requests *always* start in the active VCL - the one from ``vcl.use`` * Only VCL labels can be used in ``return(vcl(name))``. Without this restriction the top level VCL would have to be reloaded every time one of the separate VCLs were changed. * You can only switch VCLs from the active VCL. If you try it from one of the separate VCLs, you will get a 503 * You cannot remove VCL labels (with ``vcl.discard``) if any VCL contains ``return(vcl(name_of_that_label))`` * You cannot remove VCLs which have a label attached to them. * This code is tested in testcase c00077 * This is a very new feature, it may change * We would very much like feedback how this works for you varnish-7.5.0/doc/sphinx/users-guide/vcl-syntax.rst000066400000000000000000000075611457605730600223770ustar00rootroot00000000000000.. Copyright (c) 2012-2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license VCL Syntax ---------- VCL has inherited a lot from C and it reads much like simple C or Perl. Blocks are delimited by curly brackets, statements end with semicolons, and comments may be written as in C, C++ or Perl according to your own preferences. Note that VCL doesn't contain any loops or jump statements. This section provides an outline of the more important parts of the syntax. For a full documentation of VCL syntax please see :ref:`vcl(7)` in the reference. Strings ~~~~~~~ Basic strings are enclosed in " ... ", and may not contain newlines. Backslash is not special, so for instance in `regsub()` you do not need to do the "count-the-backslashes" polka:: regsub("barf", "(b)(a)(r)(f)", "\4\3\2p") -> "frap" Long strings are enclosed in {" ... "} or """ ... """. They may contain any character including ", newline and other control characters except for the NUL (0x00) character. If you really want NUL characters in a string there is a VMOD that makes it possible to create such strings. .. 
_vcl_syntax_acl: Access control lists (ACLs) ~~~~~~~~~~~~~~~~~~~~~~~~~~~ An ACL declaration creates and initializes a named access control list which can later be used to match client addresses:: acl local { "localhost"; // myself "192.0.2.0"/24; // and everyone on the local network ! "192.0.2.23"; // except for the dialin router } If an ACL entry specifies a host name which Varnish is unable to resolve, it will match any address it is compared to. Consequently, if it is preceded by a negation mark, it will reject any address it is compared to, which may not be what you intended. If the entry is enclosed in parentheses, however, it will simply be ignored. To match an IP address against an ACL, simply use the match operator:: if (client.ip ~ local) { return (pipe); } In Varnish versions before 7.0, ACLs would always emit a `VCL_acl` record in the VSL log; from 7.0 and forward, this must be explicitly enabled by specifying the `+log` flag:: acl local +log { "localhost"; // myself "192.0.2.0"/24; // and everyone on the local network ! "192.0.2.23"; // except for the dialin router } Operators ~~~~~~~~~ The following operators are available in VCL. See the examples further down for, uhm, examples. = Assignment operator. == Comparison. ~ Match. Can either be used with regular expressions or ACLs. ! Negation. && Logical *and* || Logical *or* Built in subroutines ~~~~~~~~~~~~~~~~~~~~ Varnish has quite a few built-in subroutines that are called for each transaction as it flows through Varnish. These built-in subroutines are all named ``vcl_*`` and are explained in :ref:`vcl_steps`. Processing in built-in subroutines ends with ``return ()`` (see :ref:`vcl_actions`). The :ref:`vcl-built-in-code` also contains custom assistant subroutines called by the built-in subroutines, also prefixed with ``vcl_``. Custom subroutines ~~~~~~~~~~~~~~~~~~ You can write your own subroutines, whose names cannot start with ``vcl_``. A subroutine is typically used to group code for legibility or reusability:: sub pipe_if_local { if (client.ip ~ local) { return (pipe); } } To call a subroutine, use the ``call`` keyword followed by the subroutine's name:: call pipe_if_local; Custom subroutines in VCL do not take arguments, nor do they return values. ``return ()`` (see :ref:`vcl_actions`) as shown in the example above returns all the way from the top level built-in subroutine (see :ref:`vcl_steps`) which, possibly through multiple steps, led to the call of the custom subroutine. ``return`` without an action resumes execution after the ``call`` statement of the calling subroutine. varnish-7.5.0/doc/sphinx/users-guide/vcl-variables.rst000066400000000000000000000021571457605730600230150ustar00rootroot00000000000000.. Copyright (c) 2012-2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Request and response VCL objects ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. XXX: refactored headline. benc In VCL, there are several important objects that you need to be aware of. These objects can be accessed and manipulated using VCL. *req* The request object. When Varnish has received the request the `req` object is created and populated. Most of the work you do in `vcl_recv` you do on or with the `req` object. *bereq* The backend request object. Varnish constructs this before sending it to the backend. It is based on the `req` object. .. XXX:in what way? benc *beresp* The backend response object. It contains the headers of the object coming from the backend. 
If you want to modify the response coming from the server you modify this object in `vcl_backend_response`. *resp* The HTTP response right before it is delivered to the client. It is typically modified in `vcl_deliver`. *obj* The object as it is stored in cache. Read only. .. XXX:What object? the current request? benc varnish-7.5.0/doc/sphinx/users-guide/vcl.rst000066400000000000000000000033341457605730600210450ustar00rootroot00000000000000.. Copyright (c) 2012-2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _users_vcl: VCL - Varnish Configuration Language ------------------------------------ This section covers how to tell Varnish how to handle your HTTP traffic, using the Varnish Configuration Language (VCL). Varnish has a great configuration system. Most other systems use configuration directives, where you basically turn on and off lots of switches. We have instead chosen to use a domain specific language called VCL for this. Every inbound request flows through Varnish and you can influence how the request is being handled by altering the VCL code. You can direct certain requests to particular backends, you can alter the requests and the responses or have Varnish take various actions depending on arbitrary properties of the request or the response. This makes Varnish an extremely powerful HTTP processor, not just for caching. Varnish translates VCL into binary code which is then executed when requests arrive. The performance impact of VCL is negligible. The VCL files are organized into subroutines. The different subroutines are executed at different times. One is executed when we get the request, another when files are fetched from the backend server. If you don't call an action in your subroutine and it reaches the end Varnish will execute some built-in VCL code. You will see this VCL code commented out in the file `builtin.vcl` that ships with Varnish Cache. .. _users-guide-vcl_fetch_actions: .. toctree:: :maxdepth: 1 vcl-syntax vcl-built-in-code vcl-variables vcl-backends vcl-hashing vcl-grace vcl-separate vcl-inline-c vcl-examples devicedetection varnish-7.5.0/doc/sphinx/vcl-design-patterns/000077500000000000000000000000001457605730600211615ustar00rootroot00000000000000varnish-7.5.0/doc/sphinx/vcl-design-patterns/index.rst000066400000000000000000000007201457605730600230210ustar00rootroot00000000000000.. Copyright (c) 2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _vcl-design-patterns-index: %%%%%%%%%%%%%%%%%%% VCL Design Patterns %%%%%%%%%%%%%%%%%%% This section showcases design patterns for :ref:`reference-vcl`. To keep code examples short, some aspects not directly related to a given design pattern may be simplified. .. toctree:: :maxdepth: 1 resp-status.rst req-hash_ignore_vary.rst varnish-7.5.0/doc/sphinx/vcl-design-patterns/req-hash_ignore_vary.rst000066400000000000000000000040051457605730600260260ustar00rootroot00000000000000.. Copyright (c) 2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Ignoring the Vary header for bots ================================= Varnish supports HTTP variants out of the box, but the *Vary* header is somewhat limited since it operates on complete header values. If you want for example to conduct an A/B testing campaign or perform blue/green deployment you can make clients "remember" their path with a first-party cookie. 
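The sketch below shows one way such a cookie could be set; it is purely illustrative (the ``ab-test`` cookie name, the ``X-AB-Test`` header and the 50/50 split are assumptions for this example, and ``std.random()`` comes from the std VMOD)::

    import std;

    sub vcl_recv {
        if (req.http.Cookie !~ "ab-test=") {
            # Hypothetical 50/50 assignment for clients without the cookie.
            if (std.random(0.0, 1.0) < 0.5) {
                set req.http.X-AB-Test = "a";
            } else {
                set req.http.X-AB-Test = "b";
            }
        }
    }

    sub vcl_deliver {
        if (req.http.X-AB-Test) {
            # Hand the assignment back to the client as a first-party cookie.
            set resp.http.Set-Cookie =
                "ab-test=" + req.http.X-AB-Test + "; Path=/";
        }
    }
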
When a search engine bot asks for contents however, there's a high chance that they don't process cookies and in all likelihood you would prefer to serve a response quickly. In that case you would probably prefer not to even try to attribute a category to the client, but in that case you create a new variant in your cache that is none of A, B, blue, green, or whatever your backend serves. If the way content is served makes no difference to the bot, because you changed the color of a button or something else orthogonal to the content itself, then you risk a cache miss with the detrimental effects of adding a needless variant to the cache and serving it with extra latency. If latency is paramount, you can use ``req.hash_ignore_vary`` to opt out of the Vary match during the lookup and get the freshest variant. Ignoring how the cookie is set, and assuming the backend always provides an accurate *Cache-Control* even when cookies are present, below is an example of an A/B testing setup where bots are served the freshest variant:: import cookie; include "devicedetect.vcl"; sub vcl_recv { call devicedetect; if (req.http.X-UA-Device ~ "bot") { set req.hash_ignore_vary = true; } } sub vcl_req_cookie { cookie.parse(req.http.Cookie); set req.http.X-AB-Test = cookie.get("ab-test"); return; } sub vcl_deliver { unset resp.http.Vary; } It is also assumed that the backend replies with a ``Vary: X-AB-Test`` header and varies on no other header. varnish-7.5.0/doc/sphinx/vcl-design-patterns/resp-status.rst000066400000000000000000000012201457605730600242000ustar00rootroot00000000000000.. Copyright (c) 2021 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license Using extra digits in resp.status ================================= In Varnish the ``.status`` variables can hold more than three digits, which is useful to send information to ``vcl_synth{}`` about which error message to produce:: sub vcl_recv { if ([...]) { return(synth(12404)); } } sub vcl_synth { if (resp.status == 12404) { [...] // this specific 404 } else if (resp.status % 1000 == 404) { [...] // all other 404's } } varnish-7.5.0/doc/sphinx/vtc-syntax.py000066400000000000000000000054541457605730600177720ustar00rootroot00000000000000#!/usr/bin/env python3 # # Copyright (c) 2006-2016 Varnish Software AS # All rights reserved. # # Author: Guillaume Quintard # # SPDX-License-Identifier: BSD-2-Clause # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # 1. Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE # ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS # OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF # SUCH DAMAGE. # # Process various varnishtest C files and output reStructuredText to be # included in vtc(7). import sys import re def parse_file(fn, cl, tl, sl): p = False section = "" resec = re.compile(r"\s*/?\* SECTION: ") try: # Python3 f = open(fn, "r", encoding="UTF-8") except TypeError: # Python2 f = open(fn, "r") for l in f: if "*/" in l: p = 0 if resec.match(l): a = l.split() section = a[2] sl.append(section) cl[section] = [] if len(a) > 3: tl[section] = re.sub( r"^\s*/?\* SECTION: [^ ]+ +", "", l) else: tl[section] = "" p = 1 elif p: cl[section].append(re.sub(r"^\s*\* ?", "", l)) f.close() if __name__ == "__main__": cl = {} tl = {} sl = [] for fn in sys.argv[1:]: parse_file(fn, cl, tl, sl) sl.sort() for section in sl: print(".. _vtc-" + section + ":\n") print(tl[section], end="") a = section c = section.count(".") if c == 0: r = "-" elif c == 1: r = "~" elif c == 2: r = "." else: r = "*" print(re.sub(r".", r, tl[section]), end="") print("".join(cl[section])) varnish-7.5.0/doc/sphinx/whats-new/000077500000000000000000000000001457605730600172055ustar00rootroot00000000000000varnish-7.5.0/doc/sphinx/whats-new/changes-4.1.rst000066400000000000000000000126561457605730600216610ustar00rootroot00000000000000.. Copyright (c) 2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_changes_4_1: Changes in Varnish 4.1 ====================== Varnish 4.1 is the continuation of the new streaming architecture seen in Varnish 4.0. Proactive security features --------------------------- New in 4.1 is support for different kinds of privilege separation methods, collectively described as jails. On most systems, the Varnish parent process will now drop effective privileges to normal user mode when not doing operations needing special access. The Varnish worker child should now be run as a separate `vcache` user. ``varnishlog``, ``varnishncsa`` and other Varnish shared log utilities now must be run in a context with `varnish` group membership. Warm and cold VCL configurations -------------------------------- Traditionally Varnish have had the concept of active and inactive loaded VCLs. Any loaded VCL lead to state being kept, and a separate set of health checks (if configured) were being run against the backends. To avoid the extra state and backend polling, a loaded VCL is now either warm or cold. Runtime state (incl. backend counters) and health checks are not present for cold VCLs. A warm VCL will automatically be set to cold after `vcl_cooldown` seconds. Output from `vcl.list`:: varnish> vcl.list 200 available auto/warm 0 boot available auto/warm 0 62f5275f-a937-4df9-9fbb-c12336bdfdb8 A single VCL's state can be changed with the `vcl.state` call in ``varnishadm``:: vcl.state Force the state of the specified configuration. State is any of auto, warm or cold values. 
Example:: varnish> vcl.state 62f5275f-a937-4df9-9fbb-c12336bdfdb8 cold 200 varnish> vcl.list 200 available auto/warm 0 boot available auto/cold 0 62f5275f-a937-4df9-9fbb-c12336bdfdb8 VMOD writers should read up on the new vcl_event system to release unnecessary state when a VCL is transitioned to cold (see :ref:`ref-vmod-event-functions`). PROXY protocol support ---------------------- Socket support for PROXY protocol connections has been added. PROXY defines a short preamble on the TCP connection where (usually) a SSL/TLS terminating proxy can signal the real client address. The ``-a`` startup argument syntax has been expanded to allow for this:: $ varnishd -f /etc/varnish/default.vcl -a :6081 -a 127.0.0.1:6086,PROXY Both PROXY1 and PROXY2 protocols are supported on the resulting listening socket. For connections coming in over a PROXY socket, ``client.ip`` and ``server.ip`` will contain the addresses given to Varnish in the PROXY header/preamble (the "real" IPs). The new VCL variables ``remote.ip`` and ``local.ip`` contains the local TCP connection endpoints. On non-PROXY connections these will be identical to ``client.ip`` and ``server.ip``. An expected pattern following this is `if (std.port(local.ip) == 80) { }` in ``vcl_recv`` to see if traffic came in over the HTTP listening socket (so a client redirect to HTTPS can be served). VMOD backends ------------- Before Varnish 4.1, backends could only be declared in native VCL. Varnish 4.0 moved directors from VCL to VMODs, and VMODs can now also create backends. It is possible to both create the same backends than VCL but dynamically, or create backends that don't necessarily speak HTTP/1 over TCP to fetch resources. More details in the :ref:`ref-writing-a-director` documentation. Backend connection timeout -------------------------- Backend connections will now be closed by Varnish after `backend_idle_timeout` seconds of inactivity. Previously they were kept around forever and the backend servers would close the connection without Varnish noticing it. On the next traffic spike needing these extra backend connections, the request would fail, perhaps multiple times, before a working backend connection was found/created. Protocol support ---------------- Support for HTTP/0.9 on the client side has been retired. More modules available ---------------------- Varnish has an ecosystem for third-party modules (vmods). New since the last release, these are worth knowing about: libvmod-saintmode: Saint mode ("inferred health probes from traffic") was taken out of Varnish core in 4.0, and is now back as a separate vmod. This is useful for detecting failing backends before the health probes pick it up. libvmod-xkey: Secondary hash keys for cache objects, based on the hashtwo vmod written by Varnish Software. Allows for arbitrary grouping of objects to be purged in one go, avoiding use of ban invalidation. Also known as Cache Keys or Surrogate Key support. libvmod-rtstatus: Real time statistics dashboard. Passing data between ESI requests --------------------------------- A new `req_top` identifier is available in VCL, which is a reference to `req` in the top-level ESI request. This is useful to pass data back and forth between the main ESI request and any ESI sub-requests it leads to. Other noteworthy small changes ------------------------------ * Varnish will now use the ``stale-while-revalidate`` defined in RFC5861 to set object grace time. * -smalloc storage is now recommended over -sfile on Linux systems. 
* New VCL variable ``beresp.was_304`` has been introduced in ``vcl_backend_response``. Will be set to ``true`` if the response from the backend was a positive result of a conditional fetch (``304 Not Modified``). varnish-7.5.0/doc/sphinx/whats-new/changes-5.0.rst000066400000000000000000000231741457605730600216560ustar00rootroot00000000000000.. Copyright (c) 2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_changes_5.0: Changes in Varnish 5.0 ====================== Varnish 5.0 changes some (mostly) internal APIs and adds some major new features over Varnish 4.1. Separate VCL files and VCL labels ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Varnish 5.0 supports jumping from the active VCL's ``vcl_recv{}`` to another VCL via a VCL label. The major use of this will probably be to have a separate VCL for each domain/vhost, in order to untangle complex VCL files, but it is not limited to this criteria, it would also be possible to send all POSTs, all JPEG images or all traffic from a certain IP range to a separate VCL file. VCL labels can also be used to give symbolic names to loaded VCL configurations, so that operations personnel only need to know about "normal", "weekend" and "emergency", and web developers can update these as usual, without having to tell ops what the new weekend VCL is called. Very Experimental HTTP/2 support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ We are in the process of adding HTTP/2 support to Varnish, but the code is very green still - life happened. But you can actually get a bit of traffic though it already, and we hope to have it production ready for the next major release (2017-03-15). Varnish supports HTTP/1 -> 2 upgrade. For political reasons, no browsers support that, but tools like curl does. For encrypted HTTP/2 traffic, put a SSL proxy in front of Varnish. HTTP/2 support is disabled by default, to enable, set the ``http2`` feature bit. The Shard Director ~~~~~~~~~~~~~~~~~~ We have added to the directors VMOD an overhauled version of a director which was available as an out-of-tree VMOD under the name VSLP for a couple of years: It's basically a better hash director, which uses consistent hashing to provide improved stability of backend node selection when the configuration and/or health state of backends changes. There are several options to provide the shard key. The rampup feature allows to take just-gone-healthy backends in production smoothly, while the prewarm feature allows to prepare backends for traffic which they would see if the primary backend for a certain key went down. It can be reconfigured dynamically (outside ``vcl_init{}``), but different to our other directors, configuration is transactional: Any series of backend changes must be concluded by a reconfigure call for activation. Hit-For-Pass is now actually Hit-For-Miss ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Almost since the beginning of time (2008), varnish has hit-for-pass: It is basically a negative caching feature, putting into the cache objects as markers saying "when you hit this, your request should be a pass". The purpose is to selectively avoid the request coalescing (waitinglist) feature, which is useful for cacheable content, but not for uncacheable objects. If we did not have hit-for-pass, without additional configuration in vcl_recv, requests to uncacheable content would be sent to the backend serialized (one after the other). 
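For context, such a marker is typically created in ``vcl_backend_response`` along these lines (a sketch of the common pattern; the builtin VCL does something similar when it sees a zero TTL or a ``Set-Cookie`` header, and the 120 second marker lifetime is an arbitrary choice)::

    sub vcl_backend_response {
        if (beresp.ttl <= 0s || beresp.http.Set-Cookie) {
            # Remember for a while that this response was uncacheable,
            # so subsequent requests skip request coalescing.
            set beresp.uncacheable = true;
            set beresp.ttl = 120s;
            return (deliver);
        }
    }
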
As useful as this feature is, it has caused a lot of headaches to varnish administrators along the lines of "why the *beep* doesn't Varnish cache this": A hit-for-pass object stayed in cache for however long its ttl dictated and prevented caching whenever it got hit ("for that url" in most cases). In particular, as a pass object can not be turned into something cacheable retrospectively (``beresp.uncacheable`` can be changed from ``false`` to ``true``, but not the other way around), even responses which would have been cacheable were not cached. So, when a hit-for-pass object got into cache unintentionally, it had to be removed explicitly (using a ban or purge). We've changed this now: A hit-for-pass object (we still call it like this in the docs, logging and statistics) will now cause a cache-miss for all subsequent requests, so if any backend response qualifies for caching, it will get cached and subsequent requests will be hits. In short: We've changed from "the uncacheable case wins" to "the cacheable case wins" or from hit-for-pass to hit-for-miss. The primary consequence which we are aware of at the time of this release is caused be the fact that, to create cacheable objects, we need to make backend requests unconditional (that is, remove the ``If-Modified-Since`` and ``If-None-Match headers``): For conditional client requests on hit-for-pass objects, Varnish will now issue an unconditional backend fetch and, for 200 responses, send a 304 or 200 response to the client as appropriate. As of the time of this release we cannot say if this will remain the final word on this topic, but we hope that it will mean an improvement for most users of Varnish. Ban Lurker Improvements ~~~~~~~~~~~~~~~~~~~~~~~ We have made the ban lurker even more efficient by example of some real live situations with tens of thousands of bans using inefficient regular expressions. The new parameter ``ban_lurker_holdoff`` tells the ban lurker for how long it should get out of the way when it could potentially slow down lookups due to lock contention. Previously this was the same as ``ban_lurker_sleep``. .. _whatsnew_changes_5.0_reqbody: Request Body sent always / "cacheable POST" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Previously, we would only send a request body for passed requests (and for pipe mode, but this is special anyway and should be avoided). Not so any more, but the default behaviour has not changed: Whenever a request has a body, it will get sent to the backend for a cache miss (and pass, as before). This can be prevented by an ``unset bereq.body`` and the ``builtin.vcl`` removes the body for GET requests because it is questionable if GET with a body is valid anyway (but some applications use it). So the often-requested ability to cache POST/PATCH/... is now available, but not out-of-the-box: * The ``builtin.vcl`` still contains a ``return(pass)`` for anything but a GET or HEAD because other HTTP methods, by definition, may cause state changes / side effects on backends. The application at hand should be understood well before caching of non-GET/non-HEAD is considered. * For misses, core code still calls the equivalent of ``set bereq.method = "GET"`` before calling ``vcl_backend_fetch``, so to make a backend request with the original request method, it needs to be saved in ``vcl_recv`` and restored in ``vcl_backend_fetch``. * Care should be taken to choose an appropriate cache key and/or Vary criteria. 
Adding the request body to the cache key is not possible with core varnish, but through a VMOD https://github.com/aondio/libvmod-bodyaccess To summarize: You should know what you are doing when caching anything but a GET or HEAD and without creating an appropriate cache key doing so is almost guaranteed to be wrong. ESI and Backend Request Coalescing ("waitinglist") Improvement ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Previously, ESI subrequests depending on objects being fetched from the backed used polling, which typically added some ~5ms of processing time to such subrequests and could lead to starvation effects in extreme corner cases. The waitinglist logic for ESI subrequests now uses condition variables to trigger immediate continuation of ESI processing when an object being waited for becomes available. Backend PROXY protocol requests ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Are now supported through the ``.proxy_header`` attribute of the backend definition. Default VCL search path ~~~~~~~~~~~~~~~~~~~~~~~ For default builds, vcl files are now also being looked for under ``/usr/share/varnish/vcl`` if not found in ``/etc/varnish``. For custom builds, the actual search path is ``${varnishconfdir}:${datarootdir}/varnish/vcl`` devicedetect.vcl ~~~~~~~~~~~~~~~~ The basic device detection vcl is now bundled with varnish. varnishtest ~~~~~~~~~~~ * ``resp.msg`` renamed to ``resp.reason`` for consistency with vcl * HTTP2 testing capabilities added * default search path for executables and vmods added * ``sema`` mechanism replaced by ``barrier`` * support for PROXY requests misc ~~~~ Brief notes on other changes * Added separate thread for object expiry * The ESI parser is now more tolerant to some syntactic corner cases * Reduced needless rushing of requests on the waitinglist * ``varnishhist`` can now process backend requests and offers a timebend function to control the processing speed * ``std.integer()`` can now also parse real numbers and truncates them * ``std.log()`` now also works correctly during ``vcl_init{}`` * further improved stability when handling workspace overflows * numerous vcl compiler improvements News for VMOD authors ~~~~~~~~~~~~~~~~~~~~~ * It is now mandatory to have a description in the ``$Module`` line of a ``vcc`` file. * vcl cli events (in particular, ``vcl_init{}`` /``vcl_fini{}``) now have a workspace and ``PRIV_TASK`` available for VMODs. * ``PRIV_*`` now also work for object methods with unchanged scope. In particular, they are per VMOD and `not` per object - e.g. the same ``PRIV_TASK`` gets passed to object methods as to functions during a VCL task. * varnish now provides a random number api, see vrnd.h * vbm (variable size bitmaps) improved * ``vmodtool.py`` for translating vcc files has been largely rewritten, there may still exist regressions which remained unnoticed * ``vmodtool.py`` now requires at least Python 2.6 * New autoconf macros are available, they should greatly simplify build systems of out-of-tree VMODs. They are implemented and documented in ``varnish.m4``, and the previous macros now live in ``varnish-legacy.m4`` so existing VMODs should still build fine. varnish-7.5.0/doc/sphinx/whats-new/changes-5.1.rst000066400000000000000000000354531457605730600216620ustar00rootroot00000000000000.. Copyright (c) 2017 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. 
_whatsnew_changes_5.1: Changes in Varnish 5.1 ====================== We have a couple of new and interesting features in Varnish 5.1, and we have a lot of smaller improvements and bugfixes all over the place, in total we have made about 750 commits since Varnish 5.0, so this is just some of the highlights. Probably the biggest change in Varnish 5.1 is that a couple of very significant contributors to Varnish have changed jobs, and therefore stopped being active contributors to the Varnish Project. Per Buer was one of the first people who realized that Varnish was not just "Some program for a couple of Nordic newspapers", and he started the company Varnish Software, which is one of the major sponsors of the Varnish Project. Lasse Karstensen got roped into Varnish Software by Per, and in addition to his other duties, he has taken care of the projects system administration and release engineering for most of the 11 years we have been around now. Per & Lasse: "Thanks" doesn't even start to cover it, and we wish you all the best for the future! .. _whatsnew_clifile: Startup CLI command file ~~~~~~~~~~~~~~~~~~~~~~~~ The new '-I cli_file' option to varnishd will make it much more practical to use the VCL labels introduced in Varnish 5.0. The cli commands in the file will be executed before the worker process starts, so it could for instance contain:: vcl.load panic /etc/varnish_panic.vcl vcl.load siteA0 /etc/varnish_siteA.vcl vcl.load siteB0 /etc/varnish_siteB.vcl vcl.load siteC0 /etc/varnish_siteC.vcl vcl.label siteA siteA0 vcl.label siteB siteB0 vcl.label siteC siteC0 vcl.load main /etc/varnish_main.vcl vcl.use main If a command in the file is prefixed with '-', failure will not abort the startup. Related to this change we have reordered the argument checking so that argument problems are reported more consistently. In case you didn't hear about them yet, labelling VCL programs allows you to branch out to other VCLs in the main::vcl_recv{}, which in the above example could look like:: sub vcl_recv { if (req.http.host ~ "asite.example.com$") { return(vcl(siteA)); } if (req.http.host ~ "bsite.example.com$") { return(vcl(siteB)); } if (req.http.host ~ "csite.example.com$") { return(vcl(siteC)); } // Main site processing ... } Universal VCL return(fail) ~~~~~~~~~~~~~~~~~~~~~~~~~~ It is now possible to ``return(fail)`` anywhere in VCL, including inside VMODs. This will cause VCL processing to terminate forthright. In addition to ``return(fail)``, this mechanism will be used to handle all failure conditions without a safe fallback, for instance workspace exhaustion, too many headers etc. (This is a work in progress, there is a lot of code to review before we are done.) In ``vcl_init{}`` failing causes the ``vcl.load`` to fail, this is nothing new for this sub-routine. A failure in any of the client side VCL methods (``vcl_recv{}``, ``vcl_hash{}`` ...) *except* ``vcl_synth{}``, sends the request to ``vcl_synth{}`` with a 503, and reason "VCL failed". A failure on the backend side (``vcl_backend_*{}``) causes the fetch to fail. (VMOD writers should use the new ``VRT_fail(ctx, format_string, ...)`` function which logs a SLT_VCL_Error record.) Progress on HTTP/2 support ~~~~~~~~~~~~~~~~~~~~~~~~~~ HTTP/2 support is better than in 5.0, and is now enabled and survives pretty well on our own varnish-cache.org website, but there are still things missing, most notably windows and priority, which may be fatal to more complex websites. 
We expect HTTP/2 support to be production ready in the autumn 2017 release of Varnish-Cache, but that requires a testing and feedback from real-world applications. So if you have a chance to test our HTTP/2 code, by all means do so, please report any crashes, bugs or other trouble back to us. To enable HTTP/2 you need to ``param.set feature +http2`` but due to internet-politics, you will only see HTTP/2 traffic if you have an SSL proxy in front of Varnish which advertises HTTP2 with ALPN. For the hitch SSL proxy, add the argument ``--alpn-protos="h2,http/1.1"`` .. _whatsnew_changes_5.1_hitpass: Hit-For-Pass has returned ~~~~~~~~~~~~~~~~~~~~~~~~~ As hinted in :ref:`whatsnew_changes_5.0`, we have restored the possibility of invoking the old hit-for-pass feature in VCL. The treatment of uncacheable content that was new in version 5.0, which we have taken to calling "hit-for-miss", remains the default. Now you can choose hit-for-pass with ``return(pass(DURATION))`` from ``vcl_backend_response``, setting the duration of the hit-for-pass state in the argument to ``pass``. For example: ``return(pass(120s))``. To recap: when ``beresp.uncacheable`` is set to ``true`` in ``vcl_backend_response``, Varnish makes a note of it with a minimal object in the cache, and finds that information again on the next lookup for the same object. In essence, the cache is used to remember that the last backend response was not cacheable. In that case, Varnish proceeds as with a cache miss, so that the response may become cacheable on subsequent requests. The difference is that Varnish does not perform request coalescing, as it does for ordinary misses, when a response has been marked uncacheable. For ordinary misses, when there are requests pending for the same object at the same time, only one fetch is executed at a time, since the response may be cached, in which case the cached response may be used for the remaining requests. But this is not done for "hit-for-miss" objects, since they are known to have been uncacheable on the previous fetch. ``builtin.vcl`` sets ``beresp.uncacheable`` to ``true`` when a number of conditions hold for a backend response that indicate that it should not be cached, for example if the TTL has been determined to be 0 (perhaps due to a ``Cache-Control`` header), or if a ``Set-Cookie`` header is present in the response. So hit-for-miss is the default for uncacheable backend responses. A consequence of this is that fetches for uncacheable responses cannot be conditional in the default case. That is, the backend request may not include the headers ``If-Modified-Since`` or ``If-None-Match``, which might cause the backend to return status "304 Not Modified" with no response body. Since the response to a cache miss might be cached, there has to be a body to cache, and this is true of hit-for-miss as well. If either of those two headers were present in the client request, they are removed from the backend request for a miss or hit-for-miss. Since conditional backend requests and the 304 response may be critical to performance for non-cacheable content, especially if the response body is large, we have made the old hit-for-pass feature available again, with ``return(pass(DURATION))`` in VCL. As with hit-for-miss, Varnish uses the cache to make a note of hit-for-pass objects, and finds them again on subsequent lookups. 
The requests are then processed as for ordinary passes (``return(pass)`` from ``vcl_recv``) -- there is no request coalescing, and the response will not be cached, even if it might have been otherwise. ``If-Modified-Since`` or ``If-None-Match`` headers in the client request are passed along in the backend request, and a backend response with status 304 and no body is passed back to the client. The hit-for-pass state of an object lasts for the time given as the DURATION in the previous return from ``vcl_backend_response``. After the "hit-for-pass TTL" elapses, the next request will be an ordinary miss. So a hit-for-pass object cannot become cacheable again until that much time has passed. 304 Not Modified responses after a pass ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Related to the previous topic, there has been a change in the way Varnish handles a very specific case: deciding whether to send a "304 Not Modified" response to the client after a pass, when the backend had the opportunity to send a 304 response, but chose not to by sending a 200 response status instead. Previously, Varnish went along with the backend when this happened, sending the 200 response together with the response body to the client. This was the case even if the backend set the response headers ``ETag`` and/or ``Last-Modified`` so that, when compared to the request headers ``If-None-Match`` and ``If-Modified-Since``, a 304 response would seem to be warranted. Since those headers are passed back to the client, the result could appear a bit odd from the client's perspective -- the client used the request headers to ask if the response was unmodified, and the response headers seem to indicate that it wasn't, and yet the response status suggests that it was. Now the decision to send a 304 client response status is made solely at delivery time, based on the contents of the client request headers and the headers in the response that Varnish is preparing to send, regardless of whether the backend fetch was a pass. So Varnish may send a 304 client response after a pass, even though the backend chose not to, having seen the same request headers (if the response headers permit it). We made this change for consistency -- for hits, misses, hit-for-miss, hit-for-pass, and now pass, the decision to send a 304 client response is based solely on the contents of client request headers and the response headers. You can restore the previous behavior -- don't send a 304 client response on pass if the backend didn't -- with VCL means, either by removing the ``ETag`` or ``Last-Modified`` headers in ``vcl_backend_response``, or by removing the If-* client request headers in ``vcl_pass``. VXID in VSL queries ~~~~~~~~~~~~~~~~~~~ The Varnish Shared Log (VSL) became much more powerful starting Varnish 4.0 and hasn't changed much since. Changes usually consist in adding new log records when new feature are introduced, or when we realize that some missing piece of information could really help troubleshooting. Varnish UTilities (VUT) relying on the VSL usually share the same ``-q`` option for querying, which allows to filter transactions based on log records. For example you could be looking for figures on a specific domain:: varnishtop -i ReqURL -q 'ReqHeader:Host eq www.example.com' While options like ``-i`` and ``-q`` were until now both limited to log records, it also meant you could only query a specific transaction using the ``X-Varnish`` header. 
Depending on the nature of the transaction (client or backend side) the syntax is not the same and you can't match a session. For instance, we are looking for the transaction 1234 that occurred very recently and we would like to collect everything from the same session. We have two options:: # client side varnishlog -d -g session -q 'RespHeader:X-Varnish[1] == 1234' # backend side varnishlog -d -g session -q 'BereqHeader:X-Varnish == 1234' There was no simple way to match any transaction using its id until the introduction of ``vxid`` as a possible left-hand side of a ``-q`` query expression:: # client side varnishlog -d -g session -q 'vxid == 1234' # backend side varnishlog -d -g session -q 'vxid == 1234' # session varnishlog -d -g session -q 'vxid == 1234' Another use case is the collection of non-transactional logs. With raw grouping the output is organized differently and each record starts with its transaction id or zero for non-transactional logs:: # before 5.1 varnishlog -g raw | awk '$1 == 0' # from now on varnishlog -g raw -q 'vxid == 0' This should offer you a more concise, and more consistent means to filter transactions with ``varnishlog`` and other VUTs. .. _whatsnew_changes_5.1_vtest: Project tool improvements ~~~~~~~~~~~~~~~~~~~~~~~~~ We have spent a fair amount of time on the tools we use internally in the project. The ``varnishtest`` program has been improved in many small ways, in particular it is now much easier to execute and examine results from other programs with the ``shell`` and ``process`` commands. It might break existing test cases if you were already using ``varnishtest``. The project now has *KISS* web-backend which summarizes ``make distcheck`` results from various platforms: http://varnish-cache.org/vtest/ If you want Varnish to be tested on a platform not already covered, all you need to do is run the tools/vtest.sh script from the source tree. We would love to see more platforms covered (arm64, ppc, mips) and OS/X would also be nice. We also publish our code-coverage status now: http://varnish-cache.org/gcov/ Our goal is 90+% coverage, but we need to start implementing terminal emulation in ``varnishtest`` before we can test the curses(1) based programs (top/stat/hist) comprehensively, so they currently drag us down. News for authors of VMODs and Varnish API client applications ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * The VRT version has been bumped to 6.0, since there have been some changes and additions to the ABI. See ``vrt.h`` for an overview. * In particular, there have been some changes to the ``WS_*`` interface for accessing workspaces. We are working towards fully encapsulating workspaces with the ``WS_*`` family of functions, so that it should not be necessary to access the internals of a ``struct ws``, which may be revised in a future release. There are no revisions at present, so your code won't break if you're working with the innards of a ``struct ws`` now, but you would be prudent to replace that code with ``WS_*`` calls some time before the next release. And please let us know if there's something you need to do that the workspace interface won't handle. * ``libvarnishapi.so`` now exports more symbols from Varnish internal libraries: * All of the ``VTIM_*`` functions -- getting clock times, formatting and parsing date & time formats, sleeping and so forth. * All of the ``VSB_*`` functions for working with safe string buffers. 
* ``varnish.m4`` and ``varnishapi.pc`` now expose more information about the Varnish installation. See "Since 5.1.0" comments for a comprehensive list of what was added. * VMOD version coexistence improvements: In difference from executable files, shared libraries are not protected against overwriting under UNIX, and this has generally caused grief when VMODs were updated by package management tools. We have decided to bite the bullet, and now the Varnishd management process makes a copy of the VMOD shared library to a version-unique name inside the workdir, from which the running VCL access it. This ensures that Varnishd can always restart the worker process, no matter what happened to the original VMOD file. It also means that VMODs maintaining state spanning VCL reloads might break. It is still possible to maintain global state in a VMOD despite VMOD caching: one solution is to move the global state into separate shared library that won't be cached by Varnish. *EOF* varnish-7.5.0/doc/sphinx/whats-new/changes-5.2.rst000066400000000000000000000143551457605730600216610ustar00rootroot00000000000000.. Copyright (c) 2017 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_changes_5.2: Changes in Varnish 5.2 ====================== Varnish 5.2 is mostly changes under the hood so most varnish installations will be able to upgrade with no modifications. .. _whatsnew_new_vmods: New VMODs in the standard distribution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ We have added three new VMODs to the varnish project. VMOD blob --------- We have added the variables ``req.hash`` and ``bereq.hash`` to VCL, which contain the hash value computed by Varnish for the current request, for use in cache lookup. Their data type is BLOB, which represents opaque data of any length -- the new variables contain the raw binary hashes. This is the first time that an element of standard VCL has the BLOB type (BLOBs have only been used in third-party VMODs until now). So we have added VMOD blob to facilitate their use. In particular, the VMOD implements binary-to-text encodings, for example so that you can assign the hash to a header as a base64 or hex string. It also provides some other utilities such as getting the length of a BLOB or testing BLOBs for equality. See :ref:`vmod_blob(3)`. VMOD purge ---------- Before the introduction of ``vcl 4.0`` there used to be a ``purge`` function instead of a ``return(purge)`` transition. This module works like old-style VCL purges (which should be used from both ``vcl_hit`` and ``vcl_miss``) and provides more capabilities than regular purges, and lets you know how many objects were affected. See :ref:`vmod_purge(3)`. VMOD vtc -------- As long as we have had VMODs, we had an internal vmod called ``vmod_debug`` which was used with ``varnishtest`` to exercise the VMOD related parts of ``varnishd``. Over time this vmod grew other useful functions for writing test-cases. We only distribute ``vmod_debug`` in source releases, because it has some pretty evil functionality, for instance ``debug.panic()``. We have taken the non-suicidal test-writing goodies out of ``vmod_debug`` and put them into a new ``vmod_vtc``, to make them available to people using ``varnishtest`` to test local configurations, VMODs etc. The hottest trick in ``vmod_vtc`` is that VTC-barriers can be accessed from the VCL code, but there are other conveniences like workspace manipulations etc. See :ref:`vmod_vtc(3)`. 
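To give a flavour of how the purge module described above is meant to be wired in, here is a sketch modelled on its manual page (see :ref:`vmod_purge(3)` for the authoritative interface; the ``purgers`` ACL is an assumption for this example)::

    import purge;

    acl purgers { "127.0.0.1"; }

    sub vcl_recv {
        if (req.method == "PURGE") {
            if (client.ip !~ purgers) {
                return (synth(405, "Not allowed"));
            }
            # Go through normal lookup so we end up in vcl_hit or vcl_miss.
            return (hash);
        }
    }

    sub my_purge {
        set req.http.purged = purge.hard();
        return (synth(200, "Purged " + req.http.purged + " objects"));
    }

    sub vcl_hit {
        if (req.method == "PURGE") {
            call my_purge;
        }
    }

    sub vcl_miss {
        if (req.method == "PURGE") {
            call my_purge;
        }
    }
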
News for authors of VMODs and Varnish API client applications ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. _whatsnew_abi: $ABI [strict|vrt] ----------------- VMOD authors have the option of only integrating with the blessed interface provided by ``varnishd`` or go deeper in the stack. As a general rule of thumb you are considered "on your own" if your VMOD uses more than the VRT (Varnish RunTime) and it is supposed to be built for the exact Varnish version. Varnish was already capable of checking the major/minor VRT version a VMOD was built against, or require the exact version, but picking one or the other depended on how Varnish was built. VMOD authors can now specify whether a module complies to the VRT and only needs to be rebuilt when breaking changes are introduced by adding ``$ABI vrt`` to their VCC descriptor. The default value is ``$ABI strict`` when omitted. .. _whatsnew_vsm_vsc_5.2: VSM/VSC API changes ------------------- The export of statistics counters via shared memory has been overhauled to get rid of limitations which made sense 11 years ago but not so much now. A set of statistics counters are now fully defined in a ``.vsc`` file which is processed by the ``vsctool.py`` script into a .c and .h file, which is compiled into the relevant body of code. This means that statistics counters are now self-describing in shared memory, and ``varnishstat`` or other VSC-API using programs no longer have a compiled in list of which counters exist or how to handle them. This paves the way for VMODs or maybe even VCL to define custom counters, and have them show up in varnishstat and other VSC-API based programs just like the rest of the counters. The rewrite of the VSM/VSC code simplified both APIs and made them much more robust but code calling into these APIs will have to be updated to match. The necessary changes mostly center around detecting if the varnishd management/worker process has restarted. In the new VSM-API once setup is done, VSM_Attach() latches on to a running varnishd master process and stays there. VSM_Status() updates the in-memory list of VSM segments, and returns status information about the master and worker processes: Are they running? Have they been restarted? Have VSM segments been added/deleted? Each VSM segment is now a separate piece of shared memory and the name of the segment can be much longer. Before the actual shared memory can be accessed, the application must call VSM_Map() and when VSM_StillValid() indicates that the segment is no longer valid, VSM_Unmap() should be called to release the segment again. All in all, this should be simpler and more robust. .. _whatsnew_vrt_5.2: VRT API changes --------------- ``VRT_purge`` now fails a transaction instead of panicking when used outside of ``vcl_hit`` or ``vcl_miss``. It also returns the number of purged objects. .. _whatsnew_vut_5.2: Added VUT API ------------- One way to extend Varnish is to write VSM clients, programs that tap into the Varnish Shared Memory (VSM) usually via ``libvarnishapi`` or community bindings for other languages than C. Varnish already ships with VUTs (Varnish UTilities) that either process the Varnish Shared Log (VSL) like ``varnishlog`` or ``varnishncsa`` or the Varnish Shared Counters (VSC) like ``varnishstat``. Most of the setup for these programs is similar, and so they shared an API that is now available outside of the Varnish source tree. The VUT API has been cleaned up to remove assumptions made for our utilities. 
It hides most of the complexity and redundancy of setting up a log processor and helps you focus on your functionality. If you use autotools for building, a new macro in ``varnish.m4`` removes some of the boilerplate to generate part of the documentation. We hope that we will see new tools that take advantage of this API to extend Varnish in new ways, much like VMODs made it easy to add new functionality to VCL. *eof* varnish-7.5.0/doc/sphinx/whats-new/changes-6.0.rst000066400000000000000000000123541457605730600216550ustar00rootroot00000000000000.. Copyright (c) 2018 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_changes_6.0: Changes in Varnish 6.0 ====================== Usually when we do dot-zero releases in Varnish, it means that users are in for a bit of work to upgrade to the new version, but 6.0 is actually not that scary, because most of the changes are either under the hood or entirely new features. The biggest user-visible change is probably that we, or to be totally honest here: Geoff Simmons (UPLEX), have added support for Unix Domain Sockets, both :ref:`for clients ` and for :ref:`backend servers `. Because UNIX Domain Sockets have nothing like IP numbers, we were forced to define a new level of the VCL language ``vcl 4.1`` to cope with UDS. Both ``vcl 4.0`` and ``vcl 4.1`` are supported, and it is the primary source-file which controls which it will be, and you can ``include`` lower versions, but not higher versions than that. Some old variables are not available in 4.1 and some new variables are not available in 4.0. Please see :ref:`vcl_variables` for specifics. There are a few other changes to the ``vcl 4.0``, most notably that we now consider upper- and lower-case the same for symbols. The HTTP/2 code has received a lot of attention from Dag Haavi Finstad (Varnish Software) and it holds up in production on several large sites now. There are new and improved VMODs: * :ref:`vmod_directors(3)` -- Much work on the ``shard`` director * :ref:`vmod_proxy(3)` -- Proxy protocol information * :ref:`vmod_unix(3)` -- Unix Domain Socket information * :ref:`vmod_vtc(3)` -- Utility functions for writing :ref:`varnishtest(1)` cases. The ``umem`` stevedore has been brought back on Solaris and it is the default storage method there now. More error situations now get vcl ``failure`` handling, this should make life simpler for everybody we hope. And it goes without saying that we have fixed a lot of bugs too. Under the hood (mostly for developers) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The big thing is that the ``$Abi [vrt|strict]`` should now have settled. We have removed all the stuff from ```` which is not available under ``$Abi vrt``, and this hopefully means that VMODS will work without recompilation on several subsequent varnish versions. (There are some stuff related to packaging which takes advantage of this, but it didn't get into this release.) VMODS can define their own stats counters now, and they work just like builtin counters, because there is no difference. The counters are described in a ``.vsc`` file which is processed with a new python script which does a lot of magic etc. There is a tiny example in ``vmod_debug`` in the source tree. If you're using autotools, a new ``VARNISH_COUNTERS`` macro helps you set everything up, and is documented in ``varnish.m4``. This took a major retooling of the stats counters in general, and the VSM, VSC and VSL apis have all subtly or not so subtly changed as a result. 
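Returning to the Unix Domain Socket support described earlier in these notes,
here is a minimal, hedged sketch (the socket paths are invented for the
example). A UDS listen endpoint is given to ``-a`` as an absolute path, and a
UDS backend requires ``vcl 4.1`` and the ``.path`` attribute::

    varnishd -a /var/run/varnish.sock -f /etc/varnish/default.vcl

with, on the VCL side::

    vcl 4.1;

    backend app {
        # connect to the application over a UNIX domain socket
        .path = "/var/run/app.sock";
    }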
VMOD functions can take optional arguments; these are different from
defaulted arguments in that a separate flag tells if they were specified or
not in the call. For the sake of everybody's sanity, all the arguments get
wrapped in a function-specific structure when this is used.

The ``vmodtool.py`` script has learned other new tricks, and as a result also
produces nicer ``.rst`` output.

VCL types ``INT`` and ``BYTES`` are now 64bits on all platforms.

VCL ENUMs have gotten a new implementation, so the pointers are now constant
and can be compared as such, rather than with ``strcmp(3)``.

We have a new type of ``binary`` VSL records which are hexdumped by default,
but on the API side, rather than in ``varnishd``. This saves both VSL
bandwidth and processing power, as they are usually only used for deep
debugging and mostly turned off.

The ``VCC`` compiler has received a lot of work in two areas: The symbol
table has been totally revamped to make it ready for variant symbols,
presently symbols which are different in ``vcl 4.0`` and ``vcl 4.1``. The
"prototype" information in the VMOD shared library has been changed to JSON
(look in your vcc_if.c file if you don't believe me), and this can express
more detailed information, presently the optional arguments.

The stuff only we care about
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Varnishtest's ``process`` has grown ``pty(4)`` support, so that we can test
curses-based programs like our own utilities. This has (finally!) pushed our
code coverage, across all the source code in the project, up to 90%.

We have also decided to make our python scripts PEP8 compliant, and
``vmodtool.py`` is already there.

The VCL variables are now defined in the ``.rst`` file, rather than the other
way around; this makes the documentation better at the cost of minor
python-script complexity.

We now produce weekly snapshots from ``-trunk``, which makes it easier for
people to test all the new stuff.

We have not quite gotten the half-yearly release-procedure under control.
I'm writing this the evening before the release, trying to squeeze out of my
brain what I should have written here a long time ago, and we have had far
more commits this last week than is reasonable. But we *have* gotten better
at it. Really!

*eof* varnish-7.5.0/doc/sphinx/whats-new/changes-6.1.rst

.. Copyright (c) 2018 Varnish Software AS
   SPDX-License-Identifier: BSD-2-Clause
   See LICENSE file for full text of license

.. _whatsnew_changes_6.1:

Changes in Varnish 6.1
======================

This release is a maintenance release, so while there are many actual
changes, and of course many bugfixes, they should have little to no impact on
running Varnish installations.

Nothing to see here, really
---------------------------

Since new users often forget to `vcl.discard` their old VCLs, we have added a
warning when you have more than 100 VCLs loaded. There are parameters to set
the threshold and decide what happens when it is exceeded
(ignore/warn/error).

We have made `req.http.Host` mandatory and handle requests without it on the
fast DoS avoidance path.

For all the details and new stuff, see :ref:`whatsnew_upgrading_6.1`.

*eof* varnish-7.5.0/doc/sphinx/whats-new/changes-6.2.rst

.. Copyright (c) 2019 Varnish Software AS
   SPDX-License-Identifier: BSD-2-Clause
   See LICENSE file for full text of license
.. _whatsnew_changes_2019_03:

%%%%%%%%%%%%%%%%%%%%%%
Changes in Varnish 6.2
%%%%%%%%%%%%%%%%%%%%%%

For information about updating your current Varnish deployment to the new
version, see :ref:`whatsnew_upgrading_2019_03`.

A more detailed and technical account of changes in Varnish, with links to
issues that have been fixed and pull requests that have been merged, may be
found in the `change log`_.

.. _change log: https://github.com/varnishcache/varnish-cache/blob/master/doc/changes.rst

varnishd
========

Cache lookups have undergone a number of optimizations, among them a
reduction in lock contention and a shorter, simpler critical section in the
lookup code. We expect that this will improve performance and scalability.

We have added a "watchdog" for thread pools that will panic the worker
process, causing it to restart, if scheduling tasks onto worker threads
appears to be deadlocking. The length of time until the panic is set by the
:ref:`ref_param_thread_pool_watchdog` parameter. If this happens, it probably
means that thread pools are too small, and you should consider increasing the
parameters :ref:`ref_param_thread_pool_min`, :ref:`ref_param_thread_pool_max`
and/or :ref:`ref_param_thread_pools`.

.. _whatsnew_changes_params_2019_03:

Parameters
~~~~~~~~~~

The default value of :ref:`ref_param_thread_pool_stack` has been increased
from 48k to 56k on 64-bit platforms and to 52k on 32-bit platforms. Recently
we had occasional reports of stack overflow, apparently related to changes in
external libraries that are not under control of the Varnish project (such as
glibc). This may also have been related to stack overflow issues on some
platforms when recent versions of `jemalloc`_, the recommended memory
allocator for Varnish, have been used together with `pcre`_ with JIT
compilation enabled.

Compiler hardening flags may also increase stack usage, and on some systems
such stack protector flags may be enabled by default. With the addition of
new mitigations to new compiler releases, stack consumption may also increase
on that front.

Tests have shown that Varnish runs stably with the new default stack size on
a number of platforms, under conditions that previously may have led to stack
overflow -- such as ESI includes up to the default limit of
:ref:`ref_param_max_esi_depth`, relatively deep VCL subroutine call depth,
and recent jemalloc together with pcre-jit.

Different sites have different requirements regarding the stack size. For
example, if your site uses a high depth of ESI includes, you are probably
already using an increased value of :ref:`ref_param_thread_pool_stack`. If
you don't have such requirements, and you want to reduce memory footprint,
you can consider lowering :ref:`ref_param_thread_pool_stack`, but make sure
to test the result.

.. _jemalloc: http://jemalloc.net/
.. _pcre: https://www.pcre.org/

Some parameters that have long been deprecated are now retired. See
:ref:`whatsnew_upgrading_params_2019_03` in
:ref:`whatsnew_upgrading_2019_03`.

Added :ref:`ref_param_thread_pool_watchdog`, see above.

The :ref:`ref_param_debug` parameter now has a flag ``vcl_keep``. When the
flag is turned on, C sources and shared object libraries that were generated
from VCL sources are retained in the Varnish working directory (see the notes
about ``varnishtest`` below).

For 32-bit platforms, we have increased the default
:ref:`ref_param_workspace_backend` from 16k to 20k to accommodate larger
response headers which have become common.
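As a hedged illustration of the parameters mentioned above (the values are
purely examples), both the stack size and the new ``vcl_keep`` debug flag can
be adjusted at runtime through the CLI::

    varnishadm param.set thread_pool_stack 64k
    varnishadm param.set debug +vcl_keep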
Other changes in varnishd ~~~~~~~~~~~~~~~~~~~~~~~~~ The VCL syntax version is now displayed in a panic message, as 41 for VCL 4.1 and 40 for VCL 4.0. Changes to VCL ============== VCL variables ~~~~~~~~~~~~~ Added ``req.is_hitmiss`` and ``req.is_hitpass``, see :ref:`vcl(7)`. Other changes to VCL ~~~~~~~~~~~~~~~~~~~~ Runtime restrictions concerning the accessibility of Unix domain sockets have been relaxed, see :ref:`whatsnew_upgrading_vcl_2019_03` in :ref:`whatsnew_upgrading_2019_03`. ``return(miss)`` from ``vcl_hit{}`` did never work as intended for the common case (it actually turned into a pass), so we now removed it and changed the ``builtin.vcl``. See :ref:`whatsnew_upgrading_vcl_2019_03`. VMODs ===== The type-conversion functions in :ref:`vmod_std(3)` have been reworked to make them more flexible and easier to use. The ``std.``\ *x2y* conversion functions are now deprecated. See :ref:`whatsnew_upgrading_std_conversion_2019_03`. The function :ref:`directors.lookup()` has been added to :ref:`vmod_directors(3)`, only for use in ``vcl_init`` or ``vcl_fini``. varnishlog(1), varnishncsa(1) and vsl(7) ======================================== The performance of bundled log readers, including ``varnishlog`` and ``varnishncsa`` (and any tool using the internal VUT interface for Varnish utilities) has been improved. They continue reading log contents in bulk as long as more contents are known to be available, not stopping as frequently (and unnecessarily) to check the status of the shared memory mapping. ``varnishlog`` and ``varnishncsa`` now have the ``-R`` command-line option for rate-limiting, to limit the number of log transactions read per unit time. This can make it less likely for log reads to fall behind and fail with overrun errors under heavy loads. See :ref:`varnishlog(1)` and :ref:`varnishncsa(1)` for details. Timing information is now uniformly reported in the log with microsecond precision. This affects the tags ``ExpKill`` and ``ExpRearm`` (previously with nanosecond precision). varnishadm(1) and varnish-cli(7) ================================ The output formats of the ``vcl.list`` and ``backend.list`` commands have changed, see :ref:`whatsnew_upgrading_backend_list_2019_03` and :ref:`whatsnew_upgrading_vcl_list_2019_03` in :ref:`whatsnew_upgrading_2019_03`, as well as :ref:`varnish-cli(7)` for details. .. _whatsnew_changes_cli_json: JSON output ~~~~~~~~~~~ JSON responses, requested with the ``-j`` option, are now possible for the following commands (see :ref:`varnish-cli(7)`): * ``status -j`` * ``vcl.list -j`` * ``param.show -j`` * ``ban.list -j`` * ``storage.list -j`` * ``panic.show -j`` The ``-j`` option was already available for ``backend.list``, ``ping`` and ``help`` in previous versions. For automated parsing of CLI responses (:ref:`varnishadm(1)` output), we recommend the use of JSON format. ``param.reset `` ~~~~~~~~~~~~~~~~~~~~~~~ Added the command ``param.reset`` to reset a parameter's value to its default, see :ref:`varnish-cli(7)`. Banning by expiration parameters ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Bans may now be defined with respect to ``obj.ttl``, ``obj.age``, ``obj.grace`` and ``obj.keep``, referring to the expiration and age properties of the cached object. A ban expression may also be defined with one of the comparison operators ``<``, ``<=``, ``>`` and ``>=``; these may only be used with one of the new duration variables for bans. 
Duration constants (such as ``5m`` for five minutes of ``3h`` for three hours) may be used in the ```` position against which these objects are compared in a ban expression. ``obj.ttl`` and ``obj.age`` are evaluated with respect to the time at which the ban was defined, while ``obj.grace`` and ``obj.keep`` are evaluated as the grace or keep time assigned to the object. So to issue a ban for objects whose TTL expires more than 5 hours from now and whose keep parameter is greater than 3 hours, use this expression:: obj.ttl > 5h && obj.keep > 3h See :ref:`vcl(7)` and :ref:`users-guide-purging` for details. varnishstat(1) and varnish-counters(7) ====================================== Added the ``ws_*_overflow`` and ``client_resp_500`` counters to better diagnose workspace overflow issues, see :ref:`varnish-counters(7)`. In curses mode, :ref:`varnishstat(1)` now allows use of the ``+`` and ``-`` keys to increase or decrease the refresh rate of the curses window. varnishtest =========== When :ref:`varnishtest(1)` is invoked with either of the ``-L`` or ``-l`` options to retain the temporary directory after tests, the ``vcl_keep`` flag for the :ref:`ref_param_debug` parameter is switched on (see `Parameters`_ above). This means that C sources and shared objects generated from VCL can also be inspected after a test. By default, the temporary directory is deleted after each test. Since around the time of the last release, we have begun the project `VTest`_, which is adapted from :ref:`varnishtest(1)`, but is made available as a stand-alone program useful for testing various HTTP clients, servers and proxies (not just Varnish). But for the time being, we still use :ref:`varnishtest(1)` for our own testing. .. _VTest: https://github.com/vtest/VTest Changes for developers and VMOD authors ======================================= Python tools that generate code now require Python 3. .. _whatsnew_changes_director_api_2019_03: Directors ~~~~~~~~~ The director API has been changed slightly: The most relevant design change is that the ``healthy`` callback now is the only means to determine a director's health state dynamically, the ``sick`` member of ``struct director`` has been removed. Consequently, ``VRT_SetHealth()`` has been removed and ``VRT_SetChanged()`` added to update the last health state change time. Presence of the ``healthy`` callback now also signifies if the director is considered to have a *probe* with respect to the CLI. The signature of the ``list`` callback has been changed to reflect the retirement of the undocumented ``backend.list -v`` parameter and to add a ``VRT_CTX``. *eof* varnish-7.5.0/doc/sphinx/whats-new/changes-6.3.rst000066400000000000000000000173221457605730600216600ustar00rootroot00000000000000.. Copyright (c) 2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_changes_6.3: %%%%%%%%%%%%%%%%%%%%%% Changes in Varnish 6.3 %%%%%%%%%%%%%%%%%%%%%% For information about updating your current Varnish deployment to the new version, see :ref:`whatsnew_upgrading_6.3`. A more detailed and technical account of changes in Varnish, with links to issues that have been fixed and pull requests that have been merged, may be found in the `change log`_. .. _change log: https://github.com/varnishcache/varnish-cache/blob/master/doc/changes.rst varnishd ======== Parameters ~~~~~~~~~~ A new :ref:`ref_param_pipe_sess_max` parameter allows to limit the number of concurrent pipe transactions. 
The default value is zero and means unlimited, for backwards compatibility.

Other changes in varnishd
~~~~~~~~~~~~~~~~~~~~~~~~~

The transferred bytes accounting for HTTP/2 sessions is more accurate:
``ReqAcct`` log records no longer report a full delivery if a stream did not
complete.

The meaning of VCL temperature changed for the ``auto`` state: it used to
automatically cool down a VCL transitioning from active to available, but
that VCL would remain ``cold``. It now works in both directions, and such a
cold VCL keeps the ``auto`` state and may be used or labelled immediately
without an explicit change of state to ``warm``. As a result, a VCL with the
``cold`` state will no longer warm up automatically.

The management of counters, and in particular dynamic counters (for example
appearing or disappearing when a VCL is loaded or discarded), has seen
significant performance improvements, and setups involving a large number of
backends should be more responsive.

Changes to VCL
==============

VCL variables
~~~~~~~~~~~~~

The :ref:`ref_param_timeout_idle` parameter can be overridden in VCL using
the ``sess.timeout_idle`` variable.

Other changes to VCL
~~~~~~~~~~~~~~~~~~~~

A new ``error`` transition to ``vcl_backend_error`` makes it possible to move
purposely to that subroutine. It is similar to the ``synth`` transition and
can optionally take arguments. The three following statements are
equivalent::

    return (error);
    return (error(503));
    return (error(503, "Service Unavailable"));

The ``error`` transition is available in :ref:`vcl_backend_fetch` and
:ref:`vcl_backend_response`. Using an explicit ``error`` transition in
``vcl_backend_fetch`` does not increment the ``MAIN.fetch_failed`` counter.

It is possible to import the same VMOD twice, as long as the two imports are
identical and wouldn't otherwise conflict. This allows, for example, included
VCL files to import the VMODs they need without preventing the including VCL
from also performing the same import.

Similarly, it is now possible to include a VMOD under a different name::

    import directors as dir;

    sub vcl_init {
        new rr = dir.round_robin();
    }

This can be useful for VMODs with a long name, or VMODs that could use a more
expressive name in VCL code.

The built-in VCL turns the ``Host`` header lowercase in ``vcl_recv`` to
improve its hashing later in ``vcl_hash``, since domain names are case
insensitive.

VMODs
=====

``std.ip()`` now takes an optional ``STRING`` argument to specify a port
number or service name. See: :ref:`std.ip()`

vsl-query(7)
============

The syntax for VSL queries, mainly available via the ``-q`` option with
:ref:`varnishlog(1)` and similar tools, has slightly changed. Previously an
end of line in a query would be treated as a simple token separator, so in a
script you could for example write this::

    varnishlog -q '
        tag operator operand or
        tag operator operand or
        tag operator operand
    ' -g request ...

From now on, a query ends at the end of the line, but multiple queries can be
specified, in which case it acts as if the ``or`` operator was used to join
all the queries.

With this change in the syntax, the following query::

    varnishlog -q '
        query1
        query2
    '

is equivalent to::

    varnishlog -q '(query1) or (query2)'

In other words, if you are using a Varnish utility to process transactions
for several independent reasons, you can decompose complex queries into
simpler ones by breaking them into separate lines, and for the most complex
queries possibly getting rid of parentheses you would have needed in a single
query.
If your query is complex and long, but cannot appropriately be broken down into multiple queries, you can still break it down into multiple lines by using a backslash-newline sequence:: tag operator operand and \ tag operator operand and \ tag operator operand See :ref:`vsl-query(7)` for more information about this change. With this new meaning for an end of line in a query it is now possible to add comments in a query. If you run into the situation where again you need to capture transactions for multiple reasons, you may document it directly in the query:: varnishlog -q ' # catch varnish errors *Error # catch client errors BerespStatus >= 400 and BerespStatus < 500 # catch backend errors BerespStatus >= 500 ' -g request This way when you later revisit a complex query, comments may help you maintain an understanding of each individual query. This can become even more convenient when the query is stored in a file. varnishlog(1), varnishncsa(1) and others ======================================== Our collection of log-processing tools gained the ability to specify multiple ``-q`` options. While previously only the last ``-q`` option would prevail you may now pass multiple individual queries and filtering will operate as if the ``or`` operator was used to join all the queries. A new ``-Q`` option allows you to read the query from a file instead. It can also be used multiple times and adds up to any ``-q`` option specified. Similar to ``-c`` or ``-b`` for client or backend transactions, ``varnishncsa(1)`` can take a ``-E`` option to include ESI transactions. ``BackendStart`` log records are no longer used, but newer versions of log utilities should still recognize deprecated records. It remains possible to read logs written to a file with an older version of ``varnishlog(1)``, and that backward compatibility officially goes as far as Varnish 6.0.0 even though it *may* be possible to read logs saved from older releases. ``Debug`` records are no longer logged by default and can be removed from the :ref:`ref_param_vsl_mask` parameter to appear in the logs. Since such records are not meant for production they are only automatically enabled by ``varnishtest(1)``. varnishstat =========== A new ``MAIN.n_pipe`` gauge keeps track of the number of ongoing pipe transactions. A new ``MAIN.pipe_limited`` counter keeps track of how many times a transaction failed to turn into a pipe because of the :ref:`ref_param_pipe_sess_max` parameter. varnishtest =========== A ``client`` can now use the ``-method`` action for ``txreq`` commands to specify the request method. This used to be done with ``-req`` which remains as an alias for compatibility. A ``client`` or ``server`` may use the ``-bodyfrom`` action for respectively ``txreq`` or ``txresp`` commands to send a body from a file. An HTTP/2 ``client`` or ``server`` can work with gzip content encoding and has access to ``-gzipbody`` and ``-gziplen``. Changes for developers and VMOD authors ======================================= The most notable change for VMOD developers is the deprecation of string lists in favor of strands. As usual, new functions were added to VRT, and others were changed or removed. See ``vrt.h`` for a list of changes since the 6.2.0 release. We continue to remove functions from VRT that weren't meant to be used by VMOD authors and were only part of the VMOD infrastructure code. *eof* varnish-7.5.0/doc/sphinx/whats-new/changes-6.4.rst000066400000000000000000000137251457605730600216640ustar00rootroot00000000000000.. 
Copyright (c) 2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_changes_6.4: %%%%%%%%%%%%%%%%%%%%%%%% Changes in Varnish 6.4.0 %%%%%%%%%%%%%%%%%%%%%%%% For information about updating your current Varnish deployment to the new version, see :ref:`whatsnew_upgrading_6.4`. A more detailed and technical account of changes in Varnish, with links to issues that have been fixed and pull requests that have been merged, may be found in the `change log`_. .. _change log: https://github.com/varnishcache/varnish-cache/blob/master/doc/changes.rst varnishd ======== bugs ~~~~ Numerous bugs have been fixed. Generic Parameter Handling ~~~~~~~~~~~~~~~~~~~~~~~~~~ Some parameters have dependencies and those are better documented now. For example :ref:`ref_param_thread_pool_min` can't be increased above :ref:`ref_param_thread_pool_max`, which is now indicated as such in the manual. On a running Varnish instance the ``param.show`` command will display the actual minimum or maximum, but an attempt to ``param.set`` a parameter above or below its dynamic maximum or minimum will mention the failure's cause in the error message:: varnish> param.show thread_pool_reserve 200 thread_pool_reserve Value is: 0 [threads] (default) Maximum is: 95 [...] varnish> param.show thread_pool_min 200 thread_pool_min Value is: 100 [threads] (default) Maximum is: 5000 [...] varnish> param.set thread_pool_reserve 100 106 Must be no more than 95 (95% of thread_pool_min) (attempting to set param 'thread_pool_reserve' to '100') Expect further improvements in future releases. Parameters ~~~~~~~~~~ * Raised the minimum for the :ref:`ref_param_vcl_cooldown` parameter to 1 second. Changes in behavior ~~~~~~~~~~~~~~~~~~~ * The ``if-range`` header is now handled, allowing clients to conditionally request a range based on a date or an ETag. * Output VCC warnings also for VCLs loaded via the ``varnishd -f`` option Changes to VCL ============== * New syntax for "no backend":: backend dummy none; sub vcl_recv { set req.backend_hint = dummy; } It can be used whenever a backend is needed for syntactical reasons. The ``none`` backend will fail any attempt to use it. The other purpose is to avoid the declaration of a dummy backend when one is not needed: for example an active VCL only passing requests to other VCLs with the ``return (vcl(...))`` syntax or setups relying on dynamic backends from a VMOD. * ``std.rollback(bereq)`` is now safe to use, see :ref:`vmod_std(3)` for details. * Deliberately closing backend requests through ``return(abandon)``, ``return(fail)`` or ``return(error)`` is no longer accounted as a fetch failure. * Numerical expressions can now be negative or negated as in ``set resp.http.ok = -std.integer("-200");``. * The ``+=`` operator is now available for headers and response bodies:: set resp.http.header += "string"; VCL variables ~~~~~~~~~~~~~ * Add more vcl control over timeouts with the ``sess.timeout_linger``, ``sess.send_timeout`` and ``sess.idle_send_timeout`` variables corresponding the parameters by the same names. VMODs ===== * Imported :ref:`vmod_cookie(3)` from `varnish_modules`_ The previously deprecated function ``cookie.filter_except()`` has been removed during import. It was replaced by ``cookie.keep()`` varnishlog ========== * A ``Notice`` VSL tag has been added. * Log records can safely have empty fields or fields containing blanks if they are delimited by "double quotes". This was applied to ``SessError`` and ``Backend_health``. 
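As a hedged example related to the new ``Notice`` records (the query below is
only an illustration of :ref:`vsl-query(7)` syntax), they can be singled out
with the standard log utility::

    varnishlog -g request -q 'Notice ~ "."'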
varnishadm ========== * New ``pid`` command in the Varnish CLI, to get the master and optionally cache process PIDs, for example from ``varnishadm``. varnishstat =========== * Add vi-style CTRL-f / CTRL-b for page down/up to interactive ``varnishstat``. * The ``MAIN.sess_drop`` counter is gone. * Added ``rx_close_idle`` counter for separate accounting when ``sess.timeout_idle`` / :ref:`ref_param_timeout_idle` is reached. * ``sess.send_timeout`` / :ref:`ref_param_send_timeout` being reached is no longer reported as ``MAIN.sc_rem_close``, but as ``MAIN.sc_tx_error``. Changes for developers and VMOD authors ======================================= General ~~~~~~~ * New configure switch: ``--with-unwind``. Alpine linux appears to offer a ``libexecinfo`` implementation that crashes when called by Varnish, this offers the alternative of using ``libunwind`` instead. * The option ``varnishtest -W`` is gone, the same can be achieved with ``varnishtest -p debug=+witness``. A ``witness.sh`` script is available in the source tree to generate a graphviz dot file and detect potential lock cycles from the test logs. * Introduced ``struct reqtop`` to hold information on the ESI top request and ``PRIV_TOP``. * New or improved Coccinelle semantic patches that may be useful for VMOD or utilities authors. * Added ``VSLs()`` and ``VSLbs()`` functions for logging ``STRANDS`` to VSL. * Added ``WS_VSB_new()`` / ``WS_VSB_finish()`` for VSBs on workspaces. * added ``v_dont_optimize`` attribute macro to instruct compilers (only gcc as of this release) to not optimize a function. * Added ``VSB_tofile()`` to ``libvarnishapi``. VMODs ~~~~~ * It is now possible for VMOD authors to customize the connection pooling of a dynamic backend. A hash is now computed to determine uniqueness and a backend declaration can contribute arbitrary data to influence the pool. * ``VRB_Iterate()`` signature has changed. * ``VRT_fail()`` now also works from director code. * ``body_status`` and ``req_body_status`` have been collapsed into one type. In particular, the ``REQ_BODY_*`` enums now have been replaced with ``BS_*``. * Added ``VRT_AllocStrandsWS()`` as a utility function to allocate STRANDS on a workspace. *eof* .. _varnish_modules: https://github.com/varnish/varnish-modules varnish-7.5.0/doc/sphinx/whats-new/changes-6.5.rst000066400000000000000000000206471457605730600216660ustar00rootroot00000000000000.. Copyright (c) 2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_changes_6.5: %%%%%%%%%%%%%%%%%%%%%%%% Changes in Varnish 6.5.0 %%%%%%%%%%%%%%%%%%%%%%%% For information about updating your current Varnish deployment to the new version, see :ref:`whatsnew_upgrading_6.5`. A more detailed and technical account of changes in Varnish, with links to issues that have been fixed and pull requests that have been merged, may be found in the `change log`_. .. _change log: https://github.com/varnishcache/varnish-cache/blob/master/doc/changes.rst varnishd ======== Access Control Lists (ACLs) ~~~~~~~~~~~~~~~~~~~~~~~~~~~ The VCL compiler now emits warnings if network numbers used in ACLs do not have an all-zero host part (as, for example, ``"192.168.42.42"/24``). By default, such ACL entries are fixed to all-zero and that fact logged with the ``ACL`` VSL tag. Parameters ~~~~~~~~~~ A new ``vcc_acl_pedantic`` parameter, when set to ``true``, turns the ACL warnings documented above into errors for the case where an ACL entry includes a network prefix, but host bits aren't all zeroes. 
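A hedged illustration of the ACL check described above (the addresses are
invented): with the following entry the host part is not all zeroes, so the
VCL compiler warns and uses ``"192.168.42.0"/24`` instead, or turns the
warning into an error when ``vcc_acl_pedantic`` is set to ``true``::

    acl office {
        "192.168.42.42"/24;
    }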
The ``solaris`` jail has been improved and can reduce privileges even further. There is now a new optional ``-j solaris,worker=...`` argument which allows to extend the effective privilege set of the worker (cache) process. Other changes in varnishd ~~~~~~~~~~~~~~~~~~~~~~~~~ Some error messages improved in the VCL compiler. Changes to VCL ============== VCL variables ~~~~~~~~~~~~~ A new ``obj.can_esi`` variable has been added to identify whether the response can be ESI processed. Once ``resp.filters`` is explicitly set, trying to set a ``resp.do_*`` field results in a VCL failure. The same rule applies to ``beresp.filters`` and ``beresp.do_*`` fields. The ``BACKEND`` VCL type now has a ``.resolve()`` method to find the effective backend directly from VCL. When a director is selected, the resolution would otherwise be delayed until after returning from ``vcl_backend_fetch`` or ``vcl_pipe``:: # eager backend selection set bereq.backend = bereq.backend.resolve(); It is now possible to manually set a ``Connection: close`` header in ``beresp`` to signal that the backend connection shouldn't be recycled. This might help dealing with backends that would under certain circumstances have trouble managing their end of the connection, for example for certain kinds of resources. Care should be taken to preserve other headers listed in the connection header:: sub vcl_backend_response { if (beresp.backend == faulty_backend) { if (beresp.http.Connection) { set beresp.http.Connection += ", close"; } else { set beresp.http.Connection = "close"; } } } Other changes to VCL ~~~~~~~~~~~~~~~~~~~~ A failure in ``vcl_recv`` could resume execution in ``vcl_hash`` before effectively ending the transaction, this has been corrected. A failure in ``vcl_recv`` is now definitive. There is a new syntax for ``BLOB`` literals: ``::``. This syntax is also used to automatically cast a blob into a string. Behavior for 304 responses was changed not to update the ``Content-Encoding`` response header of the stored object. VMODs ===== A new ``std.blobread()`` function similar to ``std.fileread()`` was added to work with binary files. The shard director's ``.add_backend()`` method has a new optional ``weight`` parameter. An error when a backend is added or removed now fails the transaction (or the ``vcl.load`` command in ``vcl_init``), but an invalid weight does not result in a hard failure. The shard director no longer outputs the (unused) ``canon_point`` property in ``backend.list`` commands. varnishlog ========== The ``BackendReuse`` log record has been retired. It was named inconsistently with other places like stat counters where we use the words reuse and recycle (it should have been named ``BackendRecycle`` if anything). The ``BackendOpen`` record can now tell whether the connection to the backend was opened or reused from the pool, and the ``BackendClose`` record will tell whether the connection was effectively closed or recycled into the pool. varnishadm ========== The ``backend.set_health`` command can be used to force a specific state between sick and healthy or restore the automatic behavior, which depends on the presence of a probe. While forcing a backend to be sick would prevent it from being selected by a director, a straight selection of the backend from VCL would still attempt a connection. This has been fixed, and the command's documentation was clarified. varnishstat =========== A help screen is now available in interactive mode via the ``h`` key. 
Again in interactive mode, the initial verbosity is now chosen such that fields selected via the ``-f`` or ``-I`` options are actually displayed without manually increasing the verbosity level. Filtering using the ``-f`` option is now deprecated in favor of ``-I`` and ``-X`` options that are treated in order. While still present, the ``-f`` option now also works in order instead of exclusive filters first and then inclusive filters. It was also wrongly documented as inclusive first. The JSON output slightly changed to more easily be consumed with programming languages that may map JSON objects to types. See upgrade notes for more details. There are two new ``MAIN.beresp_uncacheable`` and ``MAIN.beresp_shortlived`` counters. varnishtest =========== The ``process -expect-text`` command will wait an order of magnitude longer for the text to appear. It used to be too sensitive to any kind of timing disruption. Changes for developers and VMOD authors ======================================= Build System ~~~~~~~~~~~~ VMOD authors who would like to generate VCC files can now use the ``VARNISH_VMODS_GENERATED()`` macro from ``varnish.m4`` for autotools builds. .. _whatsnew_changes_6.5_workspace: Workspace API ~~~~~~~~~~~~~ The workspace API saw a number of changes in anticipation of a future inclusion in VRT. The deprecated ``WS_Reserve()`` function was finally removed, after the functions ``WS_ReserveSize()`` and ``WS_ReserveAll()`` were introduced in Varnish Cache 6.3.0. On the topic of workspace reservation, the ``WS_Front()`` function is now deprecated in favor of ``WS_Reservation()``. The two functions behave similarly, but the latter ensures that it is only ever called during a reservation. There was no legitimate reason to access the workspace's front outside of a reservation. In a scenario where a reservation is made in a part of the code, but used somewhere else, it is possible to later query the size with the new ``WS_ReservationSize()`` function. The return value for ``WS_Printf()`` is now a constant string. Other VRT / cache.h changes ~~~~~~~~~~~~~~~~~~~~~~~~~~~ * Added ``VRT_DirectorResolve()`` to resolve a director * Added ``VRT_BLOB_string()`` for the default BLOB folding documented above .. _whatsnew_changes_6.5_vsc: libvarnishapi ~~~~~~~~~~~~~ There are three new VSC arguments that can be set with the ``VSC_Arg()`` function: - ``'I'`` to include counters matching a glob pattern - ``'X'`` to exclude counters matching a glob pattern - ``'R'`` to include required counters regardless of ``'I'`` and ``'X'`` The ``'f'`` argument is now deprecated and emulated with ``'I'`` and ``'X'``. Filtering with ``'f'`` used to check exclusions first and then inclusions, they are all tested in order and the first to match determines the outcome. The ``'R'`` argument takes precedence over regular filtering and can be used to ensure that some counters are present regardless of user configuration. .. _whatsnew_changes_6.5_libvarnish: libvarnish ~~~~~~~~~~ A ``VSA_BuildFAP()`` function has been added as a convenience to build a ``struct suckaddr`` (aka ``VCL_IP``) from a Family, Address and Protocol components. We added ``VRE_quote()`` to facilitate building literal string matches with regular expressions. It can be used to ensure that a user-defined string literal put inside a regular expression may not accidentally change the behavior of the overall expression. The varnish binary heap implementation has been added with the ``VBH_`` prefix for use with VMODs (via include of ``vbh.h``). 
VSB support for dynamic vs. static allocations has been changed: For dynamic allocations use:: VSB_new_auto() + VSB_destroy() For preexisting buffers use:: VSB_init() + VSB_fini() ``VSB_new()`` + ``VSB_delete()`` are now deprecated. *eof* varnish-7.5.0/doc/sphinx/whats-new/changes-6.6.rst000066400000000000000000000357071457605730600216720ustar00rootroot00000000000000.. Copyright 2021 UPLEX Nils Goroll Systemoptimierung SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_changes_6.6: %%%%%%%%%%%%%%%%%%%%%% Changes in Varnish 6.6 %%%%%%%%%%%%%%%%%%%%%% For information about updating your current Varnish deployment to the new version, see :ref:`whatsnew_upgrading_6.6`. A more detailed and technical account of changes in Varnish, with links to issues that have been fixed and pull requests that have been merged, may be found in the `change log`_. .. _change log: https://github.com/varnishcache/varnish-cache/blob/master/doc/changes.rst varnishd ======== Arguments ~~~~~~~~~ * ``varnishd`` now supports the ``-b none`` argument to start with only the builtin VCL and no backend at all. Parameters ~~~~~~~~~~ * The ``validate_headers`` parameter has been added to control `header validation `_. * The ``ban_cutoff`` parameter now refers to the overall length of the ban list, including completed bans, where before only non-completed ("active") bans were counted towards ``ban_cutoff``. * The ``vary_notice`` parameter has been added to control the threshold for the new `Vary Notice `_. ``feature`` Flags ~~~~~~~~~~~~~~~~~ * The ``busy_stats_rate`` feature flag has been added to ensure statistics updates (as configured using the ``thread_stats_rate`` parameter) even in scenarios where worker threads never run out of tasks and may remain forever busy. .. _whatsnew_changes_6.6_accounting: Accounting ~~~~~~~~~~ Body bytes accounting has been fixed to always represent the number of body bytes moved on the wire, exclusive of protocol-specific overhead like HTTP/1 chunked encoding or HTTP/2 framing. This change affects counters like - ``MAIN.s_req_bodybytes``, - ``MAIN.s_resp_bodybytes``, - ``VBE.*.*.bereq_bodybytes`` and - ``VBE.*.*.beresp_bodybytes`` as well as the VSL records - ``ReqAcct``, - ``PipeAcct`` and - ``BereqAcct``. .. _whatsnew_changes_6.6_sc_close: Session Close Reasons ~~~~~~~~~~~~~~~~~~~~~ The connection close reason has been fixed to properly report ``SC_RESP_CLOSE`` / ``resp_close`` where previously only ``SC_REQ_CLOSE`` / ``req_close`` was reported. For failing PROXY connections, ``SessClose`` now provides more detailed information on the cause of the failure. The session close reason logging/statistics for HTTP/2 connections have been improved. .. _whatsnew_changes_6.6_vary_notice: Vary Notice ~~~~~~~~~~~ A log (VSL) ``Notice`` record is now emitted whenever more than ``vary_notice`` variants are encountered in the cache for a specific hash. The new ``vary_notice`` parameter defaults to 10. Changes to VCL ============== .. _whatsnew_changes_6.6_header_validation: Header Validation ~~~~~~~~~~~~~~~~~ Unless the new ``validate_headers`` feature is disabled, all newly set headers are now validated to contain only characters allowed by RFC7230. A (runtime) VCL failure is triggered if not. VCL variables ~~~~~~~~~~~~~ * The ``client.identity`` variable is now accessible on the backend side. * The variables ``bereq.is_hitpass`` and ``bereq.is_hitmiss`` have been added to the backend side matching ``req.is_hitpass`` and ``req.is_hitmiss`` on the client side. 
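As a hedged illustration of two of the changes above (the listen address is
invented for the example), ``varnishd`` can now start without any backend,
and the new feature flag can be raised at runtime::

    varnishd -a :6081 -b none
    varnishadm param.set feature +busy_stats_rate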
* The ``bereq.xid`` variable is now also available in ``vcl_pipe {}`` * The ``resp.proto`` variable is now read-only as it should have been for long, like the other ``*.proto`` variables. Other changes to VCL ~~~~~~~~~~~~~~~~~~~~ * Long strings in VCL can now also be denoted using ``""" ... """`` in addition to the existing ``{" ... "}``. * The ``ban()`` builtin is now deprecated and should be replaced with `std.ban() `_. * Trying to use ``std.rollback()`` from ``vcl_pipe`` now results in VCL failure. * The modulus operator ``%`` has been added to VCL. * ``return(retry)`` from ``vcl_backend_error {}`` now correctly resets ``beresp.status`` and ``beresp.reason``. * The builtin VCL has been reworked: VCL code has been split into small subroutines, which custom VCL can prepend custom code to. This allows for better integration of custom VCL and the built-in VCL and better reuse. VMODs ===== ``directors.shard()`` ~~~~~~~~~~~~~~~~~~~~~ * The shard director now supports reconfiguration (adding/removing backends) of several instances without any special ordering requirement. * Calling the shard director ``.reconfigure()`` method is now optional. If not called explicitly, any shard director backend changes are applied at the end of the current task. * Shard director ``Error`` log messages with ``(notice)`` have been turned into ``Notice`` log messages. * All shard ``Error`` and ``Notice`` messages now use the unified prefix ``vmod_directors: shard %s``. ``std.set_ip_tos()`` ~~~~~~~~~~~~~~~~~~~~ The ``set_ip_tos()`` function from the bundled ``std`` vmod now sets the IPv6 Taffic Class (TCLASS) when used on an IPv6 connection. .. _whatsnew_changes_6.6_ban: ``std.ban()`` and ``std.ban_error()`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The ``std.ban()`` and ``std.ban_error()`` functions have been added to the ``std`` vmod, allowing VCL to check for ban errors. A typical usage pattern with the new interface is:: if (std.ban(...)) { return(synth(200, "Ban added")); } else { return(synth(400, std.ban_error())); } .. _whatsnew_changes_6.6_cookie: ``cookie`` functions ~~~~~~~~~~~~~~~~~~~~ The ``filter_re``, ``keep_re`` and ``get_re`` functions from the bundled ``cookie`` vmod have been changed to take the ``VCL_REGEX`` type. This implies that their regular expression arguments now need to be literal, whereas before they could be taken from some other variable or function returning ``VCL_STRING``. Note that these functions never actually handled *dynamic* regexen, the string passed with the first call was compiled to a regex, which was then used for the lifetime of the respective VCL. varnishlog ========== * See `Accounting `_ for changes to accounting-related VSL records. * See `Session Close Reasons `_ for a change affecting ``SessClose``. * Three new ``Timestamp`` VSL records have been added to backend request processing: - The ``Process`` timestamp after ``return(deliver)`` or ``return(pass(x))`` from ``vcl_backend_response``, - the ``Fetch`` timestamp before a backend connection is requested and - the ``Connected`` timestamp when a connection to a regular backend (VBE) is established, or when a recycled connection was selected for reuse. * The ``FetchError`` log message ``Timed out reusing backend connection`` has been renamed to ``first byte timeout (reused connection)`` to clarify that it is emit for effectively the same reason as ``first byte timeout``. * ``ExpKill`` log (VSL) records are now masked by default. See the ``vsl_mask`` parameter. 
* Comparisons of numbers in VSL queries have been improved to match better the behavior which is likely expected by users who have not read the documentation in all detail. * See `Vary Notice `_ for information on a newly added ``Notice`` log (VSL) record. varnishncsa =========== * The ``%{X}T`` format has been added to ``varnishncsa``, which generalizes ``%D`` and ``%T``, but also support milliseconds (``ms``) output. * The ``varnishncsa`` ``-E`` argument to show ESI requests has been changed to imply ``-c`` (client mode). This behavior is now shared by all log utilities, and ``-c`` no longer includes ESI requests. varnishadm ========== * The ``vcl.discard`` CLI command can now be used to discard more than one VCL with a single command, which succeeds only if all given VCLs could be discarded (atomic behavior). * The ``vcl.discard`` CLI command now supports glob patterns for vcl names. * The ``vcl.deps`` CLI command has been added to output dependencies between VCLs (because of labels and ``return(vcl)`` statements). * ``varnishadm`` now has the ``-p`` option to disable readline support for use in scripts and as a generic CLI connector. varnishstat =========== * See `Accounting `_ for changes to accounting-related counters. * See `Session Close Reasons `_ for a change affecting ``MAIN.sc_*`` counters. * The ``MAIN.esi_req`` counter has been added as a statistic of the number of ESI sub requests created. * The ``MAIN.s_bgfetch`` counter has been added as a statistic on the number of background fetches issued. .. _whatsnew_changes_6.6_varnishstat_raw: * ``varnishstat`` now avoids display errors of gauges which previously could underflow to negative values, being displayed as extremely high positive values. The ``-r`` option and the ``r`` key binding have been added to return to the previous behavior. When raw mode is active in ``varnishstat`` interactive (curses) mode, the word ``RAW`` is displayed at the right hand side in the lower status line. varnishtest =========== Various improvements have been made to the ``varnishtest`` facility: - the ``loop`` keyword now works everywhere - HTTP/2 logging has been improved - Default HTTP/2 parameters have been tweaked - Varnish listen address information is now available by default in the macros ``${vNAME_addr}``, ``${vNAME_port}`` and ``${vNAME_sock}``. Macros by the names ``${vNAME_SOCKET_*}`` contain the address information for each listen socket as created with the ``-a`` argument to ``varnishd``. - Synchronization points for counters (VSCs) have been added as ``varnish vNAME -expect PATTERN OP PATTERN`` - varnishtest now also works with IPv6 setups - ``feature ipv4`` and ``feature ipv6`` can be used to control execution of test cases which require one or the other protocol. - haproxy arguments can now be externally provided through the ``HAPROXY_ARGS`` variable. - logexpect now supports alternatives with the ``expect ? ...`` syntax and negative matches with the ``fail add ...`` and ``fail clear`` syntax. - The overall logexpect match expectation can now be inverted using the ``-err`` argument. - Numeric comparisons for HTTP headers have been added: ``-lt``, ``-le``, ``-eq``, ``-ne``, ``-ge``, ``-gt`` - ``rxdata -some`` has been fixed. Other Changes to Varnish Utilities ================================== All varnish tools using the VUT library utilities for argument processing now support the ``--optstring`` argument to return a string suitable for use with ``getopts`` from shell scripts. .. 
_whatsnew_changes_6.6_vmod: Developer: Changes for VMOD authors =================================== VMOD/VCL interface ~~~~~~~~~~~~~~~~~~ * The ``VCL_REGEX`` data type is now supported for VMODs, allowing them to use regular expression literals checked and compiled by the VCL compiler infrastructure. Consequently, the ``VRT_re_init()`` and ``VRT_re_fini()`` functions have been removed, because they are not required and their use was probably wrong anyway. * The ``VCL_SUB`` data type is now supported for VMODs to save references to subroutines to be called later using ``VRT_call()``. Calls from a wrong context (e.g. calling a subroutine accessing ``req`` from the backend side) and recursive calls fail the VCL. See `VMOD - Varnish Modules`_ in the Reference Manual. .. _VMOD - Varnish Modules: https://varnish-cache.org/docs/trunk/reference/vmod.html VMOD functions can also return the ``VCL_SUB`` data type for calls from VCL as in ``call vmod.returning_sub();``. * ``VRT_check_call()`` can be used to check if a ``VRT_call()`` would succeed in order to avoid the potential VCL failure in case it would not. It returns ``NULL`` if ``VRT_call()`` would make the call or an error string why not. * ``VRT_handled()`` has been added, which is now to be used instead of access to the ``handling`` member of ``VRT_CTX``. * ``vmodtool.py`` has been improved to simplify Makefiles when many VMODs are built in a single directory. General API ~~~~~~~~~~~ * ``VRT_ValidHdr()`` has been added for VMODs to conduct the same check as the `whatsnew_changes_6.6_header_validation`_ feature, for example when headers are set by VMODs using the ``cache_http.c`` Functions like ``http_ForceHeader()`` from untrusted input. * Client and backend finite state machine internals (``enum req_step`` and ``enum fetch_step``) have been removed from ``cache.h``. * The ``verrno.h`` header file has been removed and merged into ``vas.h`` * The ``pdiff()`` function declaration has been moved from ``cache.h`` to ``vas.h``. VSA ~~~ * The ``VSA_getsockname()`` and ``VSA_getpeername()`` functions have been added to get address information of file descriptors. Private Pointers ~~~~~~~~~~~~~~~~ * The interface for private pointers in VMODs has been changed: - The ``free`` pointer in ``struct vmod_priv`` has been replaced with a pointer to ``struct vmod_priv_methods``, to where the pointer to the former free callback has been moved as the ``fini`` member. - The former free callback type has been renamed from ``vmod_priv_free_f`` to ``vmod_priv_fini_f`` and as gained a ``VRT_CTX`` argument * The ``VRT_priv_task_get()`` and ``VRT_priv_top_get()`` functions have been added to VRT to allow vmods to retrieve existing ``PRIV_TASK`` / ``PRIV_TOP`` private pointers without creating any. Backends ~~~~~~~~ * The VRT backend interface has been changed: - ``struct vrt_endpoint`` has been added describing a UDS or TCP endpoint for a backend to connect to. Endpoints also support a preamble to be sent with every new connection. - This structure needs to be passed via the ``endpoint`` member of ``struct vrt_backend`` when creating backends with ``VRT_new_backend()`` or ``VRT_new_backend_clustered()``. * ``VRT_Endpoint_Clone()`` has been added to facilitate working with endpoints. 
Filters (VDP/VFP) ~~~~~~~~~~~~~~~~~ * Many filter (VDP/VFP) related signatures have been changed: - ``vdp_init_f`` - ``vdp_fini_f`` - ``vdp_bytes_f`` - ``VDP_bytes()`` as well as ``struct vdp_entry`` and ``struct vdp_ctx`` ``VFP_Push()`` and ``VDP_Push()`` are no longer intended for VMOD use and have been removed from the API. * The VDP code is now more strict about ``VDP_END``, which must be sent down the filter chain at most once. Care should be taken to send ``VDP_END`` together with the last payload bytes whenever possible. Stevedore API ~~~~~~~~~~~~~ * The stevedore API has been changed: - ``OBJ_ITER_FINAL`` has been renamed to ``OBJ_ITER_END`` - ``ObjExtend()`` signature has been changed to also cover the ``ObjTrimStore()`` use case and - ``ObjTrimStore()`` has been removed. Developer: Changes for Authors of Varnish Utilities =================================================== libvarnishapi ~~~~~~~~~~~~~ * The ``VSC_IsRaw()`` function has been added to ``libvarnishapi`` to query if a gauge is being returned raw or adjusted (see `varnishstat -r option `_). *eof* varnish-7.5.0/doc/sphinx/whats-new/changes-7.0.rst000066400000000000000000000224461457605730600216610ustar00rootroot00000000000000.. Copyright 2021 Varnish Software SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_changes_7.0: %%%%%%%%%%%%%%%%%%%%%% Changes in Varnish 7.0 %%%%%%%%%%%%%%%%%%%%%% For information about updating your current Varnish deployment to the new version, see :ref:`whatsnew_upgrading_7.0`. A more detailed and technical account of changes in Varnish, with links to issues that have been fixed and pull requests that have been merged, may be found in the `change log`_. .. _change log: https://github.com/varnishcache/varnish-cache/blob/master/doc/changes.rst PCRE2 ===== One major change for this release is the migration of the regular expression engine from PCRE to PCRE2. This change should be mostly transparent anywhere regular expressions are used, like VCL, ban expressions, VSL queries etc. There were some parameters changes, see the upgrade notes for more details. RFC8941 - Structured Fields =========================== It will come as no surprise to VCL writers that HTTP headers use what can charitably be described as "heterogenous syntax". In 2016, on the train back from the HTTP Workshop in Stockholm, and in response to proposals to allow JSON in HTTP headers, I started an effort which culminated with the publication of `RFC 8941 Structured Fields`_. The syntax in Structured Fields is distilled from current standardized headers, which means that the vast majority of existing HTTP headers are already covered. There are unfortunate exceptions, most notably the Cookie headers. Starting with this release, we are gently migrating VCL towards the Structured Field semantics. The first change is that it is now possible to include BLOBs in VCL, by using the RFC 8941 syntax of:: ':' + + ':' For example, the VCL string ``"helloworld"`` can be represented as the BLOB literal ``:aGVsbG93b3JsZA==:`` without involving a VMOD. See :ref:`vmod_blob(3)`. The second and likely more significant change is numbers in VCL now conform to RFC8941 as well: Up to 15 digits and at most 3 decimal places, and "scientific notation" is no longer allowed. (These restrictions were chosen after much careful deliberation, to ensure that no overflows would occur, even when HTTP headers are processed in languages where numbers are represented by IEEE-754 64 binary floating point variables.) .. 
_RFC 8941 Structured Fields: https://www.rfc-editor.org/rfc/rfc8941.html varnishd ======== Parameters ~~~~~~~~~~ There were changes to the parameters: - new ``pcre2_jit_compilation`` boolean defaulting to on - the default value increased to 16kB for ``vsl_buffer`` - the default value increased to 96kB for ``workspace_client`` - the default value increased to 96kB for ``workspace_backend`` - the minimum value increased to 384B for ``workspace_session`` - the minimum value increased to 65535B for ``h2_initial_window_size`` - the default value increased to 80kB for ``thread_pool_stack`` - the default value increased to 64kB for ``thread_pool_stack`` on 32bit systems - ``vcc_acl_pedantic`` was removed, see upgrade notes for more details. - ``pcre_match_limit`` was renamed to ``pcre2_match_limit`` - ``pcre_match_limit_recursion`` was renamed to ``pcre2_depth_limit`` - new ``h2_rxbuf_storage`` defaulting to ``Transient``, see upgrade notes for more details. Other changes in varnishd ~~~~~~~~~~~~~~~~~~~~~~~~~ For pass transactions, ``varnishd`` no longer strips ``Range`` headers from client requests or ``Accept-Range`` and ``Content-Range`` headers from backend responses to allow partial delivery directly from the backend. When ``http_range_support`` is on (the default) a consistency check is performed on the backend response and malformed or inconsistent responses are treated as fetch errors. There is a new buffer for HTTP/2 request bodies allocated from storage. See upgrade notes for more details on both topics. Changes to VCL ============== VCL variables ~~~~~~~~~~~~~ A new ``req.hash_ignore_vary`` flag allows to skip vary header checks during a lookup. This can be useful when only the freshness of a resource is relevant, and not a slight difference in representation. For interoperability purposes, it is now possible to quote header names that aren't valid VCL symbols, but valid HTTP header names, for example:: req.http."dotted.name" This is rarely observed and should only be needed to better integrate with the specific needs of certain clients or servers. Some global VCL symbols can be referenced before their declaration, this was extended to all global VCL symbols for the following keywords: - ``acl`` - ``backend`` - ``probe`` - ``sub`` Consider the following example:: sub vcl_recv { set req.backend_hint = b; } backend b { .host = "example.org"; } It used to fail the VCL compilation with "Symbol not found: 'b'" in ``vcl_recv``, and is now supported. Bit flags ~~~~~~~~~ There is a new bit flag syntax for certain VCL keywords:: keyword +flag -other ... Similarly to bit flag ``varnishd`` parameters ``debug``, ``feature`` and ``vsl_mask``, a ``+`` prefix means that a flag is raised and a ``-`` prefix that a flag is cleared. The ``acl`` keyword supports the following flags: - ``log`` - ``pedantic`` (enabled by default) - ``table`` For example:: acl +log -pedantic { ... } See :ref:`vcl-acl`. The ``include`` keyword supports a ``glob`` flag. For example:: include +glob "example.org/*.vcl"; See :ref:`vcl-include`. See upgrade notes for more details. VMODs ===== New ``BASE64CF`` encoding scheme in ``vmod_blob``. It is similar to ``BASE64URL``, with the following changes to ``BASE64``: - ``+`` replaced with ``-`` - ``/`` replaced with ``~`` - ``_`` as the padding character It is used by a certain CDN provider who also inspired the name. See the ``vmod_blob`` manual (:ref:`vmod_blob-base64`). 
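To illustrate the new ``req.hash_ignore_vary`` flag mentioned above, a
minimal, hedged sketch (when to ignore the ``Vary`` header is entirely site
specific; the User-Agent check is invented for the example)::

    sub vcl_recv {
        if (req.http.User-Agent ~ "HealthCheck") {
            # any cached variant is fresh enough for a health probe
            set req.hash_ignore_vary = true;
        }
    }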
varnishlog ========== If a cache hit occurs on a streaming object, an object that is still being fetched, ``Hit`` records contain progress of the fetch task. This should help troubleshooting when cache hits appear to be slow, whether or not the backend is still serving the response. See :ref:`vcl(7)`. By default ``VCL_acl`` records are no longer emitted. They can be brought back by adding a ``+log`` flag to the ACL declaration. See :ref:`vcl-acl`. varnishncsa =========== New ``%{...}t`` time formats: - ``sec`` - ``msec`` - ``usec`` - ``msec_frac`` - ``usec_frac`` See the ``varnishncsa`` manual (:ref:`ncsa-format`) for more information. varnishadm ========== The ``-t`` option sets up the timeout for both attaching to a running ``varnishd`` instance and individual commands sent to that instance. Command completion should be more accurate in interactive mode. varnishtest =========== Test cases should be generally more reactive, whether it is detecting a ``varnishd`` startup failure, waiting for ``varnishd`` to stop, or when tests fail and there are barriers waiting for a synchronization. Clients and servers can have up to 64 headers in requests and responses. The ``feature`` command allows to skip gracefully test cases that are missing specific requirements. It is now possible to skip a test based on the presence of a feature. For example, for test cases targeting 32bit environment with a working DNS setup:: feature dns !64bit There are new feature checks: - ``coverage`` - ``asan`` - ``msan`` - ``tsan`` - ``ubsan`` - ``sanitizer`` - ``workspace_emulator`` The undocumented ``pcre_jit`` feature check is gone. See the VTC manual (:ref:`vtc-feature`) for more details. There is a new ``tunnel`` command that acts as a proxy between two peers. A tunnel can pause and control how much data goes in each direction, and can be used to trigger socket timeouts, possibly in the middle of protocol frames, without having to change how the peers are implemented. See the VTC manual (:ref:`vtc-tunnel`) for more details. There is a new dynamic macro ``${string,repeat,,}`` to avoid very long lines or potential mistakes when maintained by hand. For example, the two following strings are equivalent:: "AAA" "${string,repeat,3,A}" See the VTC manual (:ref:`vtc-macros`) for more details. There were also various improvements to HTTP/2 testing, and more should be expected. Changes for developers and VMOD authors ======================================= Varnish now comes with a second workspace implementation called the workspace emulator. It needs to be enabled during the build with the configure flag ``--enable-workspace-emulator``. The workspace emulator performs sparse allocations and can help VMOD authors interested in fuzzing, especially when the Address Sanitizer is enabled as well. In order to make the emulator possible, some adjustments were needed for the workspace API. Deprecated functions ``WS_Front()`` and ``WS_Inside()`` were removed independently of the emulator. The ``STRING_LIST`` type is gone in favor of ``STRANDS``. All the VRT symbols related to ``STRING_LIST`` are either gone or changed. Convenience constants ``vrt_null_strands`` and ``vrt_null_blob`` were added. The migration to PCRE2 also brought many changes to the VRE API. The VRT functions handling ``REGEX`` arguments didn't change. The VNUM API also changed substantially for structured field number parsing. The deprecated functions ``VSB_new()`` and ``VSB_delete()`` were removed. See upgrade notes for more information. 
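As a rough sketch of enabling the workspace emulator mentioned at the top of
this section for a fuzzing-oriented build, assuming a source checkout; the
compiler and sanitizer flags are examples, only ``--enable-workspace-emulator``
comes from this release::

    ./autogen.sh
    ./configure --enable-workspace-emulator \
        CC=clang CFLAGS="-fsanitize=address -g -O1"
    make -j && make check
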
*eof* varnish-7.5.0/doc/sphinx/whats-new/changes-7.1.rst000066400000000000000000000176161457605730600216650ustar00rootroot00000000000000.. _whatsnew_changes_7.1: %%%%%%%%%%%%%%%%%%%%%% Changes in Varnish 7.1 %%%%%%%%%%%%%%%%%%%%%% For information about updating your current Varnish deployment to the new version, see :ref:`whatsnew_upgrading_7.1`. A more detailed and technical account of changes in Varnish, with links to issues that have been fixed and pull requests that have been merged, may be found in the `change log`_. .. _change log: https://github.com/varnishcache/varnish-cache/blob/master/doc/changes.rst varnishd ======== Parameters ~~~~~~~~~~ A new kind of parameters exists: deprecated aliases. Their documentation is minimal, mainly referring to the actual symbols they alias. They are not listed in the CLI, unless referred to explicitly. There is no deprecated alias yet, but some are already planned for future releases. Alias parameters have next to no overhead when used directly. The deprecated ``vsm_space`` parameter was removed. A new ``cc_warnings`` parameter contains a subset of the compiler flags extracted from ``cc_command``, which in turn grew new expansions: - ``%d``: the raw default ``cc_command`` - ``%D``: the expanded default ``cc_command`` - ``%w``: the ``cc_warnings`` parameter - ``%n``: the working directory (``-n`` option) This should facilitate the creation of wrapper scripts around VCL compilation. There is a new ``experimental`` parameter that is identical to the ``feature`` parameter, except that it guards features that may not be considered complete or stable. An experimental feature may be promoted to a regular feature or dropped without being considered a breaking change. Command line options ~~~~~~~~~~~~~~~~~~~~ The deprecated sub-argument of the ``-l`` option was removed, it is now a shorthand for the ``vsl_space`` parameter only. The ``-T``, ``-M`` and ``-P`` command line options can be used multiple times, instead of retaining only the last occurrence. When there is no active VCL, the first loaded VCL was always implicitly used too. This is now only true for VCLs loaded with either the ``-f`` or ``-b`` options, since they imply a ``vcl.use``. VCL loaded through the Varnish CLI (``vcl.load`` or ``vcl.inline``) via a CLI script loaded through the ``-I`` command line option require an explicit ``vcl.use``. Other changes in varnishd ~~~~~~~~~~~~~~~~~~~~~~~~~ ESI includes now support the ``onerror="continue"`` attribute. However, in order to take effect a new ``+esi_include_onerror`` feature flag needs to be raised. Changes to VCL ============== It is now possible to assign a ``BLOB`` value to a ``BODY`` variable, in addition to ``STRING`` as before. VCL variables ~~~~~~~~~~~~~ New VCL timestamp variables have been added to track the point in time when HTTP messages were created: - ``req.time`` - ``req_top.time`` - ``resp.time`` - ``bereq.time`` - ``beresp.time`` - ``obj.time`` The new ``req.transport`` variable returns "HTTP/1" or "HTTP/2" as appropriate. Other changes to VCL ~~~~~~~~~~~~~~~~~~~~ Where a regular expression literal is expected, it is now possible to have a concatenation of constant strings. It can be useful when part of the expression comes from an environment-specific include, or to break a long expression into multiple lines. (introduced with 7.0.1) Similarly to ``varnishd`` parameters, it is now possible to have deprecated aliases of VCL variables. 
Although there are none so far, aliases will allow some symbols to be renamed without immediately breaking existing VCL code. Deprecated VCL aliases have no runtime overhead, they are reified at VCL compile time. VMODs ===== New :ref:`std.strftime()` function for UTC formatting. It is now possible to declare deprecated aliases of VMOD functions and object methods, just like VCL aliases. The ``cookie.format_rfc1123()`` function was renamed to :ref:`cookie.format_date()`, and the former was retained as a deprecated alias of the latter for compatibility. Deprecated VMOD aliases have no runtime overhead, they are reified at VCL compile time. varnishlog ========== It is now possible to write to the standard output with ``-w -``, to be on par with the ability to read from the standard input with ``-r -``. This is not possible in daemon mode. In a pipe scenario, the backend transaction emits a Start timestamp and both client and backend transactions emit the Process timestamp. varnishncsa =========== It is now possible to write to the standard output with ``-w -``, to be on par with the ability to read from the standard input with ``-r -``. This is not possible in daemon mode. varnishadm ========== When ``vcl.show`` is invoked without a parameter, it defaults to the active VCL. The ``param.set`` command accepts a ``-j`` option. In this case the JSON output is the same as ``param.show -j`` of the updated parameter. A new ``debug.shutdown.delay`` command is available in the Varnish CLI for testing purposes. It can be useful for testing purposes to see how its environment (service manager, container orchestrator, etc) reacts to a ``varnishd``'s child process taking significant time to ``stop``. varnishtest =========== The ``SO_RCVTIMEO_WORKS`` feature check is gone. (introduced with 7.0.1) The reporting of ``logexpect`` events was rearranged for readability. The ``abort`` command in the ``logexpect`` facility of ``varnishtest`` can now be used to trigger an ``abort()`` to help debugging the vsl client library code. The ``vtc.barrier_sync()`` VMOD function can be used in ``vcl_init`` from now on. Changes for developers and VMOD authors ======================================= The ``SO_RCVTIMEO`` and ``SO_SNDTIMEO`` socket options are now required at build time since their absence would otherwise prevent some timeouts to take effect. We no longer check whether they effectively work, hence the removal of the ``SO_RCVTIMEO_WORKS`` feature check in ``varnishtest``. (introduced with 7.0.1) Varnish will use libunwind by default when available at configure time, the ``--without-unwind`` configure flag can prevent this and fall back to libexecinfo to generate backtraces. There is a new debug storage backend for testing purposes. So far, it can only be used to ensure that allocation attempts return less space than requested. There are new C macros for ``VCL_STRANDS`` creation: ``TOSTRAND()`` and ``TOSTRANDS()`` are available in ``vrt.h``. New utility macros ``vmin[_t]``, ``vmax[_t]`` and ``vlimit[_t]`` available in ``vdef.h``. The fetch and delivery filters should now be registered and unregistered with ``VRT_AddFilter()`` and ``VRT_RemoveFilter()``. Dynamic backends are now reference-counted, and VMOD authors must explicitly track assignments with ``VRT_Assign_Backend()``. The ``vtc.workspace_reserve()`` VMOD function will zero memory from now on. When the ``+workspace`` debug flag is raised, workspace logs are no longer emitted as raw logs disconnected from the task. 
Having workspace logs grouped with the rest of the task should help workspace footprint analysis. It is now possible to generate arbitrary log lines with ``vtc.vsl()`` and ``vtc.vsl_replay()``, which can help testing log processing utilities. It is also possible to tweak the VXID cache chunk size per thread pool with the ``debug.xid`` command for the Varnish CLI, which can also help testing log processing utilities. ``http_IsHdr()`` is now exposed as part of the strict ABI for VMODs. Platform Support ================ CentOS ~~~~~~ With the End of Life of CentOS 8, we will build el8 packages on almalinux from now on. This means that we will always target the oldest el8 branch. For example a package built for el8.5 is not guaranteed to work on el8.1 even though the latter may still be supported by Red Hat. systemd ~~~~~~~ The kill mode of the varnish service was changed from ``process`` to ``mixed`` to ensure that the cache process is killed if the manager process is timed out by systemd. Otherwise, a race exists with the cache process where a restart is carried on before the old cache process exits, creating conflict on resources such as listen ports. *eof* varnish-7.5.0/doc/sphinx/whats-new/changes-7.2.rst000066400000000000000000000132201457605730600216510ustar00rootroot00000000000000.. _whatsnew_changes_7.2: %%%%%%%%%%%%%%%%%%%%%% Changes in Varnish 7.2 %%%%%%%%%%%%%%%%%%%%%% For information about updating your current Varnish deployment to the new version, see :ref:`whatsnew_upgrading_7.2`. A more detailed and technical account of changes in Varnish, with links to issues that have been fixed and pull requests that have been merged, may be found in the `change log`_. .. _change log: https://github.com/varnishcache/varnish-cache/blob/master/doc/changes.rst varnishd ======== Extensions ~~~~~~~~~~ From the very first days of Varnish, we have been talking about having an extension points for "more advanced stuff" and we did, by and large, keep a place ready for it in the overall architecture. Now a credible use-case finally appeared, and we have implemented "Varnish Extensions" (VTLA: "VEXT"), which can both be used to load ambient VMODs and to implement entirely new functionaly, for instance stevedores. See :ref:`ref-vext` in the reference manual for more information. Parameters ~~~~~~~~~~ Duration values (with a unit in seconds) can optionally take a duration unit with the same syntax as VCL. For example, the default values of ``default_ttl``, ``default_grace`` and ``default_keep`` were changed respectively from ``120.000``, ``10.000`` and ``0.000`` to ``2m``, ``10s`` and ``0s``. The platform-dependent ``tcp_keepalive_time`` parameter is supported on MacOS. The new ``vcc_feature`` bits parameter replaces previous ``vcc_*`` boolean parameters. The latter still exist as deprecated aliases. Other changes in varnishd ~~~~~~~~~~~~~~~~~~~~~~~~~ The metadata VMODs exposes to Varnishd has changed to a non-binary format, and it is incompatible with all previous releases. That makes it possible for the VCC (compilation) process to avoid opening the VMODs with ``dlopen(3)``, which is both faster and safer. Background fetch tasks are no longer queued as this could result in slow grace hits subject to indefinite delays when thread pools are saturated. Changes to VCL ============== VCL variables ~~~~~~~~~~~~~ ESI sub-requests can no longer inherit a ``req.http.transfer-encoding`` header since the request body is strictly handled by the top request. 
The ``resp.http.via`` header generated by Varnish uses ``server.identity`` which defaults to the host name. A ``req.http.via`` header is generated also before entering ``vcl_recv``. If a client request or backend response already had a Via header, it is now appended to instead of overwritten. A ``resp.http.via`` header is no longer overwritten by varnish, but rather appended to. The ``server.identity`` variable is guaranteed to be a single token as defined in the HTTP grammar, to safely be used as either a host name or pseudonym in Via headers. The ``now`` variable remains constant in a VCL subroutine. This was already the case, but is now (pun intended) formally defined behavior. It keeps the same value even if the execution blocks for a significant time, for example while calling a VMOD function. Bundled VMODs ============= For a real time timestamp, the function ``std.now()`` can be used instead. There is also a new ``std.timed_call()`` to measure the execution time of a subroutine. Cookie headers generated by vmod_cookie no longer have a spurious trailing semi-colon (``';'``) at the end of the string. varnishlog ========== The ``Begin`` log records may contain a 4th field with the sub-level of sub-tasks. The ``Begin[4]`` field is used by the ``-E`` option (or lack thereof) in log utilities to include sub-tasks or not. Internally, only ESI tasks are subject to this filtering, but it can apply to tasks spawned by VMODs too. Similarly, the ``Link`` record has the same optional 4th field. .. XXX: any reason against ``varnish{hist,top} -k``? The ``-k`` option from ``varnishlog`` is now available in ``varnishncsa``. varnishstat =========== The unused counter ``MAIN.fetch_no_thread`` was repurposed and renamed to ``MAIN.bgfetch_no_thread`` to signal when background fetch tasks fail to be scheduled because thread pools are saturated. To help estimate the rate of ``vsl_space`` consumption, the new counter ``MAIN.shm_bytes`` was added. It offers a finer-grained metric than the existing ``MAIN.shm_cycles`` that depends on the ``vsl_space`` setting. A new contribution script called ``varnishstatdiff`` can be used to compare the output of two ``varnishstat -1`` executions with a friendly diff format for ``varnishstat``'s specific output. varnishtest =========== New macros ``${pkg_version}`` and ``${pkg_branch}`` expanding respectively to ``7.2.0`` and ``7.2`` for the current release. It is possible to match the text on screen against a regular expression with the new ``process -match`` command. The new ``filewrite [-a]`` command can put or append text into a file. A Varnish instance name in a VTC is used by default as the server identity for predictable Via headers. For example:: varnish v1 -vcl+backend { ... } The expected Via header is:: Via: 1.1 v1 (Varnish/7.2) The instance name can still be set to a different value using the ``-arg`` command to change the ``varnishd -i`` option. Changes for developers and VMOD authors ======================================= The ``varnishtest -i`` option only works from a Varnish source tree, in which case the new macro ``${topsrc}`` is available in addition to the old ``${topbuild}`` macro. The functions ``VRT_AddVDP()``, ``VRT_AddVFP()``, ``VRT_RemoveVDP()`` and ``VRT_RemoveVFP()`` are deprecated. The ``VCS_String()`` function can take the string ``"B"`` for the package branch. The ``vnum.h`` functions are exposed to VMOD and VEXT authors. The termination rules for ``WRK_BgThread()`` were relaxed to allow VMODs to use it. 
*eof* varnish-7.5.0/doc/sphinx/whats-new/changes-7.3.rst000066400000000000000000000141041457605730600216540ustar00rootroot00000000000000.. _whatsnew_changes_7.3: %%%%%%%%%%%%%%%%%%%%%% Changes in Varnish 7.3 %%%%%%%%%%%%%%%%%%%%%% For information about updating your current Varnish deployment to the new version, see :ref:`whatsnew_upgrading_7.3`. A more detailed and technical account of changes in Varnish, with links to issues that have been fixed and pull requests that have been merged, may be found in the `change log`_. .. _change log: https://github.com/varnishcache/varnish-cache/blob/master/doc/changes.rst varnishd ======== Parameters ~~~~~~~~~~ There is a new parameter ``transit_buffer`` disabled by default to limit the amount of storage used for uncacheable responses. This is useful in situations where slow clients may consume large but uncacheable objects, to prevent them from filling up storage too fast at the expense of cacheable resources. When transit buffer is enabled, a client request will effectively hold its backend connection open until the client response delivery completes. ESI processing changes ---------------------- Response status codes other than 200 and 204 are now considered errors for ESI fragments. Previously, any ``ESI:include`` object would be included, no matter what the status of it were, 200, 503, didn't matter. From now on, by default, only objects with 200 and 204 status will be included and any other status code will fail the parent ESI request. If objects with other status should be delivered, they should have their status changed to 200 in VCL, for instance in ``sub vcl_backend_error{}``, ``vcl_synth{}`` or ``vcl_deliver{}``. If ``param.set feature +esi_include_onerror`` is used, and the ```` tag has a ``onerror="continue"`` attribute, any and all ESI:include objects will be delivered, no matter what their status might be, and not even a partial delivery of them will fail the parent ESI request. To be used with great caution. Other changes in varnishd ~~~~~~~~~~~~~~~~~~~~~~~~~ In addition to classic Unix-domain sockets, Varnish now supports abstract sockets. If the operating system supports them, as does any fairly recent Linux kernel, abstract sockets can be specified using the commonplace ``@`` notation for accept sockets, e.g.:: varnishd -a @kandinsky Weak ``Last-Modified`` headers whose timestamp lies within one second of the corresponding ``Date`` header are no longer candidates for revalidation. This means that a subsequent fetch will not, when a stale object is available, include an ``If-Modified-Since`` header. A weak ``Last-Modified`` header does not prevent ``Etag`` revalidation. A cache hit on an object being streamed no longer prevents delivery of partial responses (status code 206) to range requests. Changes to VCL ============== VCL variables ~~~~~~~~~~~~~ The variables ``req.xid``, ``bereq.xid`` and ``sess.xid`` are now integers instead of strings, but should remain usable without a VCL change in a string context. Transit buffer can be controlled per fetch with the ``beresp.transit_buffer`` variable. Other changes to VCL ~~~~~~~~~~~~~~~~~~~~ Backends have a new ``.via`` attribute optionally referencing another backend:: backend detour { .host = "..."; } backend destination { .host = "..."; .via = detour; } Attempting a connection for ``destination`` connects to ``detour`` with a PROXYv2 protocol header targeting ``destination``'s address. 
Optionally, the ``destination`` backend could use the other new ``.authority`` attribute to define an authority TLV in the PROXYv2 header. Backends can connect to abstract sockets on linux:: backend miro { .path = "@miro"; } This is the same syntax as the ``varnishd -a`` command line option. Probes have a new ``.expect_close`` attribute defaulting to ``true``, matching the current behavior. Setting it to ``false`` will defer final checks until after the probe times out. varnishlog ========== The in-memory and on-disk format of VSL records changed to allow 64bit VXID numbers. The new binary format is **not compatible** with previous versions, and log dumps performed with a previous Varnish release are no longer readable from now on. Consequently, unused log tags have been removed. The VXID range is limited to ``VRT_INTEGER`` to fit in VCL the variables ``req.xid``, ``bereq.xid`` and ``sess.xid``. A ``ReqStart`` record is emitted for bad requests, allowing ``varnishncsa`` to find the client IP address. varnishadm ========== The ``debug.xid`` command generally used by ``varnishtest`` now sets up the next VXID directly. varnishtest =========== It is now possible to send special keys NPAGE, PPAGE, HOME and END to a process. The ``-nolen`` option is implied for ``txreq`` and ``txresp`` when either ``Content-Length`` or ``Transfer-Encoding`` headers are present. A new ``stream.peer_window`` variable similar to ``stream.window`` is available for HTTP/2 checks. Changes for developers and VMOD authors ======================================= There is a new convenience macro ``WS_TASK_ALLOC_OBJ()`` to allocate objects from the current tasks' workspace. The ``NO_VXID`` macro can be used to explicitly log records outside of a transaction. Custom backend implementations are now in charge of printing headers, which avoids duplicates when a custom implementation relied on ``http_*()`` that would also log the headers being set up. The ``VRT_new_backend*()`` functions take an additional backend argument, the optional via backend. It can not be a custom backend implementation, but it can be a director resolving a native backend. There is a new ``authority`` field for via backends in ``struct vrt_backend``. There is a new ``exp_close`` field in ``struct vrt_backend_probe``. Directors which take and hold references to other directors via ``VRT_Assign_Backend()`` (typically any director which has other directors as backends) are now expected to implement the new ``.release`` callback of type ``void vdi_release_f(VCL_BACKEND)``. This function is called by ``VRT_DelDirector()``. The implementation is expected drop any backend references which the director holds (again using ``VRT_Assign_Backend()`` with ``NULL`` as the second argument). *eof* varnish-7.5.0/doc/sphinx/whats-new/changes-7.4.rst000066400000000000000000000101461457605730600216570ustar00rootroot00000000000000.. _whatsnew_changes_7.4: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Changes in Varnish **7.4** %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% For information about updating your current Varnish deployment to the new version, see :ref:`whatsnew_upgrading_7.4`. A more detailed and technical account of changes in Varnish, with links to issues that have been fixed and pull requests that have been merged, may be found in the `change log`_. .. _change log: https://github.com/varnishcache/varnish-cache/blob/master/doc/changes.rst varnishd ======== HTTP/2 header field validation is now more strict with respect to allowed characters. 
The :ref:`vcl-step(7)` manual page has been added to document the VCL state machines. VCL Tracing ~~~~~~~~~~~ VCL tracing now needs to be explicitly activated by setting the ``req.trace`` or ``bereq.trace`` VCL variables, which are initialized from the ``feature +trace`` flag. Only if the trace variables are set will ``VCL_trace`` log records be generated. Consequently, ``VCL_trace`` has been removed from the default ``vsl_mask``, so any trace records will be emitted by default. ``vsl_mask`` can still be used to filter ``VCL_trace`` records. To trace ``vcl_init {}`` and ``vcl_fini {}``, set the ``feature +trace`` flag while the vcl is loaded/discarded. Parameters ~~~~~~~~~~ The ``startup_timeout`` parameter now specifically replaces ``cli_timeout`` for the initial startup only. Changes to VCL ============== The ``Content-Length`` and ``Transfer-Encoding`` headers are now protected. For the common use case of ``unset (be)req.http.Content-Length`` to dismiss a body, ``unset (be)req.body`` should be used. varnishlog ========== Object creation failures by the selected storage engine are now logged under the ``Error`` tag as ``Failed to create object object from %s %s``. varnishadm ========== Tabulation of the ``vcl.list`` CLI output has been modified slightly. varnishstat =========== The counter ``MAIN.http1_iovs_flush`` has been added to track the number of premature ``writev()`` calls due to an insufficient number of IO vectors. This number is configured through the ``http1_iovs`` parameter for client connections and implicitly defined by the amount of free workspace for backend connections. varnishtest =========== The basename of the test directory is now available as the ``vtcid`` macro to serve as a unique string across concurrently running tests. The ``varnishd_args_prepend`` and ``varnishd_args_append`` macros have been added to allow addition of arguments to ``varnishd`` invocations before and after those added by ``varnishtest`` by default. ``User-Agent`` request and ``Server`` response headers are now created by default, containing the respective client and server name. The ``txreq -nouseragent`` and ``txresp -noserver`` options disable addition of these headers. Changes for developers and VMOD authors ======================================= Call sites of VMOD functions and methods can now be restricted to built-in subroutines using the ``$Restrict`` stanza in the VCC file. ``.vcc`` files of VMODs are now installed to ``/usr/share/varnish/vcc`` (or equivalent) to enable re-use by other tools like code editors. API Changes ~~~~~~~~~~~ The ``varnishapi`` version has been increased to 3.1 and the ``VSHA256_*``, ``VENC_Encode_Base64()`` and ``VENC_Decode_Base64()`` functions are now exposed. In ``struct vsmwseg`` and ``struct vsm_fantom``, the ``class`` member has been renamed to ``category``. The ``VSB_quote_pfx()`` (and, consequently, ``VSB_quote()``) function no longer produces ``\v`` for a vertical tab. This improves compatibility with JSON. Additions to varnish C header files ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The ``PTOK()`` macro has been added to ``vas.h`` to simplify error checking of ``pthread_*`` POSIX functions. The ``v_cold`` macro has been added to add ``__attribute__((cold))`` on compilers supporting it. It is used for ``VRT_fail()`` to mark failure code paths as cold. The utility macros ``ALLOC_OBJ_EXTRA()`` and ``ALLOC_FLEX_OBJ()`` have been added to ``miniobj.h`` to simplify allocation of objects larger than a struct and such with a flexible array. 
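To round off, a minimal sketch of the VCL tracing change described at the top
of these notes, using a made-up ``X-Debug-Trace`` header to raise tracing for
selected requests only::

    sub vcl_recv {
        # hypothetical: trace only requests tagged by a debug header
        if (req.http.X-Debug-Trace) {
            set req.trace = true;
        }
    }
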
*eof* varnish-7.5.0/doc/sphinx/whats-new/changes-7.5.rst000066400000000000000000000276321457605730600216700ustar00rootroot00000000000000.. _whatsnew_changes_7.5: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Changes in Varnish **7.5** %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% For information about updating your current Varnish deployment to the new version, see :ref:`whatsnew_upgrading_7.5`. A more detailed and technical account of changes in Varnish, with links to issues that have been fixed and pull requests that have been merged, may be found in the `change log`_. .. _change log: https://github.com/varnishcache/varnish-cache/blob/master/doc/changes.rst Security ======== CVE-2023-44487 ~~~~~~~~~~~~~~ Also known as the HTTP/2 Rapid Reset Attack, or `VSV 13`_, this vulnerability is addressed with two mitigations introducing several changes since the 7.4.0 release of Varnish Cache. The first one detects and stops Rapid Reset attacks and the second one interrupts the processing of HTTP/2 requests that are no longer open (stream reset, client disconnected etc). .. _VSV 13: https://varnish-cache.org/security/VSV00013.html CVE-2023-43622 ~~~~~~~~~~~~~~ Another denial of service attack vector received a CVE number in the aftermath of the Rapid Reset debacle. `VSV 14`_ is called the HTTP/2 Broke Window attack and can be summarized as the ability for clients to hold a server still by not crediting the control flow window of HTTP/2 streams. .. _VSV 14: https://varnish-cache.org/security/VSV00014.html varnishd ======== Parameters ~~~~~~~~~~ The default value of ``cli_limit`` has been increased from 48KB to 64KB to avoid truncating the ``param.show -j`` output for common use cases. A new ``pipe_task_deadline`` specifies the maximum duration of a pipe transaction. The default value is the special value "never" to align with the former lack of such timeout:: # both equivalent for now param.set pipe_task_deadline never param.reset pipe_task_deadline All the timeout parameters that can be disabled accept the "never" value: - ``between_bytes_timeout`` - ``cli_timeout`` - ``connect_timeout`` - ``first_byte_timeout`` - ``idle_send_timeout`` - ``pipe_task_deadline`` - ``pipe_timeout`` - ``send_timeout`` - ``startup_timeout`` The :ref:`varnishd(1)` manual advertises the ``timeout`` flag for these parameters. The following parameters address the HTTP/2 Rapid Reset attach: - ``h2_rapid_reset`` (duration below which a reset is considered rapid) - ``h2_rapid_reset_limit`` (maximum number of rapid resets per period) - ``h2_rapid_reset_period`` (the sliding period to track rapid resets) The new ``h2_window_timeout`` parameter defines how long an HTTP/2 stream can stall its delivery waiting for a control flow window update. A stream without any credits is considered broke, and if all streams are broke when the new timeout triggers the entire connection is considered bankrupt. A new bit flag ``vcl_req_reset`` for the ``feature`` parameter interrupts client request tasks during VCL transitions when an HTTP/2 stream is no longer open. The result is equivalent to a ``return (fail);`` statement and can save significant server resources. It can also break setups expecting requests to always be fully processed, even when they are not delivered. Bits parameters ~~~~~~~~~~~~~~~ In Varnish 7.1.0 the ``param.set`` command grew a new ``-j`` option that displays the same output as ``param.show -j`` for the parameter that is successfully updated. 
The goal was to atomically change a value and communicate how a subsequent ``param.show`` would render it. This could be used for consistency checks, to ensure that a parameter was not changed by a different party. Collecting how the parameter is displayed can be important for example for floating-point numbers parameters that could be displayed with different resolutions, or parameters that can take units and have multiple representations. Here is a concrete example:: $ varnishadm param.set -j workspace_client 16384 | jq '.[3].value' 16384 $ varnishadm param.set -j workspace_client 128k | jq '.[3].value' 131072 However, this could not work with bits parameters:: $ varnishadm param.set -j feature +http2 | jq -r '.[3].value' +http2,+validate_headers If the ``feature`` parameter is changed, reusing the output of ``param.set`` cannot guarantee the restoration that exact value:: # third party intervention $ varnishadm param.set feature +no_coredump | jq -r '.[3].value' # attempt at restoring the captured value $ varnishadm param.set -j feature +http2,+validate_headers | jq -r '.[3].value' +http2,+no_coredump,+validate_headers To fill this gap, bits parameters are now displayed as absolute values, relative to none of the bits being set. A list of bits can start with the special value ``none`` to clear all the bits, followed by specific bits to raise:: # atomically update and capture feature flags $ varnishadm param.set -j feature +http2 | jq -r '.[3].value' none,+http2,+validate_headers # very insistent systems administrator $ varnishadm param.set feature +no_coredump # successful attempt at restoring the captured value $ varnishadm param.set -j feature none,+http2,+validate_headers | jq -r '.[3].value' none,+http2,+validate_headers The output of ``param.show`` and ``param.set`` is now idempotent for bits parameters, and can be used by a consistency check system to restore a parameter to its desired value. Almost all bits parameters are displayed as bits set relative to a ``none`` value. The notable exception is ``vsl_mask`` that is expressed with bits cleared. For this purpose the ``vsl_mask`` parameter is now displayed as bits cleared relative to an ``all`` value:: $ varnishadm param.set -j vsl_mask all,-Debug | jq -r '.[3].value' all,-Debug The special value ``default`` for bits parameters was deprecated in favor of the generic ``param.reset`` command. It might be removed in a future release. Other changes in varnishd ~~~~~~~~~~~~~~~~~~~~~~~~~ The CLI script specified with the ``-I`` option must end with a new line character or ``varnishd`` will fail to start. Previously, an incomplete last line would be ignored. Changes to VCL ============== VCL variables ~~~~~~~~~~~~~ A new ``bereq.task_deadline`` variable is available in ``vcl_pipe`` to override the ``pipe_task_deadline`` parameter. All the timeouts that can be overridden in VCL can be unset as well: - ``bereq.between_bytes_timeout`` - ``bereq.connect_timeout`` - ``bereq.first_byte_timeout`` - ``bereq.task_deadline`` - ``sess.idle_send_timeout`` - ``sess.send_timeout`` - ``sess.timeout_idle`` - ``sess.timeout_linger`` They are unset by default, and if they are read unset, the parameter value is returned. If the timeout parameter was disabled with the "never" value, it is capped in VCL to the maximum decimal number (999999999999.999). It is not possible to disable a timeout in VCL. ESI ~~~ In the 7.3.0 release a new error condition was added to ESI fragments. A fragment is considered valid only for the response status code 200 and 204. 
However, when introduced it also changed the default behavior of the feature flag ``esi_include_onerror`` in an inconsistent way. The behavior is reverted to the traditional Varnish handling of ESI, and the effect of the feature flag is clarified: - by default, fragments are always included, even errors - the feature flag ``esi_include_onerror`` enable processing of the ``onerror`` attribute of the ```` tag - ``onerror="continue"`` allows a parent request to resume its delivery after a sub-request failed - when streaming is disabled for the sub-request, the ESI fragment is omitted as mandated by the ESI specification See :ref:`users-guide-esi` for more information. Other changes to VCL ~~~~~~~~~~~~~~~~~~~~ The new ``+fold`` flag for ACLs merges adjacent subnets together and optimize out subnets for which there exist another all-encompassing subnet. VMODs ===== A new :ref:`vmod_h2(3)` can override the ``h2_rapid_reset*`` parameters on a per-session basis. varnishlog ========== The ``SessClose`` record may contain the ``RAPID_RESET`` reason. This can be used to monitor attacks successfully mitigated or detect false positives. When the ``feature`` flag ``vcl_req_reset`` is raised, an interrupted client logs a ``Reset`` timestamps, and the response status code 408 is logged. When a ``BackendClose`` record includes a reason field, it now shows the reason tag (for example ``RX_TIMEOUT``) instead of its description (Receive timeout) to align with ``SessClose`` records. See :ref:`vsl(7)`. The ``ExpKill`` tag can be used to troubleshoot a cache policy. It is masked by default because it is very verbose and requires a good understanding of Varnish internals in the expiry vicinity. A new field with the number of hits is present in the ``EXP_Expired`` entry of an object. Objects removed before they expired are now logged a new entry ``EXP_Removed``, removing a blind spot. Likewise, purged objects are no longer logged as expired, but removed instead. The ``EXP_expire`` entry formerly undocumented was renamed to ``EXP_Inspect`` for clarity and consistency. A new ``VBF_Superseded`` entry explains which object is evicting another one. varnishncsa =========== A new custom format ``%{Varnish:default_format}x`` expands to the output format when nothing is specified. This allows enhancing the default format without having to repeat it:: varnishncsa -F ``%{Varnish:default_format}x %{Varnish:handling}x`` varnishstat =========== A new ``MAIN.sc_rapid_reset`` counter counts the number of HTTP/2 connections closed because the number of rapid resets exceed the limit over the configured period. Likewise, ``MAIN.sc_bankrupt`` counts the number of HTTP/2 connections closed because all streams ran out of credits and ``h2_window_timeout`` triggered. Their ``MAIN.req_reset`` counterpart counts the number of time a client task was prematurely failed because the HTTP/2 stream it was processing was no longer open and the feature flag ``vcl_req_reset`` was raised. A new counter ``MAIN.n_superseded`` adds visibility on how many objects are inserted as the replacement of another object in the cache. This can give insights regarding the nature of churn in a cache. 
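As a hedged example, a one-shot query is a quick way to keep an eye on these
new counters::

    varnishstat -1 -f MAIN.sc_rapid_reset -f MAIN.sc_bankrupt \
        -f MAIN.req_reset -f MAIN.n_superseded
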
varnishtest =========== When an HTTP/2 stream number does not matter and the stream is handled in a single block, the automatic ``next`` identifier can be used:: server s1 { stream next { rxreq txresp } -run } -start It is now possible to include other VTC fragments:: include common-server.vtc common-varnish.vtc An include command takes at least one file name and expands it in place of the include command itself. There are no guards against recursive includes. Changes for developers and VMOD authors ======================================= The ``VSB_tofile()`` function can work with VSBs larger than ``INT_MAX`` and tolerate partial writes. The semantics for ``vtim_dur`` changed so that ``INFINITY`` is interpreted as never timing out. A zero duration that was used in certain scenarios as never timing out is now interpreted as non-blocking or when that is not possible, rounded up to one millisecond. A negative value in this context is considered an expired deadline as if zero was passed, giving a last chance for operations to succeed before timing out. To support this use case, new functions convert ``vtim_dur`` to other values: - ``VTIM_poll_tmo()`` computes a timeout for ``poll(2)`` - ``VTIM_timeval_sock()`` creates a ``struct timeval`` for ``setsockopt(2)`` The value ``NAN`` is used to represent unset timeouts in VCL with one notable exception. The ``struct vrt_backend`` duration fields cannot be initialized to ``NAN`` and zero was the unset value, falling back to parameters. Zero will disable a timeout in a backend definition (which can be overridden by VCL variables) and a negative value will mean unset. This is an API breakage of ``struct vrt_backend`` and its consumers. Likewise, VMODs creating their own lock classes with ``Lck_CreateClass()`` must stop using zero an indefinite ``Lck_CondWaitTimeout()``. *eof* varnish-7.5.0/doc/sphinx/whats-new/index.rst000066400000000000000000000041771457605730600210570ustar00rootroot00000000000000.. Copyright (c) 2013-2023 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whats-new-index: %%%%%%%%%%%%%%%%%%%%%% What's new / Upgrading %%%%%%%%%%%%%%%%%%%%%% This section describes the changes and improvements between different versions of Varnish, and what upgrading between the different versions entail. Varnish **7.5** ------------------------- **Note: These are working documents for a future release, with running updates for changes in the development branch. For changes in the released versions of Varnish, see the chapters listed below.** .. toctree:: :maxdepth: 2 changes-7.5 upgrading-7.5 Varnish 7.4 ----------- .. toctree:: :maxdepth: 2 changes-7.4 upgrading-7.4 Varnish 7.3 ----------- .. toctree:: :maxdepth: 2 changes-7.3 upgrading-7.3 Varnish 7.2 ----------- .. toctree:: :maxdepth: 2 changes-7.2 upgrading-7.2 Varnish 7.1 ----------- .. toctree:: :maxdepth: 2 changes-7.1 upgrading-7.1 Varnish 7.0 ----------- .. toctree:: :maxdepth: 2 changes-7.0 upgrading-7.0 Varnish 6.6 ----------- .. toctree:: :maxdepth: 2 changes-6.6 upgrading-6.6 Varnish 6.5 ----------- .. toctree:: :maxdepth: 2 changes-6.5 upgrading-6.5 Varnish 6.4 ----------- .. toctree:: :maxdepth: 2 changes-6.4 upgrading-6.4 Varnish 6.3 ----------- .. toctree:: :maxdepth: 2 changes-6.3 upgrading-6.3 Varnish 6.2 ----------- .. toctree:: :maxdepth: 2 changes-6.2 upgrading-6.2 Varnish 6.1 ----------- .. toctree:: :maxdepth: 2 changes-6.1 upgrading-6.1 Varnish 6.0 ----------- .. 
toctree:: :maxdepth: 2 changes-6.0 upgrading-6.0 Varnish 5.2 ----------- .. toctree:: :maxdepth: 2 changes-5.2 upgrading-5.2 Varnish 5.1 ----------- .. toctree:: :maxdepth: 2 changes-5.1 upgrading-5.1 Varnish 5.0 ----------- .. toctree:: :maxdepth: 2 relnote-5.0 changes-5.0 upgrading-5.0 Varnish 4.1 ----------- .. toctree:: :maxdepth: 2 changes-4.1 upgrading-4.1 Varnish 4.0 ----------- .. toctree:: :maxdepth: 2 upgrading-4.0 varnish-7.5.0/doc/sphinx/whats-new/relnote-5.0.rst000066400000000000000000000114761457605730600217200ustar00rootroot00000000000000.. Copyright (c) 2016-2017 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_relnote_5.0: Varnish 5.0 Release Note ======================== This is the first Varnish release after the Varnish Project moved out of Varnish Software's basement, so to speak, and it shows. But it is also our 10 year anniversary release, `Varnish 1.0 was released`_ on September 20th 2006. That also means that we have been doing this for 10 years without any bad security holes. So yeah... 5.0 is not entirely what we had hoped it would be, but we are as proud as one can possibly be anyway. To keep this release note short(er), we have put the purely technical stuff in two separate documents: * :ref:`whatsnew_changes_5.0` * :ref:`whatsnew_upgrading_5.0` How to get Varnish 5.0 ---------------------- `Source download `_ Packages for mainstream operating systems should appear in as soon as they trickle through the machinery. Reasons to upgrade to Varnish 5.0 --------------------------------- The separate VCL/VCL labels feature can probably help you untangle your VCL code if it has become too complex. Upgrading from 4.1 to get that feature should be a no-brainer. The HTTP/2 code is not mature enough for production, and if you want to start to play with H2, you should not upgrade to 5.0, but rather track -trunk from github and help us find all the bugs before the next release. The Shard director is new in the tree, but it has a lot of live hours out of tree. Upgrading from 4.1 to 5.0 to get that should also be a no-brainer. We have also fixed at lot of minor bugs, and improved many details here and there, See :ref:`whatsnew_upgrading_5.0` for more of this. Reasons not to upgrade to Varnish 5.0 ------------------------------------- None that we know of at this time. Only in very special cases should you need to modify your VCL. Next release ------------ Next release is scheduled for March 15th 2017, and will most likely be Varnish 5.1. The obligatory thank-you speech ------------------------------- This release of Varnish Cache is brought to you by the generous support and donations of money and manpower from four companies: * Fastly * Varnish Software * UPLEX * The company which prefers to simply be known as "ADJS" Without them, this release, and for that matter all the previous ones, would not have happened. Even though they are all employees of those very same companies, these developers merit personal praise: * Martin - HTTP/2 HPACK header compression code, stevedore API, VSL * Nils & Geoff - Shard backend director, ban-lurker improvements * Guillame - HTTP/2 support for varnishtest * Dridi - Backend temperatures etc. * Federico - Too many fixes and ideas to count * Lasse - Our tireless release-manager * Devon - Performance insights and critical review. * The rest of the V-S crew - Too many things to list. 
We need more money ------------------ Until now Varnish Software has done a lot of work for the Varnish Cache project, but for totally valid reasons, they are scaling that back and the project either needs to pick up the slack or drop some of those activities. It is important that people understand that Free and Open Source Software isn't the same as gratis software: Somebody has to pay the developers mortgages and student loans. A very large part of the Varnish development is funded through the `Varnish Moral License`_, which enables Poul-Henning Kamp to have Varnish as his primary job, but right now he is underfunded to the tune of EUR 2000-3000 per month. Please consider if your company makes enough money using Varnish Cache, to spare some money, or employee-hours for its future maintenance and development. We also need more manpower -------------------------- First and foremost, we could really use a Postmaster to look after our mailman mailing lists, including the increasingly arcane art of anti-spam techniques and invocations. We also need to work more on our documentation, it is in bad need of one or more writers which can actually write text rather than code. We could also use more qualified content for our new project homepage, so a webmaster is on our shopping list as well. Finally, we can always use C-developers, we have more ideas than we have coders, and since we have very high standards for quality things take time to write. The best way to get involved is to just jump in and do stuff that needs done. Here is the `Varnish Cache github page `_. And here is the `Varnish Projects homepage on github `_. Welcome on board! *phk* .. _Varnish Moral License: http://phk.freebsd.dk/VML .. _Varnish 1.0 was released: https://sourceforge.net/p/varnish/news/2006/09/varnish-10-released/ varnish-7.5.0/doc/sphinx/whats-new/upgrading-4.0.rst000066400000000000000000000161001457605730600222140ustar00rootroot00000000000000.. Copyright (c) 2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_upgrading_4_0: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 4.0 %%%%%%%%%%%%%%%%%%%%%%%% Changes to VCL ============== The backend fetch parts of VCL have changed in Varnish 4. We've tried to compile a list of changes needed to upgrade here. Version statement ~~~~~~~~~~~~~~~~~ To make sure that people have upgraded their VCL to the current version, Varnish now requires the first line of VCL to indicate the VCL version number:: vcl 4.0; req.request is now req.method ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To align better with RFC naming, `req.request` has been renamed to `req.method`. vcl_fetch is now vcl_backend_response ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Directors have been moved to the vmod_directors ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To make directors (backend selection logic) easier to extend, the directors are now defined in loadable VMODs. Setting a backend for future fetches in `vcl_recv` is now done as follows:: sub vcl_init { new cluster1 = directors.round_robin(); cluster1.add_backend(b1, 1.0); cluster1.add_backend(b2, 1.0); } sub vcl_recv { set req.backend_hint = cluster1.backend(); } Note the extra `.backend()` needed after the director name. 
Use the hash director as a client director ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Since the client director was already a special case of the hash director, it has been removed, and you should use the hash director directly:: sub vcl_init { new h = directors.hash(); h.add_backend(b1, 1); h.add_backend(b2, 1); } sub vcl_recv { set req.backend_hint = h.backend(client.identity); } vcl_error is now vcl_backend_error ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To make a distinction between internally generated errors and VCL synthetic responses, `vcl_backend_error` will be called when varnish encounters an error when trying to fetch an object. error() is now synth() ~~~~~~~~~~~~~~~~~~~~~~ And you must explicitly return it:: return (synth(999, "Response")); Synthetic responses in vcl_synth ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Setting headers on synthetic response bodies made in vcl_synth are now done on resp.http instead of obj.http. The synthetic keyword is now a function:: if (resp.status == 799) { set resp.status = 200; set resp.http.Content-Type = "text/plain; charset=utf-8"; synthetic("You are " + client.ip); return (deliver); } obj in vcl_error replaced by beresp in vcl_backend_error ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To better represent a the context in which it is called, you should now use `beresp.*` vcl_backend_error, where you used to use `obj.*` in `vcl_error`. hit_for_pass objects are created using beresp.uncacheable ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Example:: sub vcl_backend_response { if (beresp.http.X-No-Cache) { set beresp.uncacheable = true; set beresp.ttl = 120s; return (deliver); } } req.* not available in vcl_backend_response ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ req.* used to be available in `vcl_fetch`, but after the split of functionality, you only have 'bereq.*' in `vcl_backend_response`. vcl_* reserved ~~~~~~~~~~~~~~ Any custom-made subs cannot be named 'vcl_*' anymore. This namespace is reserved for builtin subs. req.backend.healthy replaced by std.healthy(req.backend_hint) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Remember to import the std module if you're not doing so already. client.port, and server.port replaced by respectively std.port(client.ip) and std.port(server.ip) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ `client.ip` and `server.ip` are now proper data types, which renders as an IP address by default. You need to use the `std.port()` function to get the port number. Invalidation with purge ~~~~~~~~~~~~~~~~~~~~~~~ Cache invalidation with purges is now done via `return(purge)` from `vcl_recv`. The `purge;` keyword has been retired. obj is now read-only ~~~~~~~~~~~~~~~~~~~~ `obj` is now read-only. `obj.last_use` has been retired. Some return values have been replaced ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Apart from the new `synth` return value described above, the following has changed: - `vcl_recv` must now return `hash` instead of `lookup` - `vcl_hash` must now return `lookup` instead of `hash` - `vcl_pass` must now return `fetch` instead of `pass` Backend restarts are now retry ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In 3.0 it was possible to do `return(restart)` after noticing that the backend response was wrong, to change to a different backend. This is now called `return(retry)`, and jumps back up to `vcl_backend_fetch`. This only influences the backend fetch thread, client-side handling is not affected. 
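A minimal sketch, with the 503 check standing in for whatever condition marks
a backend response as wrong (selecting a different backend for the retry would
be done in `vcl_backend_fetch`)::

    sub vcl_backend_response {
        # retry the fetch once on an error response
        if (beresp.status == 503 && bereq.retries == 0) {
            return (retry);
        }
    }
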
default/builtin VCL changes ~~~~~~~~~~~~~~~~~~~~~~~~~~~ The VCL code that is appended to user-configured VCL automatically is now called the builtin VCL. (previously default.vcl) The builtin VCL now honors Cache-Control: no-cache (and friends) to indicate uncacheable content from the backend. The `remove` keyword is gone ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Replaced by `unset`. X-Forwarded-For is now set before vcl_recv ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In many cases, people unintentionally removed X-Forwarded-For when implementing their own vcl_recv. Therefore it has been moved to before vcl_recv, so if you don't want an IP added to it, you should remove it in vcl_recv. Changes to existing parameters ============================== session_linger ~~~~~~~~~~~~~~ `session_linger` has been renamed to `timeout_linger` and it is in seconds now (previously was milliseconds). sess_timeout ~~~~~~~~~~~~ `sess_timeout` has been renamed to `timeout_idle`. sess_workspace ~~~~~~~~~~~~~~ In 3.0 it was often necessary to increase `sess_workspace` if a lot of VMODs, complex header operations or ESI were in use. This is no longer necessary, because ESI scratch space happens elsewhere in 4.0. If you are using a lot of VMODs, you may need to increase either `workspace_backend` and `workspace_client` based on where your VMOD is doing its work. thread_pool_purge_delay ~~~~~~~~~~~~~~~~~~~~~~~ `thread_pool_purge_delay` has been renamed to `thread_pool_destroy_delay` and it is in seconds now (previously was milliseconds). thread_pool_add_delay and thread_pool_fail_delay ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ They are in seconds now (previously were milliseconds). New parameters since 3.0 ======================== vcc_allow_inline_c ~~~~~~~~~~~~~~~~~~ You can now completely disable inline C in your VCL, and it is disabled by default. Other changes ============= New log filtering ~~~~~~~~~~~~~~~~~ The logging framework has a new filtering language, which means that the -m switch has been replaced with a new -q switch. See :ref:`vsl-query(7)` for more information about the new query language. varnish-7.5.0/doc/sphinx/whats-new/upgrading-4.1.rst000066400000000000000000000040601457605730600222170ustar00rootroot00000000000000.. Copyright (c) 2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_upgrading_4_1: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 4.1 %%%%%%%%%%%%%%%%%%%%%%%% Changes to VCL ============== Data type conversion functions now take a fallback ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Data type conversion functions in the std vmod now takes an additional argument *fallback*, which is returned if the conversion does not succeed. Version statement is kept ~~~~~~~~~~~~~~~~~~~~~~~~~ The VCL syntax has not chanced significantly, and as such the Varnish 4.0 version marker is kept for Varnish 4.1. One of the initial lines in a Varnish 4.1 VCL should read:: vcl 4.0; Remote address accessors ~~~~~~~~~~~~~~~~~~~~~~~~ New in 4.1 is the `local.ip` and `remote.ip` representing the (local) TCP connection endpoints. With PROXY listeners the `server.ip` and `client.ip` are set from the PROXY preamble. On normal HTTP listeners the behaviour is unchanged. Management interface ==================== The management interface enabled with ``-M`` previously supported the telnet protocol. Support for telnet control sequences have been retired. Replacement clients like netcat or (preferred) ``varnishadm`` should be used instead. 
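For example, a hedged ``varnishadm`` invocation against a local instance (the
address and secret file locations are only placeholders)::

    varnishadm -T localhost:6082 -S /etc/varnish/secret ping
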
Runtime users and groups ======================== With the new jail support, an additional runtime user (`vcache`) should be used for the Varnish worker child process. Additionally, the ``varnishlog``, ``varnishncsa`` and other Varnish shared log utilities must now be run in a context with `varnish` group membership. Changes to parameters ===================== `vcl_cooldown` is new, and decides how long time a VCL is kept warm after being replaced as the active VCL. The following parameters have been retired: * `group` (security revamp) * `group_cc` (security revamp) * `listen_address` (security revamp) * `pool_vbc` * `timeout_req` - merged with `timeout_idle`. * `user` (security revamp) Minor changes of default values on `workspace_session` and `vsl_mask`. varnish-7.5.0/doc/sphinx/whats-new/upgrading-5.0.rst000066400000000000000000000060511457605730600222210ustar00rootroot00000000000000.. Copyright (c) 2016 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_upgrading_5.0: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 5.0 %%%%%%%%%%%%%%%%%%%%%%%% Changes to VCL ============== * All VCL Objects should now be defined before used * in particular, this is now required for ACLs. The error message for ACLs being used before being defined is confusing - see PR #2021:: Name is a reserved name * VCL names are restricted to alphanumeric characters, dashes (-) and underscores (_). In addition, the first character should be alphabetic. That is, the name should match "[A-Za-z][A-Za-z0-9\_-]*". * Like strings, backends and integers can now be used as boolean expressions in if statements. See ``vcl(7)`` for details. * Add support to perform matches in assignments, obtaining a boolean as result:: set req.http.foo = req.http.bar ~ "bar"; * Returned values from functions and methods' calls can be thrown away. backends ~~~~~~~~ * Added support for the PROXY protocol via ``.proxy_header`` attribute. Possible values are 1 and 2, corresponding to the PROXY protocol version 1 and 2, respectively. vcl_recv ~~~~~~~~ * Added ``return (vcl(label))`` to switch to the VCL labelled `label`. * The ``rollback`` function has been retired. vcl_hit ~~~~~~~ * Replace ``return (fetch)`` with ``return (miss)``. vcl_backend_* ~~~~~~~~~~~~~ * Added read access to ``remote.ip``, ``client.ip``, ``local.ip`` and ``server.ip``. vcl_backend_fetch ~~~~~~~~~~~~~~~~~ * Added write access to ``bereq.body``, the request body. Only ``unset`` is supported at this time. * We now send request bodies by default (see :ref:`whatsnew_changes_5.0_reqbody`). To keep the previous behaviour add the following code before any ``return (..)`` statement in this subroutine:: if (bereq.method == "GET") { unset bereq.body; } vcl_backend_error ~~~~~~~~~~~~~~~~~ * Added write access to ``beresp.body``, the response body. This may replace ``synthetic()`` in future releases. vcl_deliver ~~~~~~~~~~~ * Added read access to ``obj.ttl``, ``obj.age``, ``obj.grace`` and ``obj.keep``. vcl_synth ~~~~~~~~~ * Added write access to ``resp.body``, the response body. This may replace ``synthetic()`` in future releases. Management interface ==================== * To disable CLI authentication use ``-S none``. * ``n_waitinglist`` statistic removed. Changes to parameters ===================== * Added ``ban_lurker_holdoff``. * Removed ``session_max``. This parameter actually had no effect since 4.0 but might come back in a future release. 
* ``vcl_path`` is now a colon-separated list of directories, replacing ``vcl_dir``. * ``vmod_path`` is now a colon-separated list of directories, replacing ``vmod_dir``. Other changes ============= * ``varnishstat(1)`` -f option accepts a ``glob(7)`` pattern. * Cache-Control and Expires headers for uncacheable requests (i.e. passes) will not be parsed. As a result, the RFC variant of the TTL VSL tag is no longer logged. varnish-7.5.0/doc/sphinx/whats-new/upgrading-5.1.rst000066400000000000000000000274741457605730600222300ustar00rootroot00000000000000.. Copyright (c) 2017-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_upgrading_5.1: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 5.1 %%%%%%%%%%%%%%%%%%%%%%%% varnishd command-line options ============================= If you have to change anything at all for version 5.1, it will most likely be the command-line options for `varnishd` in your start scripts, because we have tightened restrictions on which options may be used together. This has served mainly to clarify the use of options for testing purposes, for example using ``varnishd -C`` to check a VCL source for syntactic correctness. We have also added some new options. The details are given in :ref:`varnishd(1)`, but here's a summary: * Added ``-I clifile`` to run CLI commands at startup, before the worker process starts. See :ref:`whatsnew_clifile`. * More than one ``-f`` option is now permitted, to pre-load VCL instances at startup. The last of these becomes the "boot" instance that is active at startup. * Either ``-b`` or one or more ``-f`` options must be specified, but not both, and they cannot both be left out, unless ``-d`` is used to start `varnishd` in debugging mode. If the empty string is specified as the sole ``-f`` option, then `varnishd` starts without starting the worker process, and the management process will accept CLI commands. * Added ``-?`` to print the usage message, which is only printed for this option. * Added the ``-x`` option to print certain kinds of documentation and exit. When ``-x`` is used, it must be the only option. * Only one of ``-F`` or ``-d`` may be used, and neither of these can be used with ``-C``. * Added the ``workuser`` parameter to the ``-j`` option. varnishd parameters =================== * The size of the shared memory log is now limited to 4G-1b (4294967295 bytes). This places upper bounds on the ``-l`` command-line option and on the ``vsl_space`` and ``vsm_space`` parameters. * Added ``clock_step``, ``thread_pool_reserve`` and ``ban_cutoff`` (see :ref:`ref_param_clock_step`, :ref:`ref_param_thread_pool_reserve`, :ref:`ref_param_ban_cutoff`). * ``thread_pool_stack`` is no longer considered experimental, and is more extensively documented, see :ref:`ref_param_thread_pool_stack`. * ``thread_queue_limit`` only applies to queued client requests, see :ref:`ref_param_thread_queue_limit`. * ``vcl_dir`` and ``vmod_dir`` are deprecated and will be removed from a future release, use ``vcl_path`` and ``vmod_path`` instead (see :ref:`ref_param_vcl_path`, :ref:`ref_param_vmod_path`). * All parameters are defined on every platform, including those that are not functional on every platform. Most of these involve features of the TCP stack, such as ``tcp_keepalive_intvl``, ``tcp_keepalive_probes``, ``accept_filter`` and ``tcp_fastopen``. The unavailability of a parameter is documented in the output of the ``param.show`` command.
Setting such a parameter is not an error, but has no effect. Changes to VCL ============== VCL written for Varnish 5.0 will very likely work without changes in version 5.1. We have added some new elements and capabilities to the language (which you might like to start using), clarified some matters, and deprecated some little-used language elements. Type conversions ~~~~~~~~~~~~~~~~ We have put some thought to the interpretation of the ``+`` and ``-`` operators for various combinations of operands with differing data types, many of which count as corner cases (what does it mean, for example, to subtract a string from an IP address?). Recall that ``+`` denotes addition for numeric operands, and string concatenation for string operands; operands may be converted to strings and concatenated, if a string is expected and there is no sensible numeric interpretation. The semantics have not changed in nearly all cases, but the error messages for illegal combinations of operands have improved. Most importantly, we have taken the time to review these cases, so this will be the way VCL works going forward. To summarize: * If both operands of ``+`` or ``-`` are one of BYTES, DURATION, INT or REAL, then the result has the same data type, with the obvious numeric interpretation. If such an expression is evaluated in a context that expects a STRING (for example for assignment to a header), then the arithmetic is done first, and the result it converted to a STRING. * INTs and REALs can be added or subtracted to yield a REAL. * A DURATION can be added to or subtracted from a TIME to yield a TIME. * No other combinations of operand types are legal with ``-``. * When a ``+`` expression is evaluated in a STRING context, then for all other combinations of operand data types, the operands are converted to STRINGs and concatenated. * If a STRING is not expected for the ``+`` expression, then no other combination of data types is legal. Other notes on data types: * When ``bereq.backend`` is set to a director, then it returns an actual backend on subsequent reads if the director resolves to a backend immediately, or the director otherwise. If ``bereq.backend`` was set to a director, then ``beresp.backend`` references the backend to which it was set for the fetch. When either of these is used in string context, it returns the name of the director or of the resolved backend. * Comparisons between symbols of type BACKEND now work properly:: if (bereq.backend == foo.backend()) { # do something specific to the foo backends } * DURATION types may be used in boolean contexts, and are evaluated as false when the duration is less than or equal to zero, true otherwise. * INT, DURATION and REAL values can now be negative. Response codes ~~~~~~~~~~~~~~ Response codes 1000 or greater may now be set in VCL internally. ``resp.status`` is delivered modulo 1000 in client responses. IP address comparison ~~~~~~~~~~~~~~~~~~~~~ IP addresses can now be compared for equality:: if (client.ip == remote.ip) { call do_if_equal; } The objects are equal if they designate equal socket addresses, not including the port number. IPv6 addresses are always unequal to IPv4 addresses (the comparison cannot consider v4-mapped IPv6 addresses). The STEVEDORE type and storage objects ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The data type STEVEDORE for storage backends is now available in VCL and for VMODs. 
Storage objects with names of the form ``storage.SNAME`` will exist in a VCL instance, where ``SNAME`` is the name of a storage backend provided with the ``varnishd`` command-line option ``-s``. If no ``-s`` option is given, then ``storage.s0`` denotes the default storage. The object ``storage.Transient`` always exists, designating transient storage. See :ref:`guide-storage`, and the notes about ``beresp.storage`` and ``req.storage`` below. All VCL subroutines (except ``vcl_fini``) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * Added ``return(fail)`` to immediately terminate VCL processing. In all cases but ``vcl_synth``, control is directed to ``vcl_synth`` with ``resp.status`` and ``resp.reason`` set to 503 and "VCL failed", respectively. ``vcl_synth`` is aborted on ``return(fail)``. ``vcl_fini`` is executed when a VCL instance is unloaded (enters the COLD state) and has no failure condition. * VCL failure is invoked on any attempt to set one of the fields in the first line of a request or response to the empty string, such as ``req.url``, ``req.proto``, ``resp.reason`` and so forth. Client-side VCL subroutines ~~~~~~~~~~~~~~~~~~~~~~~~~~~ * ``req.ttl`` is deprecated, see :ref:`vcl(7)`. vcl_recv ~~~~~~~~ * Added ``req.storage``, which tells Varnish which storage backend to use if you choose to save the request body (see :ref:`std.cache_req_body()`). * ``return(vcl(LABEL))`` may not be called after a restart. It can only be called from the active VCL instance. vcl_backend_response ~~~~~~~~~~~~~~~~~~~~ * Added ``return(pass(DURATION))`` to set an object to hit-for-pass, see :ref:`whatsnew_changes_5.1_hitpass`. * The object ``beresp.storage`` of type STEVEDORE should now be used to set a storage backend; ``beresp.storage_hint`` is deprecated and will be removed in a future release. Setting ``beresp.storage_hint`` to a valid storage will set ``beresp.storage`` as well. If the storage is invalid, ``beresp.storage`` is left untouched. For the case where multiple storage backends have been defined with the ``-s`` command-line option for varnishd, but none is explicitly set in ``vcl_backend_response``, storage selection and the use of the nuke limit has been reworked (see :ref:`ref_param_nuke_limit`). Previously, a storage backend was tried first with a nuke limit of 0, and retried on failure with the limit configured as the ``-p`` parameter ``nuke_limit``. When no storage was specified, Varnish went through every one in round-robin order with a nuke limit of 0 before retrying. Now ``beresp.storage`` is initialized with a storage backend before ``vcl_backend_response`` executes, and the storage set in ``beresp.storage`` after its execution will be used. The configured nuke limit is used in all cases. vmod_std ~~~~~~~~ * Added :ref:`std.getenv()`. * Added :ref:`std.late_100_continue()`. Other changes ============= * The storage backend type umem, long in disuse, has been retired. * ``varnishstat(1)``: * Added the ``cache_hitmiss`` stat to count hits on hit-for-miss objects. * The ``cache_hitpass`` stat now only counts hits on hit-for-pass objects. * ``fetch_failed`` is incremented for any kind of fetch failure -- when there is a failure after ``return(deliver)`` from ``vcl_backend_response``, or when control is diverted to ``vcl_backend_error``. * Added the ``n_test_gunzip`` stat, which is incremented when Varnish verifies a compressed response from a backend -- this operation was previously counted together with ``n_gunzip``.
* Added the ``bans_lurker_obj_killed_cutoff`` stat to count the number of objects killed by the ban lurker to keep the number of bans below ``ban_cutoff``. * ``varnishlog(1)``: * Hits on hit-for-miss and hit-for-pass objects are logged with the ``HitMiss`` and ``HitPass`` tags, respectively. In each case, the log payload is the VXID of the previous transaction in which the object was saved in the cache (as with ``Hit``). * An entry with the ``TTL`` tag and the prefix ``HFP`` is logged to record the duration set for hit-for-pass objects. * Added ``vxid`` as a lefthand side token for VSL queries, allowing for queries that search for transaction IDs in the log. See :ref:`vsl-query(7)`. * ``varnishncsa(1)``: * Clarified the meaning of the ``%r`` formatter, see NOTES in :ref:`varnishncsa(1)`. * Clarified the meaning of the ``%{X}i`` and ``%{X}o`` formatters when the header X appears more than once (the last occurrence is used). * ``varnishtest(1)``: * Added the ``setenv`` and ``write_body`` commands, see :ref:`vtc(7)`. * ``-reason`` replaces ``-msg`` to set the reason string for a response (default "OK"). * Added ``-cliexpect`` to match expected CLI responses to regular expressions. * Added the ``-match`` operator for the ``shell`` command. * Added the ``-hdrlen`` operator to generate a header with a given name and length. * The ``err_shell`` command is deprecated, use ``shell -err -expect`` instead. * The ``${bad_backend}`` macro can now be used for a backend that is always down, which does not require a port definition (as does ``${bad_ip}`` in a backend definition). * ``varnishtest`` can be stopped with the ``TERM``, ``INT`` or ``KILL`` signals, but not with ``HUP``. These signals kill the process group, so that processes started by running tests are stopped. * Added the ``vtest.sh`` tool to automate test builds, see :ref:`whatsnew_changes_5.1_vtest`. varnish-7.5.0/doc/sphinx/whats-new/upgrading-5.2.rst000066400000000000000000000220711457605730600222230ustar00rootroot00000000000000.. Copyright (c) 2017-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_upgrading_5.2: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 5.2 %%%%%%%%%%%%%%%%%%%%%%%% Varnish statistics and logging ============================== There are extensive changes under the hood with respect to statistics counters, but these should all be transparent at the user-level. varnishd parameters =================== The ``vsm_space`` and ``cli_buffer`` parameters are now deprecated and ignored. They will be removed in a future major release. The updated shared memory implementation manages space automatically, so it no longer needs ``vsm_space``. Memory for the CLI command buffer is now dynamically allocated. We have updated the documentation for :ref:`ref_param_send_timeout`, :ref:`ref_param_idle_send_timeout`, :ref:`ref_param_timeout_idle` and :ref:`ref_param_ban_cutoff`. Added the debug bit ``vmod_so_keep``, see :ref:`ref_param_debug` and the notes about changes for developers below. Changes to VCL ============== We have added a few new variables and clarified some matters. VCL written for Varnish 5.1 should run without changes on 5.2. Consistent symbol names ~~~~~~~~~~~~~~~~~~~~~~~ VCL symbols originate from various parts of Varnish: there are built-in variables, subroutines, functions, and the free-form headers. Symbols may live in a namespace denoted by the ``'.'`` (dot) character as in ``req.http.Cache-Control``.
When you create a VCL label, a new symbol becomes available, named after the label. Storage backends always have a name, even if you don't specify one, and they can also be accessed in VCL: for example ``storage.Transient``. Because headers and VCL names could contain dashes, while subroutines or VMOD objects couldn't, this created an inconsistency. All symbols follow the same rules now and must follow the same (case-insensitive) pattern: ``[a-z][a-z0-9_-]*``. You can now write code like:: sub my-sub { new my-obj = my_vmod.my_constuctor(storage.my-store); } sub vcl_init { call my-sub; } As you may notice in the example above, it is not possible yet to have dashes in a vmod symbol. Long storage backend names used to be truncated due to a limitation in the VSC subsystem, this is no longer the case. VCL variables ~~~~~~~~~~~~~ ``req.hash`` and ``bereq.hash`` ------------------------------- Added ``req.hash`` and ``bereq.hash``, which contain the hash value computed by Varnish for cache lookup in the current transaction, to be used in client or backend context, respectively. Their data type is BLOB, and they contain the raw binary hash. You can use :ref:`vmod_blob(3)` to work with the hashes:: import blob; sub vcl_backend_fetch { # Send the transaction hash to the backend as a hex string set bereq.http.Hash = blob.encode(HEX, blob=bereq.hash); } sub vcl_deliver { # Send the hash in a response header as a base64 string set resp.http.Hash = blob.encode(BASE64, blob=req.hash); } ``server.identity`` ------------------- If the ``-i`` option is not set in the invocation of ``varnishd``, then ``server.identity`` is set to the host name (as returned by ``gethostname(3)``). Previously, ``server.identity`` defaulted to the value of the ``-n`` option (or the default instance name if ``-n`` was not set). See :ref:`varnishd(1)`. ``bereq.is_bgfetch`` -------------------- Added ``bereq.is_bgfetch``, which is readable in backend contexts, and is true if the fetch takes place in the background. That is, it is true if Varnish found a response in the cache whose TTL was expired, but was still in grace time. Varnish returns the stale cached response to the client, and initiates the background fetch to refresh the cache object. ``req.backend_hint`` -------------------- We have clarified what happens to ``req.backend_hint`` on a client restart -- it gets reset to the default backend. So you might want to make sure that the backend hint gets set the way you want in that situation. vmod_std ~~~~~~~~ Added :ref:`std.file_exists()`. New VMODs in the standard distribution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ See :ref:`vmod_blob(3)`, :ref:`vmod_purge(3)` and :ref:`vmod_vtc(3)`. Read about them in :ref:`whatsnew_new_vmods`. Bans ~~~~ We have clarified the interpretation of a ban when a comparison in the ban expression is attempted against an unset field, see :ref:`vcl(7)_ban` in :ref:`vcl(7)`. Other changes ============= * ``varnishd(1)``: * The total size of the shared memory space for logs and counters no longer needs to be configured explicitly and therefore the second subargument to ``-l`` is now ignored. * The default value of ``server.identity`` when the ``-i`` option is not set has been changed as noted above. * Also, ``-i`` no longer determines the ``ident`` field used by ``syslog(3)``; now Varnish is always identified by the string ``varnishd`` in the syslog. 
* On a system that supports ``setproctitle(3)``, the Varnish management process will appear in the output of ``ps(1)`` as ``Varnish-Mgt``, and the child process as ``Varnish-Child``. If the ``-i`` option has been set, then these strings in the ps output are followed by ``-i`` and the identity string set by the option. * The ``-f`` option for a VCL source file now honors the ``vcl_path`` parameter if a relative file name is used, see :ref:`varnishd(1)` and :ref:`ref_param_vcl_path`. * The ``-a`` option can now take a name, for example ``-a admin=127.0.0.1:88`` to identify an address used for administrative requests but not regular client traffic. Otherwise, a default name is selected for the listen address (``a0``, ``a1`` and so forth). Endpoint names appear in the log output, as noted below, and may become accessible in VCL in the future. * ``varnishstat(1)``: * In curses mode, the top two lines showing uptimes for the management and child processes show the text ``Not Running`` if one or both of the processes are down. * The interpretation of multiple ``-f`` options in the command line has changed slightly, see :ref:`varnishstat(1)`. * The ``type`` and ``ident`` fields have been removed from the XML and JSON output formats, see :ref:`varnishstat(1)`. * The ``MAIN.s_req`` statistic has been removed, as it was identical to ``MAIN.client_req``. * Added the counter ``req_dropped``. Similar to ``sess_dropped``, this is the number of times an HTTP/2 stream was refused because the internal queue is full. See :ref:`varnish-counters(7)` and :ref:`ref_param_thread_queue_limit`. * ``varnishlog(1)``: * The ``Hit``, ``HitMiss`` and ``HitPass`` log records grew an additional field with the remaining TTL of the object at the time of the lookup. While this should greatly help troubleshooting, it might break tools relying on those records to get the VXID of the object hit during lookup. Instead of using ``Hit``, such tools should now use ``Hit[1]``, and the same applies to ``HitMiss`` and ``HitPass``. The ``Hit`` record also grew two more fields for the grace and keep periods. This should again be useful for troubleshooting. See :ref:`vsl(7)`. * The ``SessOpen`` log record displays the name of the listen address instead of the endpoint in its 3rd field. See :ref:`vsl(7)`. * The output format of ``VCL_trace`` log records, which appear if you have switched on the ``VCL_trace`` flag in the VSL mask, has changed to include the VCL configuration name. See :ref:`vsl(7)` and :ref:`ref_param_vsl_mask`. * ``varnishtest(1)`` and ``vtc(7)``: * When varnishtest is invoked with ``-L`` or ``-l``, Varnish instances started by a test do not clean up their copies of VMOD shared objects when they stop. See the note about ``vmod_so_keep`` below. * Added the feature switch ``ignore_unknown_macro`` for test cases, see :ref:`vtc(7)`. * ``varnishncsa(1)`` * Field specifiers (such as the 1 in ``Hit[1]``) are now limited to to 255, see :ref:`varnishncsa(1)`. * The ``-N`` command-line option, which was previously available for ``varnishlog(1)``, ``varnishstat(1)``, ``varnishncsa(1)`` and ``varnishhist(1)``, is not compatible with the changed internal logging API, and has been retired. * Changes for developers: * The VSM and VSC APIs for shared memory and statistics have changed, and may necessitate changes in client applications, see :ref:`whatsnew_vsm_vsc_5.2`. * Added the ``$ABI`` directive for VMOD vcc declarations, see :ref:`whatsnew_abi`. 
* There have been some minor changes in the VRT API, which may be used for VMODs and client apps, see :ref:`whatsnew_vrt_5.2`. * The VUT API (for Varnish UTilities), which facilitates the development of client apps, is now publicly available, see :ref:`whatsnew_vut_5.2`. * The debug bit ``vmod_so_keep`` instructs Varnish not to clean up its copies of VMOD shared objects when it stops. This makes it possible for VMOD authors to load their code into a debugger after a varnishd crash. See :ref:`ref_param_debug`. *eof* varnish-7.5.0/doc/sphinx/whats-new/upgrading-6.0.rst000066400000000000000000001057261457605730600222330ustar00rootroot00000000000000.. Copyright (c) 2018-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_upgrading_6.0: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 6.0 %%%%%%%%%%%%%%%%%%%%%%%% .. _upd_6_0_uds_acceptor: Unix domain sockets as listen addresses ======================================= The ``varnishd -a`` command-line argument now has this form, where the ``address`` may be a Unix domain socket, identified as such when it begins with ``/`` (see varnishd :ref:`ref-varnishd-options`):: -a [name=][address][:port][,PROTO][,user=][,group=][,mode=] For example:: varnishd -a /path/to/listen.sock,PROXY,user=vcache,group=varnish,mode=660 That means that an absolute path must always be specified for the socket file. The socket file is created when Varnish starts, and any file that may exist at that path is unlinked first. You can use the optional ``user``, ``group`` and ``mode`` sub-arguments to set permissions of the new socket file; use names for ``user`` and ``group`` (not numeric IDs), and a 3-digit octal number for ``mode``. This is done by the management process, so creating the socket file and setting permissions are done with the privileges of the management process owner. There are some platform-specific restrictions on the use of UDSen to which you will have to conform. Here are some things we know of, but this list is by no means authoritative or exhaustive; always consult your platform documentation (usually in ``man unix``): * There is a maximum permitted length of the path for a socket file, considerably shorter than the maximum for the file system; usually a bit over 100 bytes. * On FreeBSD and other BSD-derived systems, the permissions of the socket file do not restrict which processes can connect to the socket. * On Linux, a process connecting to the socket must have write permissions on the socket file. On any system, a process connecting to the socket must be able to access the socket file. So you can reliably restrict access by restricting permissions on the directory containing the socket (but that must be done outside of the Varnish configuration). When UDS listeners are in use, VCL >= 4.1 will be required for all VCL programs loaded by Varnish. If you attempt to load a VCL source with ``vcl 4.0;``, the load will fail with a message that the version is not supported. If you continue using only IP addresses in your ``-a`` arguments, you won't have to change them, and you can continue using VCL 4.0. .. _upd_6_0_uds_backend: Unix domain sockets as backend addresses ======================================== A backend declaration may now have the ``.path`` field to specify a Unix domain socket to which Varnish connects:: backend my_uds_backend { .path = "/path/to/backend.sock"; } One of the fields ``.host`` or ``.path`` must be specified for a backend (but not both). 
The value of ``.path`` must be an absolute path (beginning with ``/``), and the file at that path must exist and be accessible to Varnish at VCL load time; and it must be a socket. The platform-specific restrictions on UDSen mentioned above apply of course to backends as well; but in this case your deployment of the peer component listening at the socket file must fulfill those conditions, otherwise Varnish may not be able to connect to the backend. The path of a socket file may also be specified in the ``varnishd -b`` command-line option (see varnishd :ref:`ref-varnishd-options`):: $ varnishd -b /path/to/backend.sock The value of ``-b`` must fulfill the same conditions as the ``.path`` field in a backend declaration. Backends with the ``.path`` specification require VCL 4.1, as do paths with the ``-b`` argument. If you don't use UDS backends, you can continue using VCL 4.0. varnishd parameters =================== The ``cli_buffer`` parameter, which was deprecated as of Varnish 5.2, is now retired. :ref:`ref_param_max_restarts` now works more correctly -- it is the number of ``return(restart)`` calls permitted per request. (It had been one less than the number of permitted restarts.) The parameters :ref:`ref_param_tcp_keepalive_intvl`, :ref:`ref_param_tcp_keepalive_probes` and :ref:`ref_param_tcp_keepalive_time` are silently ignored for listen addresses that are Unix domain sockets. The parameters :ref:`ref_param_accept_filter` and :ref:`ref_param_tcp_fastopen` (which your platform may or may not support in the first place) almost certainly have no effect on a UDS. It is not an error to use any of these parameters with a UDS; you may get error messages in the log for ``accept_filter`` or ``tcp_fastopen`` (with the VSL tag ``Error`` in raw grouping), but they are harmless. :ref:`ref_param_workspace_thread` is now used for IO buffers during the delivery of the client response. This space had previously been taken from :ref:`ref_param_workspace_client`. If you need to reduce memory footprint, consider reducing ``workspace_client`` by the amount in ``workspace_thread``. Added `ref_param_esi_iovs`. tl;dr: Don't touch it, unless advised to do so by someone familiar with the innards of Varnish. Changes to VCL ============== VCL 4.0 and 4.1 ~~~~~~~~~~~~~~~ The first line of code in a VCL program may now be either ``vcl 4.0;`` or ``vcl 4.1;``, establishing the version of the language for that instance of VCL. Varnish 6.0 supports both versions. The VCL version mainly affects which variables may be used in your VCL program, or in some cases, whether the variable is writable or read-only. Only VCL 4.1 is permitted when Unix domain sockets are in use. For details, see :ref:`vcl_variables`, and the notes in the present document. VCL variables ~~~~~~~~~~~~~ ``local.socket`` and ``local.endpoint`` --------------------------------------- These read-only variables are available as of VCL 4.1, and provide information about the listener address over which the current client request was received. ``local.socket`` is the name provided in the ``-a`` command-line argument for the current listener, which defaults to ``a0``, ``a1`` and so on (see varnishd :ref:`ref-varnishd-options`). ``local.endpoint`` is the value of the ``address[:port]`` or ``path`` field provided as the ``-a`` value for the current listener, exactly as given on the command line. For example:: # When varnishd is invoked with these -a arguments ... $ varnishd -a foo=12.34.56.78:4711 -a bar=/path/to/listen.sock # ... 
then in VCL, for requests received over the first listener: local.socket == "foo" local.endpoint == "12.34.56.78:4711" # ... and for requests received over the second listener: local.socket == "bar" local.endpoint == "/path/to/listen.sock" # With this invocation ... $ varnishd -a :80 -a 87.65.43.21 # ... then for requests received over the first listener: local.socket == "a0" local.endpoint == ":80" # ... and for the second listener local.socket == "a1" local.endpoint == "87.65.43.21" So if you have more than one listener and need to tell them apart in VCL, for example a listener for "regular" client traffic and another one for "admin" requests that you must restrict to internal systems, these two variables can help you do so. ``local.socket`` and ``local.endpoint`` are available on both the client and backend sides. But the values on the backend side are not necessarily the same as they were on the side of the client request that initiated the backend request. This is because of the separation of client and backend threads -- a backend thread may be re-used that was initiated by a client request over another listener, and ``local.socket`` and ``local.endpoint`` on that thread retain the values for the original listener. So if, in your backend VCL code, you need to be sure about the listener that was used on the client side of the same transaction, assign ``local.socket`` and/or ``local.endpoint`` to a client request header, and retrieve the value from a backend request header:: sub vcl_miss { set req.http.X-Listener = local.socket; } sub vcl_backend_fetch { if (bereq.http.X-Listener == "a0") { # ... } } ``sess.xid`` ------------ This is the unique ID assigned by Varnish to the current session, which stands for the "conversation" with a single client connection that comprises one or more request/response transactions. It is the same XID shown in the log for session transactions (with ``-g session`` grouping). ``sess.xid`` is read-only and is available as of VCL 4.1. Variable changes in VCL 4.0 and 4.1 ----------------------------------- The ``*.proto`` variables (``req.proto``, ``resp.proto``, ``bereq.proto`` and ``beresp.proto``) are read-only as of VCL 4.1, but are still writable in VCL 4.0. ``req.esi`` is available in VCL 4.0, but no longer in 4.1. In its place, ``resp.do_esi`` has been introduced in VCL 4.1. Set ``resp.do_esi`` to false in ``vcl_deliver`` if you want to selectively disable ESI processing for a client response (even though ``beresp.do_esi`` was true during fetch). ``beresp.backend.ip`` and ``beresp.storage_hint`` are discontinued as of VCL 4.1, but are still available in 4.0. Note that ``beresp.storage_hint`` has been deprecated since Varnish 5.1; you should use ``beresp.storage`` instead. Client-side variable access --------------------------- ``req.storage``, ``req.hash_ignore_busy`` and ``req.hash_always_miss`` are now accessible from all of the client side subroutines (previously only in ``vcl_recv{}``). Unix domain sockets and VCL ~~~~~~~~~~~~~~~~~~~~~~~~~~~ We have made an effort to adapt the support of Unix domain sockets in VCL so that you may not have to change anything in your VCL deployment at all, other than changing the version to 4.1. The short story is that where VCL requires an IP value, the value is ``0.0.0.0:0`` for a connection that was addressed as a UDS -- the "any IPv4" address with port 0. 
So your use of IP-valued elements in VCL will continue to work and may not have to change, but there are some consequences that you should consider, covered in the following. As we shall see, for a variety of reasons you get the best results if the component forwarding to Varnish via UDS uses the PROXY protocol, which sets ``client.ip`` and ``server.ip`` to the addresses sent in the PROXY header. If you don't use UDSen, then nothing about VCL changes with respect to network addressing. UDS support requires version 4.1, so if you are keeping your VCL level at 4.0 (and hence are staying with IP addresses), then none of the following is of concern. ``client.ip``, ``server.ip``, ``local.ip`` and ``remote.ip`` ------------------------------------------------------------ These variables have the value ``0.0.0.0`` for a connection that was addressed as a UDS. If you are using the PROXY protocol, then ``client.ip`` and ``server.ip`` have the "real" IP address values sent via PROXY, but ``local.ip`` and ``remote.ip`` are always ``0.0.0.0`` for a UDS listener. If you have more than one UDS listener (more than one ``-a`` command-line argument specifying a socket path), then you may not be able to use the ``*.ip`` variables to tell them apart, especially since ``local.ip`` will be ``0.0.0.0`` for all of them. If you need to distinguish such addresses in VCL, you can use ``local.socket``, which is the name given for the ``-a`` argument (``a0``, ``a1`` etc. by default), or ``local.endpoint``, which in the case of UDS is the path given in the ``-a`` argument. You can, for example, use string operations such as regex matching on ``local.endpoint`` to determine properties of the path address:: # admin requests allowed only on the listener whose path ends in # "admin.sock" if (req.url ~ "^/admin") { if (local.endpoint !~ "admin.sock$") { # wrong listener, respond with "403 Forbidden" return( synth(403) ); } else { # process the admin request ... } } # superadmin requests only allowed on the "superadmin.sock" listener if (req.url ~ "^/superadmin") { if (local.endpoint !~ "superadmin.sock$") { return( synth(403) ); } else { # superadmin request ... } } ACLs ---- As before, ACLs can only specify ranges of IP addresses, and matches against ACLs can only be run against IP-valued elements. This means that if a ``*.ip`` variable whose value is ``0.0.0.0`` due to the use of UDS is matched against an ACL, the match can only succeed if the ACL includes ``0.0.0.0``. If you currently have a security requirement that depends on IP addresses *not* matching an ACL unless they belong to a specified range, then that will continue to work with a UDS listener (since you almost certainly have not included ``0.0.0.0`` in that range). Recall again that ``client.ip`` and ``server.ip`` are set by the PROXY protocol. So if you have a UDS listener configured to use PROXY and are using an ACL to match against one of those two variables, the matches will continue working against the "real" IPs sent via PROXY. You can of course define an ACL to match in the UDS case, by including ``0.0.0.0``:: # matches local.ip and remote.ip when the listener is UDS acl uds { "0.0.0.0"; } But such an ACL cannot distinguish different UDS listeners, if you have more than one. For that, you can achieve a similar effect by inspecting ``local.socket`` and/or ``local.endpoint``, as discussed above. 
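A minimal sketch of that approach, assuming one of the UDS listeners was given the name ``admin`` in its ``-a`` argument (the listener name and the URL check are only examples)::

    sub vcl_recv {
        # Only accept admin requests over the listener named "admin";
        # everything else gets "403 Forbidden".
        if (req.url ~ "^/admin" && local.socket != "admin") {
            return (synth(403));
        }
    }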
``client.identity`` and the hash and shard directors ---------------------------------------------------- As before, ``client.identity`` defaults to ``client.ip``; that is, if its value has not been explicitly set in VCL, then it returns the same value as ``client.ip`` when it is read. A common use of ``client.identity`` is to configure the hash and shard directors (see :ref:`vmod_directors(3)`). This is a way to achieve "client-sticky" distribution of requests to backends -- requests from the same clients are always sent to the same backends. Such a configuration will almost certainly not do what you want if: * The listener is set to a UDS address. * PROXY is not used to set ``client.ip``. * ``client.identity`` is not set to a distinct value before it is used to configure the director. Since ``client.identity`` defaults to ``client.ip``, which is always ``0.0.0.0`` under these conditions, the result will be that the director sends all requests to just one backend, and no requests to any other backend. To avoid that result, change one of the conditions listed above -- use PROXY to set distinct values for ``client.ip``, or set ``client.identity`` to distinct values before it is used. ``server.ip`` and default hashing for the cache ----------------------------------------------- The default algorithm for computing a hash value for the cache (the implementation of ``vcl_hash`` in ``builtin.vcl``) mixes ``req.url`` and the Host header (``req.http.Host``) into the hash data. If there is no Host header, then ``server.ip`` is used instead. Considering the Host header or ``server.ip`` is a way of achieving a kind of "virtual hosting" -- if your site receives requests with different Host headers or at distinct server addresses, then requests for the same URL will not hit the same cached response, if the requests are different in those other respects. If you have UDS listeners and are not using PROXY to set distinct values of ``server.ip``, then requests without a Host header will have the same value of ``server.ip == 0.0.0.0`` mixed into the hash. In that case, requests with the same URL will result in the same hash value, and hit the same cached responses. That doesn't matter, of course, if you don't need the "virtual hosting" effect -- you only have one listener, you never receive different host headers, or you never receive the same URL for what should lead to distinct responses. But if you need to avoid that result, then you can make one or more of these changes: * Use the PROXY protocol to set distinct ``server.ip`` values. * Write your own implementation of ``vcl_hash``, for example to mix ``local.socket`` or ``local.endpoint`` into the hash. * Set ``req.http.Host`` to a distinct value if it is absent before ``vcl_hash`` is entered. X-Forwarded-For --------------- Varnish automatically appends the value of ``client.ip`` to the ``X-Forwarded-For`` request header that is passed on to backends, or it creates the header with that value if it is not already present in the client request. If the client request is received over a UDS listener and the PROXY protocol is not used, then ``0.0.0.0`` will be added to ``X-Forwarded-For``. If you prefer, you can change that in VCL:: sub vcl_backend_fetch { # Assuming that server.identity has been set to an IP # address with the -i command-line argument. set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded-For, "0.0.0.0$", server.identity); # ... } Again, this is probably not a concern if ``client.ip`` is set via the PROXY protocol.
UDS backends and the Host header -------------------------------- By default, Varnish forwards the Host header from a client request to the backend. If there is no Host header in the client request, and the ``.host_header`` field was set in the backend declaration, then that value is used for the backend Host header. For backends declared with the ``.host`` field (with a domain name or IP address), then if there is neither a client Host header nor a ``.host_header`` declaration, the value of ``.host`` is set as the Host header of the backend request. If the backend was declared with ``.path`` for a socket path, then the backend Host header is set to ``0.0.0.0`` under those conditions. To re-state that: * If the backend was declared with ``.path`` to connect to a Unix domain socket, ... * and ``.host_header`` was not set in the backend declaration, ... * and there is no Host header in the client request, ... * then the Host header in the backend request is set to ``0.0.0.0``. If you want to avoid that, set a ``.host_header`` value for the backend, or set a value for the Host header in VCL. VMOD std -------- :ref:`std.port()` always returns 0 when applied to a ``*.ip`` variable whose value is set to ``0.0.0.0`` because the listener is UDS. :ref:`std.set_ip_tos()` is silently ignored when the listener is UDS. The ``shard`` director ---------------------- The ``alg`` argument of the shard director's ``.reconfigure()`` and ``.key()`` methods has been removed. The choice of hash algorithms was experimental, and we have settled on SHA256 as providing the best dispersal. If you have been using other choices of ``alg`` for ``.reconfigure()``, then after upgrading and removing ``alg``, the sharding of requests to backends will change once and only once. If you have been using other values of ``alg`` for ``.key()`` and need to preserve the previous behavior, see the `change log `_ for advice on how to do so. With the ``resolve=LAZY`` argument of the ``.backend()`` method, the shard director will now defer the selection of a backend to when a backend connection is actually made, which is how all other bundled directors work as well. In ``vcl_init``, ``resolve=LAZY`` is default and enables layering the shard director below other directors -- you can now use something like ``mydirector.add_backend(myshard.backend())`` to set the shard director as a backend for another director. Use of ``resolve=LAZY`` on the client side is limited to using the default or associated parameters. The shard director now provides a ``shard_param`` object that serves as a store for a set of parameters for the director's ``.backend()`` method. This makes it possible to re-use a set of parameter values without having to restate them in every ``.backend()`` call. The ``.backend()`` method has an argument ``param`` whose value, if it is used, must be returned from the ``shard_param.use()`` method. Because of these changes, support for positional arguments of the shard director ``.backend()`` method had to be removed. In other words, all parameters to the shard director ``.backend()`` method now need to be named. See :ref:`vmod_directors(3)` for details. Restarts ~~~~~~~~ Restarts now leave all of the properties of the client request unchanged (all of the ``req.*`` variables, including the headers), except for ``req.restarts`` and ``req.xid``, which change by design. If you need to reset the client request headers to their original state (before changes in VCL), call :ref:`std.rollback()`. 
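For example, a minimal sketch of that pattern, assuming the ``std`` vmod is imported (when and whether to roll back is of course application-specific)::

    import std;

    sub vcl_recv {
        if (req.restarts > 0) {
            # Undo any req.* changes made before the restart and start
            # over from the request as the client sent it.
            std.rollback(req);
        }
    }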
``return(restart)`` can now be called from ``vcl_recv{}``. New VMODs ~~~~~~~~~ VMOD unix --------- :ref:`vmod_unix(3)` provides functions to determine the credentials of the peer process (user and group of the process owner) that connected to Varnish over a listener at a Unix domain socket. You can use this, for example, to impose tighter restrictions on who can access certain resources:: import unix; sub vcl_recv { # Return "403 Forbidden" if the connected peer is # not running as the user "trusteduser". if (unix.user() != "trusteduser") { return( synth(403) ); } This is not available on every platform. As always, check the documentation and test the code before you attempt something like this in production. VMOD proxy ---------- :ref:`vmod_proxy(3)` provides functions to extract TLV attributes that may be optionally sent over a PROXYv2 connection to a Varnish listener. Most of these are properties of the peer component's TLS connection:: import proxy; # Get the authority attribute -- corresponds to the SNI of a TLS # connection. set req.http.Authority = proxy.authority(); Not all implementations send TLV attributes, and those that do don't necessarily support all of them; test your code to see what works in your configuration. See the `PROXY v2 specification `_ for more information about TLV attributes. Packaging changes ================= Supported platforms ~~~~~~~~~~~~~~~~~~~ Official Varnish packages went through major changes for this release, and target Debian 9, Ubuntu 16.04 LTS and (Red Hat) Enterprise Linux 7. Ubuntu 14.04 LTS will likely reach its end of life before Varnish 6 and the venerable Enterprise Linux 6 is getting too old and forced time-consuming workarounds so for these reasons we dropped community support for those platforms. Services ~~~~~~~~ As a result we ended up with systemd-only platforms for the official packages. The old services are still available as we archived them in the ``pkg-varnish-cache`` source tree. This was the occasion to remove differences between Red Hat and Debian derivatives since there's no more reasons to have them diverge: we initially inherited packaging support from downstream package maintainers, and they deserve many thanks for that. The biggest change that resulted in unifying our systemd setup across all official packages is that the ``varnish.params`` file available on Red Hat derivatives is gone. We noticed that using an environment file to hide the fact that ``varnishd`` is configured via command line arguments misled some people into thinking that what was proposed in the file was your only set of configuration options. For example, you can specify multiple listen addresses using multiple ``-a`` options but you only get variables for one address and a catch-all variable ``DAEMON_OPTS`` for anything not fitting in the template. In addition using an environment file pollutes the process's environment. Another big difference between Red Hat and Debian derivatives was the way we handled VCL reloads via the service manager. We introduced a new ``varnishreload`` script that operates on top of ``varnishadm`` to perform hot reloads of one VCL configuration or label at a time. All you need is enough privileges to contact ``varnishd``'s command line interface, which should not be a problem for package managers. Once the ``varnish`` package is installed, you can learn more:: varnishreload -h Again, many thanks to downstream maintainers and some early adopters for their help in testing the new script. 
To stay on the topic of the command line interface, packages no longer create a secret file for the CLI, and services omit ``-S`` and ``-T`` options on the ``varnishd`` command. This means that out of the box, you can only connect to the CLI locally with enough privileges to read a secret generated randomly. This means less noise in our packages, and you need to change the service configuration to enable remote access to the CLI. With previous packages, you also needed to change your configuration because the CLI would only listen to the loopback interface anyway. To change how ``varnishd`` is started, please refer to the systemd documentation. Virtual provides ~~~~~~~~~~~~~~~~ Last but not least in the packaging space, we took a first step towards improving dependency management between official ``varnish`` packages and VMODs built on top of them. RPMs and Deb packages now provide the strict and VRT ABIs from ``varnishd`` and the goal is to ultimately prevent a package installation or upgrade that would prevent a VMOD from being loaded. For Deb packages:: Provides: varnishd-abi-SHA1, varnishd-vrt (= x.y) And for RPMs:: Provides: varnishd(abi)(x86-64) = SHA1 Provides: varnishd(vrt)(x86-64) = x.y For VMOD authors or downstream distributors, this means that depending on the ``$ABI`` stanza in the VMOD descriptor, they can either tie their backend manually to the git hash Varnish was built from or to the VRT version used at the time. For example, a VMOD RPM built against Varnish 6.0.0 could have:: Requires: varnishd(vrt)%{?_isa} >= 7.0 Requires: varnishd(vrt)%{?_isa} < 8 Future plans include the ability to automate this for out-of-tree VMODs and remove manual steps. To learn more about the history behind this change, it was formalized via the Varnish Improvement Process: https://github.com/varnishcache/varnish-cache/wiki/VIP20%3A-Varnish-ABI-and-packaging Another thing available only to RPM packages as of 6.0.0 is virtual provides for VMODs. Instead of showing shared objects that aren't even in the dynamic linker's default path:: Provides: libvmod_std.so(64bit) Provides: libvmod_directors.so(64bit) [...] You get virtual VMOD provides with a version:: Provides: vmod(std)(x86-64) = 6.0.0-1 Provides: vmod(directors)(x86-64) = 6.0.0-1 [...] With the same mechanism it becomes possible to require a VMOD directly and let it bring along its dependencies, like ``varnish``. As this is currently not automated for out-of-tree VMODs, consider this a preview of what you will be able to do once VIP 20 is completed. Other changes ============= * ``varnishd(1)``: * The ``umem`` storage allocator, which was removed as of Varnish 5.1, has been restored and is now the default on a system where ``libumem`` is available (SunOS and descendants). * ``varnishlog(1)``: * Added a third field to the ``ReqStart`` log record that contains the name of the listener address over which the request was received, see :ref:`vsl(7)`. * ``0.0.0.0`` and port ``0`` appear in the logs where an IP and port otherwise appear, when the connection in question was addressed as a Unix domain socket. This affects ``ReqStart``, ``SessOpen``, ``BackendStart`` and ``BackendOpen``. If you have more than one UDS listener, they can be distinguished with the "listener name" field -- the third field for both ``ReqStart`` and ``SessOpen``. If you have more than one UDS backend, they can be distinguished with the backend name field -- the second field in ``BackendOpen``. 
* The byte counters logged with ``ReqAcct`` now report the numbers returned from the operating system telling us how many bytes were actually sent in a request and response, rather than what Varnish thought it was going to send. This gives a more accurate account when there are errors, for example when a client hung up early without receiving the entire response. The figures also include any overhead in a request or response body, for example due to chunked encoding. * Debugging logs for the PROXY protocol are turned off by default. They can be turned on with the ``protocol`` flag of the varnishd :ref:`ref_param_debug` parameter (``-p debug=+protocol``). * ``varnishstat(1)`` * Added the counter ``cache_hit_grace`` -- how often objects in the cache were hit when their TTL had expired, but they were still in grace. * ``varnishncsa(1)`` * The ``%h`` formatter (remote host) gets its value from ``ReqStart`` for client requests and ``BackendStart`` for backend requests. The value will be ``0.0.0.0`` for client requests when the listener is UDS, and for backend requests when the backend is UDS. * The ``%r`` formatter (first line of the request) is reconstructed in part from the Host request header. For UDS backends, Host may be ``0.0.0.0`` for the reasons explained above (no client Host header and no ``.host_header`` setting for the backend), so that may appear in the output for ``%r``. You can avoid that with the measures discussed above. * If you have more than one UDS listener and/or more than one UDS backend, and you want to tell them apart in the ``varnishncsa`` output (rather than just see ``0.0.0.0``), use the ``%{VSL}x`` formatter to capture the listener name and the backend name. For the listener name, use ``%{VSL:ReqStart[3]}x`` for client logs (the third field of ``ReqStart`` logs). For the backend name, use ``%{VSL:BackendOpen[2]}x`` for backend logs. * varnishncsa does not accept output format strings (from the ``-F`` command-line argument or a configuration file) if they specify tags for log entries whose payloads may contain control or binary characters. * ``varnishtest(1)`` and ``vtc(7)``: * The ``client -connect`` and ``server -listen`` commands in vtc scripts now allow Unix domain sockets as addresses, recognized when the argument begins with a ``/``. A client attempts the connection immediately, so the socket file must exist at the given path when the client is started, and the client must be able to access it. The ``server -listen`` command must be able to create the socket file when it executes ``bind(2)``. To make it easier for other processes to connect to the socket, the server's umask is temporarily set to 0 before the listen is attempted, to minimize issues with permissions. No further attempt is made to set the socket's permissions. To test a Varnish instance listening at a UDS, just use the ``varnish -arg`` command with the appropriate settings for the ``-a`` command line argument, see :ref:`varnishd(1)`. The ``varnish -vcl+backend`` command now works to include backend definitions for server objects that are listening at UDS. Backend declarations are implicitly included for such servers with the appropriate ``.path`` setting. A convenient location for socket files to be used in a test is the temporary directory created by ``varnishtest`` for each test, whose path is held in the macro ``${tmpdir}``. So this is a common idiom for tests that involve UDSen:: server s1 -listen "${tmpdir}/s1.sock" { ... } -start varnish v1 -arg "-a ${tmpdir}/v1.sock" -vcl+backend { ... 
} -start client c1 -connect "${tmpdir}/v1.sock" { ... } -run When a Varnish instance in a vtc test is listening at a UDS, then its ``vN_*`` macros are set like this: * ``v1_addr``: ``/path/to/socket`` * ``v1_port``: ``-`` (hyphen) * ``v1_sock``: ``/path/to/socket -`` When a server ``s1`` is listening at a UDS: * ``s1_addr``: ``0.0.0.0`` * ``s1_port``: ``0`` * ``s1_sock``: ``/path/to/socket`` The vtc variables ``remote.ip`` and ``remote.port``, which can be used in ``expect`` expressions for both server and client scripts, are set to ``0.0.0.0`` and ``0``, respectively, when the peer address is a UDS. We have added the variable ``remote.path`` as a counterpart to the other two. Its value is the path when the peer address is a UDS, and NULL otherwise (matching ```` in the latter case). * Changes for developers: * The VRT API version has been bumped to 7.0, and comprises a variety of new additions and changes. See ``vrt.h`` and the `change log `_ for details. * There are new rules about including API headers -- some may only be included once, others must be included in a specific order. Only ``cache.h`` *or* ``vrt.h`` may be included (``cache.h`` includes ``vrt.h``). See the ``#error`` directives in the headers. * VMOD authors can use the ``VRT_VSC_*()`` series of functions and the new ``vsctool`` to create statistics for a VMOD that will be displayed by varnishstat. Varnish uses the same technique to create its counters, so you can look to the core code to see how it's done. * The ``VCL_INT`` and ``VCL_BYTES`` types are now defined to be strictly 64 bit (rather than leave it to whatever your platform defines as ``long``). But you may not get that full precision, for reasons discussed in the `change log `_. * As part of VRT version 7.0, the ``path`` field has been added to ``struct vrt_backend``, which a VMOD can use with ``VRT_new_backend()`` to create a dynamic backend with a UDS address (see ``vrt.h``). If ``path`` is non-NULL, then both of the IPv4 and IPv6 addresses must be NULL. If ``path`` is NULL, then (as before) one or both of the IP addresses must be non-NULL. The ``dyn_uds`` object in VMOD debug (available in the source tree) illustrates how this can be done. * VMOD vcc sources may now include a directive ``$Prefix``, whose value is the string prepended to the names of C objects and functions in the generated C interface (in ``vcc_if.h``). So you may choose another prefix besides ``vmod_``, if so desired. * vcc sources may also include a directive ``$Synopsis`` whose value may be ``auto`` or ``manual``, default ``auto``. When ``$Synopsis`` is ``auto``, the vmodtool generates a more comprehensive ``SYNOPSIS`` section in the documentation than in previous versions -- an overview of the objects, methods and functions in your VMOD, with their type signatures. When ``$Synopsis`` is ``manual``, the ``SYNOPSIS`` is left out of the generated docs altogether; so you can write the ``SYNOPSIS`` section yourself, if you prefer. * Support for a new declaration of optional arguments in vcc files has been added: ``[ argname ]`` can be used to mark *argname* as optional.
If this declaration is used for any argument, _all_ user arguments and ``PRIV_*`` pointers (no object pointers) to the respective function/method will be passed in a ``struct`` *funcname*\ ``_arg`` specific to this function which contains the arguments by their name (or the name ``arg``\ *n* for unnamed arguments, *n* being the argument position starting with 1) plus ``valid_``\ *argname* members for optional arguments which are being set to non-zero iff the respective *argname* was provided. Argument presence is determined at VCC time, so it is not possible to pass an unset argument from another function call. *eof* varnish-7.5.0/doc/sphinx/whats-new/upgrading-6.1.rst000066400000000000000000000406471457605730600222340ustar00rootroot00000000000000.. Copyright (c) 2018-2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_upgrading_6.1: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 6.1 %%%%%%%%%%%%%%%%%%%%%%%% A configuration for Varnish 6.0.x will run for version 6.1 without changes. There has been a subtle change in the interpretation of the VCL variable ``beresp.keep`` under specific circumstances, as discussed below. Other than that, the changes in 6.1 are new features, described in the following. varnishd parameters =================== We have added the :ref:`ref_param_max_vcl` parameter to set a threshold for the number of loaded VCL programs, since it is a common error to let previous VCL instances accumulate without discarding them. The remnants of undiscarded VCLs take the form of files in the working directory of the management process. Over time, too many of these may take up significant storage space, and administrative operations such as ``vcl.list`` may become noticeably slow, or even time out, when Varnish has to iterate over many files. The default threshold in :ref:`ref_param_max_vcl` is 100, and VCL labels are not counted against the total. The :ref:`ref_param_max_vcl_handling` parameter controls what happens when you reach the limit. By default you just get a warning from the VCL compiler, but you can set it to refuse to load more VCLs, or to ignore the threshold. Added the :ref:`ref_param_backend_local_error_holddown` and :ref:`ref_param_backend_remote_error_holddown` parameters. These define delays for new attempts to connect to backends when certain classes of errors have been encountered, for which immediate re-connect attempts are likely to be counter-productive. See the parameter documentation for details. Changes to VCL ============== VCL variables ~~~~~~~~~~~~~ ``req.ttl``, ``req.grace`` and keep ----------------------------------- ``req.grace`` had been previously removed, but was now reintroduced, since there are use cases that cannot be solved without it. Similarly, ``req.ttl`` used to be deprecated and is now fully supported again. ``req.ttl`` and ``req.grace`` limit the ttl and grace times that are permitted for the current request. If ``req.ttl`` is set, then cache objects are considered fresh (and may be cache hits) only if their remaining ttl is less than or equal to ``req.ttl``. Likewise, ``req.grace`` sets an upper bound on the time an object has spent in grace to be considered eligible for grace mode (which is to deliver this object and fetch a fresh copy in the background). A common application is to set shorter TTLs when the backend is known to be healthy, so that responses are fresher when all is well. 
But if the backend is unhealthy, then use cached responses with longer TTLs to relieve load on the troubled backend:: sub vcl_recv { # ... if (std.healthy(req.backend_hint)) { # Get responses no older than 70s for healthy backends set req.ttl = 60s; set req.grace = 10s; } # If the backend is unhealthy, then permit cached responses # that are older than 70s. } The evaluation of the ``beresp.keep`` timer has changed a bit. ``keep`` sets a lifetime in the cache in addition to TTL for objects that can be validated by a 304 "Not Modified" response from the backend to a conditional request (with ``If-None-Match`` or ``If-Modified-Since``). If an expired object is also out of grace time, then ``vcl_hit`` will no longer be called, so it is impossible to deliver the "keep" object in this case. Note that the headers ``If-None-Match`` and ``If-Modified-Since``, together with the 304 behavior, are handled automatically by Varnish. If you, for some reason, need to explicitly disable this for a backend request, then you need do this by removing the headers in ``vcl_backend_fetch``. The documentation in :ref:`users-guide-handling_misbehaving_servers` has been expanded to discuss these matters in greater depth, look there for more details. ``beresp.filters`` and support for backend response processing with VMODs ------------------------------------------------------------------------- The ``beresp.filters`` variable is readable and writable in ``vcl_backend_response``. This is a space-separated list of modules that we call VFPs, for "Varnish fetch processors", that may be applied to a backend response body as it is being fetched. In default Varnish, the list may include values such as ``gzip``, ``gunzip``, and ``esi``, depending on how you have set the ``beresp.do_*`` variables. This addition makes it possible for VMODs to define VFPs to filter or manipulate backend response bodies, which can be added by changing the list in ``beresp.filters``. VFPs are applied in the order given in ``beresp.filters``, and you may have to ensure that a VFP is positioned correctly in the list, for example if it can only apply to uncompressed response bodies. This is a new capability, and at the time of release we only know of test VFPs implemented in VMODs. Over time we hope that an "ecology" of VFP code will develop that will enrich the features available to Varnish deployments. ``obj.hits`` ------------ Has been fixed to return the correct value in ``vcl_hit`` (it had been 0 in ``vcl_hit``). Other changes to VCL ~~~~~~~~~~~~~~~~~~~~ * The ``Host`` header in client requests is mandatory for HTTP/1.1, as proscribed by the HTTP standard. If it is missing, then ``builtin.vcl`` causes a synthetic 400 "Bad request" response to be returned. * You can now provide a string argument to ``return(fail("Foo!"))``, which can be used in ``vcl_init`` to emit an error message if the VCL load fails due to the return. * Additional ``import`` statements of an already imported vmod are now ignored. VMODs ===== Added the :ref:`std.fnmatch()` function to :ref:`vmod_std(3)`, which you can use for shell-style wildcard matching. Wildcard patterns may be a good fit for matching URLs, to match against a pattern like ``/foo/*/bar/*``. The patterns can be built at runtime, if you need to do that, since they don't need the pre-compile step at VCL load time that is required for regular expressions. And if you are simply more comfortable with the wildcard syntax than with regular expressions, you now have the option. 
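For illustration, a minimal sketch of how this might be used in VCL (the URL pattern and the ``pass`` decision are only examples, not a recommendation)::

    import std;

    sub vcl_recv {
        if (std.fnmatch("/foo/*/bar/*", req.url)) {
            return (pass);
        }
    }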
:ref:`vmod_unix(3)` is now supported for SunOS and descendants. This entails changing the privilege set of the child process while the VMOD is loaded, see the documentation. Other changes ============= * ``varnishd(1)``: * Some VCL compile-time error messages have been improved, for example when a symbol is not found or arguments to VMOD calls are missing. * Varnish now won't rewrite the ``Content-Length`` header when responding to any HEAD request, making it possible to cache responses to HEAD requests independently from the GET responses (previously a HEAD request had to be a pass to avoid this rewriting). * If you have set ``.proxy_header=1`` (to use the PROXYv1 protocol) for a backend addressed as a Unix domain socket (with a ``.path`` setting for the socket file), and have also defined a probe for the backend, then then the address family ``UNKNOWN`` is sent in the proxy header for the probe request. If you have set ``.proxy_header=2`` (for PROXYv2) for a UDS backend with a probe, then ``PROXY LOCAL`` is sent for the probe request. * ``varnishlog(1)`` and ``vsl(7)``: * The contents of ``FetchError`` log entries have been improved to give better human-readable diagnostics for certain classes of backend fetch failures. In particular, http connection (HTC) errors are now reported symbolically in addition to the previous numerical value. * Log entries under the new ``SessError`` tag now give more diagnostic information about session accept failures (failure to accept a client connection). These must be viewed in raw grouping, since accept failures are not part of any request/response transaction. * When a backend is unhealthy, ``Backend_health`` now reports some diagnostic information in addition to the HTTP response and timing information. * The backend name logged for ``Backend_health`` is just the backend name without the VCL prefix (as appears otherwise for backend naming). * Added the log entry tag ``Filters``, which gives a list of the filters applied to a response body (see ``beresp.filters`` discussed above). * ``varnishadm(1)`` and ``varnish-cli(7)`` * For a number of CLI commands, you can now use the ``-j`` argument to get a JSON response, which may help in automation. These include: * ``ping -j`` * ``backend.list -j`` * ``help -j`` A JSON response in the CLI always includes a timestamp (epoch time in seconds with millisecond precision), indicating the time at which the response was generated. * The ``backend.list`` command now lists both directors and backends, with their health status. The command now has a ``-v`` option for verbose output, in which detailed health states for each backend/director are displayed. * ``varnishstat(1)`` and ``varnish-counters(7)``: * We have added a number of counters to the ``VBE.*`` group to help better diagnose error conditions with backends: * ``VBE.*.unhealthy``: the number of fetches that were not attempted because the backend was unhealthy * ``.busy``: number of fetches that were not attempted because the ``.max_connections`` limit was reached * ``.fail``: number of failed attempts to open a connection to the backend. Detailed reasons for the failures are given in the ``.fail_*`` counters (shown at DIAG level), and in the log entry ``FetchError``. ``.fail`` is the sum of the values in the ``.fail_*`` counters. * ``.fail_eaccess``, ``.fail_eaddrnotavail``, ``.fail_econnrefused``, ``.fail_enetunreach`` and ``.fail_etimedout``: these are the number of attempted connections to the backend that failed with the given value of ``errno(3)``. 
* ``.fail_other``: number of connections to the backend that failed for reasons other than those given by the other ``.fail_*`` counters. For such cases, details on the failure can be extracted from the varnish log as described above for ``FetchError``. * ``.helddown``: the number of connections not attempted because the backend was in the period set by one of the parameters :ref:`ref_param_backend_local_error_holddown` or :ref:`ref_param_backend_remote_error_holddown` * Similarly, we have added a series of counters for better diagnostics of session accept failures (failure to accept a connection from a client). As before, the ``sess_fail`` counter gives the total number of accept failures, and it is now augmented with the ``sess_fail_*`` counters. ``sess_fail`` is the sum of the values in ``sess_fail_*``. * ``sess_fail_econnaborted``, ``sess_fail_eintr``, ``sess_fail_emfile``, ``sess_fail_ebadf`` and ``sess_fail_enomem``: the number of accept failures with the indicated value of ``errno(3)``. The :ref:`varnish-counters(7)` man page, and the "long descriptions" shown by ``varnishstat``, give possible reasons why each of these may happen, and what might be done to counter the problem. * ``sess_fail_other``: number of accept failures for reasons other than those given by the other ``sess_fail_*`` counters. More details may appear in the ``SessError`` entry of the log (:ref:`varnish-counters(7)` shows a ``varnishlog`` invocation that may help). * In curses mode, the information in the header lines (uptimes and cache hit rates) is always reported, even if you have defined a filter that leaves them out of the stats table. * Ban statistics are now reported more accurately (they had been subject to inconsistencies due to race conditions). * ``varnishtest(1)`` and ``vtc(7)``: * ``varnishtest`` and the ``vtc`` test script language now support testing for haproxy as well as Varnish. The ``haproxy`` directive in a test can be used to define, configure, start and stop a haproxy instance, and you can also script messages to send on the haproxy CLI connection, and define expectations for the responses. See the ``haproxy`` section in :ref:`vtc(7)` for details. * Related to haproxy support, you can now define a ``syslog`` instance in test scripts. This defines a syslog server, and allows you to test expectations for syslog output from a haproxy instance. * Added the ``-keepalive`` argument for client and server scripts to be used with the ``-repeat`` directive, which causes all test iterations to run on the same connection, rather than open a new connection each time. This makes the test run faster and use fewer ephemeral ports. * Added the ``-need-bytes`` argument for the ``process`` command, see :ref:`vtc(7)`. * ``varnishhist(1)``: * The ``-P min:max`` command-line parameters are now optional, see :ref:`varnishhist(1)`. * For all of the utilities that access the Varnish log -- ``varnishlog(1)``, ``varnishncsa(1)``, ``varnishtop(1)`` and ``varnishhist(1)`` -- it was already possible to set multiple ``-I`` and ``-X`` command-line arguments. It is now properly documented that you can use multiple include and exclude filters that apply regular expressions to selected log records. * Changes for developers: * As mentioned above, VMODs can now implement VFPs that can be added to backend response processing by changing ``beresp.filters``. The interface for VFPs is defined in ``cache_filters.h``, and the debug VMOD included in the distribution shows an example of a VFP for rot13. 
* The Varnish API soname version (for libvarnishapi.so) has been bumped to 2.0.0. * The VRT version has been bumped to 8.0. See ``vrt.h`` for details on the changes since 7.0. * Space required by varnish for maintaining the ``PRIV_TASK`` and ``PRIV_TOP`` parameters is now taken from the appropriate workspace rather than from the heap as before. For a failing allocation, a VCL failure is triggered. The net effect of this change is that in cases of a workspace shortage, the almost unavoidable failure will happen earlier. The amount of workspace required is slightly increased and scales with the number of vmods per ``PRIV_TASK`` and ``PRIV_TOP`` parameter. The VCL compiler (VCC) guarantees that if a vmod function is called with a ``PRIV_*`` argument, that argument value is set. There is no change with respect to the API the ``PRIV_*`` vmod function arguments provide. * ``VRT_priv_task()``, the function implementing the allocation of the ``PRIV_TASK`` and ``PRIV_TOP`` parameters as described above, is now more likely to return ``NULL`` for allocation failures for the same reason. Notice that explicit use of this function from within VMODs is considered experimental as this interface may change. * We have improved support for the ``STRANDS`` data type, which you may find easier to use than the varargs-based ``STRING_LIST``. See ``vrt.h`` for details. :ref:`vmod_blob(3)` has been refactored to use ``STRANDS``, so you can look there for an example. * We have fixed a bug that had limited the precision available for the ``INT`` data type, so you now get the full 64 bits. * Portions of what had previously been declared in ``cache_director.h`` have been moved into ``vrt.h``, constituting the public API for directors. The remainder in ``cache_director.h`` is not public, and should not be used by a VMOD intended for VRT ABI compatibility. * The director API in ``vrt.h`` differs from the previous interface. :ref:`ref-writing-a-director` has been updated accordingly. In short, the most important changes include: * ``struct director_methods`` is replaced by ``struct vdi_methods`` * signatures of various callbacks have changed * ``VRT_AddDirector()`` and ``VRT_DelDirector()`` are to be used for initialization and destruction. * ``vdi_methods`` callbacks are not to be called from vmods any more * ``VRT_Healthy()`` replaces calls to the ``healthy`` function * The admin health is not to be manipulated by vmods any more * director private state destruction is recommended to be implemented via a ``destroy`` callback. * Python 3 is now preferred in builds, and will likely be required in future versions. * We believe builds are now reproducible, and intend to keep them that way. *eof* varnish-7.5.0/doc/sphinx/whats-new/upgrading-6.2.rst000066400000000000000000000156641457605730600222360ustar00rootroot00000000000000.. Copyright (c) 2019 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_upgrading_2019_03: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 6.2 %%%%%%%%%%%%%%%%%%%%%%%% .. _whatsnew_upgrading_vcl_2019_03: VCL === VCL programs for Varnish 6.1 can be expected to run without changes in the new version. A VCL load will now issue a warning, but does not fail as previously, if a backend declaration uses the ``.path`` field to specify a Unix domain socket, but the socket file does not exist or is not accessible at VCL load time. 
This makes it possible to start the peer component listening at the socket, or set its permissions, after Varnish starts or the VCL is loaded. Backend fetches fail if the socket is not accessible by the time the fetch is attempted. ``return(miss)`` from ``vcl_hit{}`` is now removed. An option for implementing similar functionality is: * ``return (restart)`` from ``vcl_hit{}`` * in ``vcl_recv{}`` for the restart (when ``req.restarts`` has increased), ``set req.hash_always_miss = true;``. .. _whatsnew_upgrading_params_2019_03: Runtime parameters ================== Some varnishd ``-p`` parameters that have been deprecated for some time have been removed. If you haven't changed them yet, you have to now. These are: * ``shm_reclen`` -- use :ref:`ref_param_vsl_reclen` instead * ``vcl_dir`` -- use :ref:`ref_param_vcl_path` instead * ``vmod_dir`` -- use :ref:`ref_param_vmod_path` instead The default value of :ref:`ref_param_thread_pool_stack` has been increased from 48k to 56k on 64-bit platforms and to 52k on 32-bit platforms. See the discussion under :ref:`whatsnew_changes_params_2019_03` in :ref:`whatsnew_changes_2019_03` for details. .. _whatsnew_upgrading_std_conversion_2019_03: Type conversion functions in VMOD std ===================================== The existing type-conversion functions in :ref:`vmod_std(3)` have been reworked to make them more flexible and easier to use. These functions now also accept suitable numeral or quantitative arguments. * :ref:`std.duration()` * :ref:`std.bytes()` * :ref:`std.integer()` * :ref:`std.real()` * :ref:`std.time()` These type-conversion functions should be fully backwards compatible, but the following differences should be noted: * The *fallback* is not required any more. A conversion failure in the absence of a *fallback* argument will now trigger a VCL failure. * A VCL failure will also be triggered if no or more than one argument (plus optional *fallback*) is given. * Conversion functions now only ever truncate if necessary (instead of rounding). * :ref:`std.round()` has been added for explicit rounding. The following functions are deprecated and should be replaced by the new conversion functions: * :ref:`std.real2integer()` * :ref:`std.real2time()` * :ref:`std.time2integer()` * :ref:`std.time2real()` They will be removed in a future version of Varnish. varnishadm and the CLI ====================== The ``-j`` option for JSON output has been added to a number of commands, see :ref:`whatsnew_changes_cli_json` in :ref:`whatsnew_changes_2019_03` and :ref:`varnish-cli(7)`. We recommend the use of JSON format for automated parsing of CLI responses (:ref:`varnishadm(1)` output). .. _whatsnew_upgrading_backend_list_2019_03: Listing backends ~~~~~~~~~~~~~~~~ ``backend.list`` has grown an additional column, the output has changed and fields are now of dynamic width: * The ``Admin`` column now accurately states ``probe`` only if a backend has some means of dynamically determining health state. * The ``Probe`` column has been changed to display ``X/Y``, where: * Integer ``X`` is the number of good probes in the most recent window; or if the backend in question is a director, the number of healthy backends accessed via the director or any other director-specific metric. * Integer ``Y`` is the window in which the threshold for overall health of the backend is defined (from the ``.window`` field of a probe, see :ref:`vcl(7)`); or in the case of a director, the total number of backends accessed via the director or any other director-specific metric. 
If there is no probe or the director does not provide details, ``0/0`` is output. * The ``Health`` column has been added to contain the dynamic (probe) health state and the format has been unified to just ``healthy`` or ``sick``. If there is no probe, ``Health`` is always given as ``healthy``. Notice that the administrative health as shown in the ``Admin`` column has precedence. In the ``probe_message`` field of ``backend.list -j`` output, the ``Probe`` and ``Health`` columns appear as the array ``[X, Y, health]``. See :ref:`varnish-cli(7)` for details. .. _whatsnew_upgrading_vcl_list_2019_03: Listing VCLs ~~~~~~~~~~~~ The non-JSON output of ``vcl.list`` has been changed: * The ``state`` and ``temperature`` fields appear in separate columns (previously combined in one column). * The optional column showing the relationships between labels and VCL configurations (when labels are in use) has been separated into two columns. See :ref:`varnish-cli(7)` for details. In the JSON output for ``vcl.list -j``, this information appears in separate fields. The width of columns in ``backend.list`` and ``vcl.list`` output (non-JSON) is now dynamic, to fit the width of the terminal window. For developers and authors of VMODs and API clients =================================================== Python 3.4 or later is now required to build Varnish, or use scripts installed along with Varnish, such as ``vmodtool.py`` to build VMODs or other Varnish artifacts. Python 2 is no longer supported, and this support will likely be dropped in a future 6.0 LTS release too. The VRT API has been bumped to version 9.0. Changes include: * Functions in the API have been added, and others removed. * The ``VCL_BLOB`` type is now implemented as ``struct vrt_blob``. * The ``req_bodybytes`` field of ``struct req`` has been removed, and should now be accessed as an object core attribute. See ``vrt.h``, the `change log`_ and :ref:`whatsnew_changes_director_api_2019_03` in :ref:`whatsnew_changes_2019_03` for details. .. _change log: https://github.com/varnishcache/varnish-cache/blob/master/doc/changes.rst The vmodtool has been changed significantly to avoid name clashes in the C identifiers declared in ``vcc_if.h``. This may necessitate changing names in your VMOD code. To facilitate renaming, ``vcc_if.h`` defines macros for prepending the vmod prefix, and for naming enums and argument structs. For details, see the `change log`_, and examine the contents of ``vcc_if.h`` after generation. Going forward, we will adhere to the principle that data returned by VMOD methods and functions are immutable. This is now enforced in some places by use of the ``const`` modifier. A VMOD is free to do as it sees fit within its own implementation, but if you attempt to change something returned by another VMOD, the results are undefined. *eof* varnish-7.5.0/doc/sphinx/whats-new/upgrading-6.3.rst000066400000000000000000000055451457605730600222340ustar00rootroot00000000000000.. Copyright (c) 2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. 
_whatsnew_upgrading_6.3: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 6.3 %%%%%%%%%%%%%%%%%%%%%%%% For users of many and/or labeled VCLs ===================================== Users of the advanced mechanics behind the ``vcl.state`` CLI command (most likely used via ``varnishadm``) should be aware of the following changes, which may require adjustments to (or, more likely, allow for simplifications of) scripts/programs interfacing with varnish: The VCL ``auto`` state has been streamlined. Conceptually, it used to be a variant of the ``warm`` state which would automatically cool the vcl. Yet, cooling did not only transition the temperature, but also the state, so ``auto`` only worked one way - except that ``vcl.use`` or moving a label (by labeling another vcl) would also set ``auto``, so a manual warm/cold setting would get lost. Now the ``auto`` state will remain no matter the actual temperature or labeling, so when a vcl needs to implicitly change temperature (due to being used or being labeled), an ``auto`` vcl will remain ``auto``, and a ``cold`` / ``warm`` vcl will change state, but never become ``auto`` implicitly. For developers and authors of VMODs and API clients =================================================== The Python 2 EOL is approaching and our build system now favors Python 3. In the 2020-03-15 release we plan to only support Python 3. The "vararg" ``VCL_STRING_LIST`` type is superseded by the array-based ``VCL_STRANDS`` type. The deprecated string list will eventually be removed entirely and VMOD authors are strongly encouraged to convert to strands. VRT functions working with string list arguments now take strands. More functions such as ``VRT_Vmod_Init()`` and ``VRT_Vmod_Unload()`` from the VRT namespace moved away to the Varnish Private Interface (VPI). Such functions were never intended for VMODs in the first place. The functions ``VRT_ref_vcl()`` and ``VRT_rel_vcl()`` were renamed to respectively ``VRT_VCL_Prevent_Discard()`` and ``VRT_VCL_Allow_Discard()``. Some functions taking ``VCL_IP`` arguments now take a ``VRT_CTX`` in order to fail in the presence of an invalid IP address. See ``vrt.h`` for a list of changes since the 6.2.0 release. We sometimes use Coccinelle_ to automate C code refactoring throughout the code base. Our collection of semantic patches may be used by VMOD and API clients authors and are available in the Varnish source tree in the ``tools/coccinelle/`` directory. .. _Coccinelle: http://coccinelle.lip6.fr/ The ``WS_Reserve()`` function is deprecated and replaced by two functions ``WS_ReserveAll()`` and ``WS_ReserveSize()`` to avoid ambiguous situations. Its removal is planned for the 2020-09-15 release. A ``ws_reserve.cocci`` semantic patch can help with the transition. *eof* varnish-7.5.0/doc/sphinx/whats-new/upgrading-6.4.rst000066400000000000000000000050571457605730600222330ustar00rootroot00000000000000.. Copyright (c) 2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_upgrading_6.4: %%%%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 6.4.0 %%%%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 6.4 from 6.3 should not require any changes to VCL. This document contains information about other relevant aspects which should be considered when upgrading. varnishd -------- * The hash algorithm of the ``hash`` director was changed, so backend selection will change once only when upgrading. 
Users of the ``hash`` director are advised to consider using the ``shard`` director instead, which, amongst other advantages, offers more stable backend selection through consistent hashing. See :ref:`vmod_directors(3)` for details. * We fixed a case where :ref:`ref_param_send_timeout` had no effect on HTTP/1 connections when streaming from a backend fetch, in other words, a connection might not have got closed despite the :ref:`ref_param_send_timeout` having been reached. HTTP/2 was not affected. When :ref:`ref_param_send_timeout` is reached on HTTP/1, the ``MAIN.sc_tx_error`` is increased. Users with long running backend fetches and HTTP/1 clients should watch out for an increase of that counter compared to before the deployment and consider increasing :ref:`ref_param_send_timeout` appropriately. The timeout can also be set per connection from VCL as ``sess.send_timeout``. .. actually H2 is really quite different and we have a plan to harmonize timeout handling across the board Statistics ---------- * The ``MAIN.sess_drop`` counter is gone. It should be removed from any statistics gathering tools, if present * ``sess.timeout_idle`` / :ref:`ref_param_timeout_idle` being reached on HTTP/1 used to be accounted to the ``MAIN.rx_timeout`` statistic. We have now added the ``MAIN.rx_close_idle`` counter for this case specifically. * ``sess.send_timeout`` / :ref:`ref_param_send_timeout` being reached on HTTP/1 used to be accounted to ``MAIN.sc_rem_close``. Such timeout events are now accounted towards ``MAIN.sc_tx_error``. See :ref:`varnish-counters(7)` for details. vsl/logs -------- * The ``Process`` timestamp for ``vcl_synth {}`` was wrongly issued before the VCL subroutine was called, now it gets emitted after VCL returns for consistency with ``vcl_deliver {}``. Users of this timestamp should be aware that it now includes ``vcl_synth {}`` processing time and appears at a different position in the log. * A ``Notice`` VSL tag has been added. *eof* varnish-7.5.0/doc/sphinx/whats-new/upgrading-6.5.rst000066400000000000000000000063371457605730600222360ustar00rootroot00000000000000.. Copyright (c) 2020 Varnish Software AS SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_upgrading_6.5: %%%%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 6.5.0 %%%%%%%%%%%%%%%%%%%%%%%%%% varnishstat =========== The JSON output (``-j`` option) changed to avoid having the ``timestamp`` field mixed with the counters fields. As such the schema version was bumped from 0 to 1, and a ``version`` top-level field was added to keep track of future schema changes. Counters are in a new ``counters`` top-level field. Before:: { "timestamp": "YYYY-mm-ddTHH:MM:SS", "MGT.uptime": { ... }, ... } After:: { "version": 1, "timestamp": "YYYY-mm-ddTHH:MM:SS", "counters": { "MGT.uptime": { ... }, ... } } The filter option ``-f`` is now deprecated in favor of the ``-I`` and ``-X`` options for field inclusions and exclusions, respectively. Tools using ``varnishstat`` should prepare for future removal and be changed accordingly. VSL === If you need to build VSL queries that depend on ``BackendReuse`` you can now rely on ``BackendClose``, for example:: varnishlog -q 'BackendReuse[2] ~ www' The new query would be:: varnishlog -q 'BackendClose[2] ~ www and BackendClose[3] eq recycle' Changes for developers and VMOD authors ======================================= VSB ~~~ VSB support for dynamic vs. 
static allocations has been changed and code using VSBs will need to be adjusted, see :ref:`whatsnew_changes_6.5_libvarnish`. It should be noted that the VSB itself and the string buffer must be either both dynamic or both static. It is, for example, no longer possible to have a static ``struct`` with a dynamic buffer with the new API. Workspace API ~~~~~~~~~~~~~ VMODs using the Workspace API might need minor adjustments, see :ref:`whatsnew_changes_6.5_workspace`. In general, accessing any field of ``struct ws`` is strongly discouraged and if the workspace API doesn't satisfy all your needs please bring that to our attention. VSC ~~~ The ``'f'`` argument for ``VSC_Arg()`` is now deprecated as mentioned in the above note on `varnishstat`_ and :ref:`whatsnew_changes_6.5_vsc`. Otherwise you can use the ``'I'`` and ``'X'`` arguments to respectively include or exclude counters; they work in a first-match fashion. Since ``'f'`` is now emulated using the new arguments, its filtering behavior slightly changed from exclusions first to first match. If, like ``varnishstat`` in curses mode, you have a utility that always needs some counters to be present, the ``'R'`` argument takes a glob of required fields. Such counters are not affected by filtering from other ``VSC_Arg()`` arguments. Official Packages related changes ================================= * The default systemd `varnish.service` unit file now sets `varnishd` to listen for PROXY protocol connections on port 8443. This corresponds with the Hitch default configuration, making it easier to set up Varnish using TLS. * The default systemd `varnish.service` unit file now enables the HTTP/2 feature of `varnishd`. This corresponds with the default ALPN token advertisement in the Hitch default configuration, making it easier to enable HTTP/2 in Varnish setups. *eof* varnish-7.5.0/doc/sphinx/whats-new/upgrading-6.6.rst000066400000000000000000000050151457605730600222270ustar00rootroot00000000000000.. Copyright 2021 UPLEX Nils Goroll Systemoptimierung SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_upgrading_6.6: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 6.6 %%%%%%%%%%%%%%%%%%%%%%%% In general, this release should not come with relevant incompatibilities to the previous release 6.5. VCL should continue to work as before except when rather exotic, partly unintended and/or undocumented features are used. Header Validation ================= Varnish now validates any headers set from VCL to contain only characters allowed by RFC7230. A (runtime) VCL failure is triggered if not. Such VCL failures, which result in ``503`` responses, should be investigated. As a last resort, the ``validate_headers`` parameter can be set to ``false`` to avoid these VCL failures. BAN changes =========== * The ``ban_cutoff`` parameter now refers to the overall length of the ban list, including completed bans, where before only non-completed ("active") bans were counted towards ``ban_cutoff``. * The ``ban()`` VCL builtin is now deprecated and should be replaced with :ref:`whatsnew_changes_6.6_ban`. Accounting Changes ================== Accounting statistics and Log records have changed. See :ref:`whatsnew_changes_6.6_accounting` for details. VSL changes =========== The ``-c`` option of log utilities no longer includes ESI requests. A new ``-E`` option is now available for ESI requests and it implies ``-c`` too. This brings all log utilities on par with ``varnishncsa`` where the ``-E`` option was initially introduced.
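As a small illustration (the VSL query itself is only an example), an invocation that previously relied on ``-c`` also matching ESI sub-requests::

    varnishlog -c -q 'ReqURL ~ "^/esi/"'

would now use ``-E`` to keep matching them::

    varnishlog -E -q 'ReqURL ~ "^/esi/"'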
If you use ``-c`` to collect both client and ESI requests, you should use ``-E`` instead. If you use ``-c`` and a VSL query to exclude ESI requests, the query should no longer be needed. VMOD ``cookie`` functions ========================= The regular expression arguments taken by various functions from the ``cookie`` VMOD now need to be literal. See :ref:`whatsnew_changes_6.6_cookie` for details. Other VCL Changes ================= * The ``resp.proto`` variable is now read-only as it should have been for long, like the other ``*.proto`` variables. Changing the protocol is an error and should not be required. * Trying to use ``std.rollback()`` from ``vcl_pipe`` now results in VCL failure. * ``return(retry)`` from ``vcl_backend_error {}`` now correctly resets ``beresp.status`` and ``beresp.reason``. Changes to VMODs ================ Many VMODs will need minor adjustments to work with this release. See :ref:`whatsnew_changes_6.6_vmod` for details. *eof* varnish-7.5.0/doc/sphinx/whats-new/upgrading-7.0.rst000066400000000000000000000312751457605730600222310ustar00rootroot00000000000000.. Copyright 2021 Varnish Software SPDX-License-Identifier: BSD-2-Clause See LICENSE file for full text of license .. _whatsnew_upgrading_7.0: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 7.0 %%%%%%%%%%%%%%%%%%%%%%%% PCRE2 ===== The migration from PCRE to PCRE2 led to many changes, starting with a change of build dependencies. See the current installation notes for package dependencies on various platforms. Previously the Just In Time (jit) compilation of regular expressions was always enabled at run time if it was present during the build. From now on jit compilation is enabled by default, but can be disabled with the ``--disable-pcre2-jit`` configure option. Once enabled, jit compilation is merely attempted and failures are ignored since they are not essential. The new ``varnishd`` parameter ``pcre2_jit_compilation`` controls whether jit compilation should be attempted and has no effect if jit support was disabled at configure time. See :ref:`ref_param_pcre2_jit_compilation`. The former parameters ``pcre_match_limit`` and ``pcre_match_limit_recursion`` were renamed to ``pcre2_match_limit`` and ``pcre2_depth_limit``. With older PCRE2 libraries, it is possible to see the depth limit being referred to as recursion limit in error messages. See :ref:`ref_param_pcre2_depth_limit` and :ref:`ref_param_pcre2_depth_limit` for more information. The syntax of regular expression should be the same, but it is possible to run into subtle differences. We are aware one such difference, PCRE2 fails the compilation of unknown escape sequences. For example PCRE interprets ``"\T"`` as ``"T"`` and ignores the escape character, but PCRE2 sees it as a syntax error. The solution is to simply use ``"T"`` and in general remove all spurious escape characters. While PCRE2 can capture named groups and has its own substitution syntax where captured groups can be referred to by position with ``$`` or even by name. The substitution syntax for VCL's ``regsub()`` remains the same and captured groups still require the ``\`` syntax where ``\1`` refers to the first group. For this reason, there shouldn't be changes required to existing VCL, ban expressions, VSL queries, or anything working with regular expression in Varnish, except maybe where PCRE2 seems to be stricter and refuses invalid escape sequences. VMOD authors manipulating ``VCL_REGEX`` arguments should not be affected by this migration if they only use the VRT API. 
However, the underlying VRE API was substantially changed and the new revision of VRE allowed to cover all the Varnish use cases so that ``libvarnish`` is now the only binary linking *directly* to ``libpcre2-8``. The migration implies that bans persisted in the deprecated persistent storage are no longer compatible and a new deprecated persistent storage should be rebuilt from scratch. Structured Fields numbers ========================= VCL types INTEGER and REAL now map respectively to Structured Fields integer and decimal numbers. Numbers outside of the Structured Fields bounds are no longer accepted by the VCL compiler and the various conversion functions from vmod_std will fail the transaction for numbers out of bounds. The scientific notation is no longer supported, for example ``12.34e+3`` must be spelled out as ``12340`` instead. Memory footprint ================ In order to lower the likelihood of flushing the logs of a single task more than once, the default value for ``vsl_buffer`` was increased to 16kB. This should generally result in better performance with tools like ``varnishlog`` or ``varnishncsa`` except for ``raw`` grouping. To accommodate this extra workspace consumption and add even more headroom on top of it, ``workspace_client`` and ``workspace_backend`` both increased to 96kB by default. The PCRE2 jit compiler produces code that consumes more stack, so the default value of ``thread_pool_stack`` was increased to 80kB, and to 64kB on 32bit systems. If you are relying on default values, this will result in an increase of virtual memory consumption proportional to the number of concurrent client requests and backend fetches being processed. This memory is not accounted for in the storage limits that can be applied. To address a potential head of line blocking scenario with HTTP/2, request bodies are now buffered between the HTTP/2 session (stream 0) and the client request. This is allocated on storage, controlled by the ``h2_rxbuf_storage`` parameter and comes in addition to the existing buffering between a client request and a backend fetch also allocated on storage. The new buffer size depends on ``h2_initial_window_size`` that has a new default value of 65535B to avoid having streams with negative windows. Range requests ============== Varnish only supports bytes units for range requests and always stripped ``Accept-Range`` headers coming from the backend. This is no longer the case for pass transactions. To find out whether an ``Accept-Range`` header came from the backend, the ``obj.uncacheable`` in ``vcl_deliver`` indicates whether this was a pass transaction. When ``http_range_support`` is on, a consistency check is added to ensure the backend doesn't act as a bad gateway. If an unexpected ``Content-Range`` header is received, or if it doesn't match the client's ``Range`` header, it is considered an error and a 503 response is generated instead. If your backend adds spurious ``Content-Range`` headers that you can assess are safe to ignore, you can amend the response in VCL:: sub vcl_backend_response { if (!bereq.http.range) { unset beresp.http.content-range; } } When a consistency check fails, an error is logged with the specific range problem encountered. ACL === The ``acl`` keyword in VCL now supports bit flags: - ``log`` - ``pedantic`` (enabled by default) - ``table`` The global parameter ``vcc_acl_pedantic`` (off by default) was removed, and as a result ACLs are now pedantic by default. TODO: reference to manual. 
They are also quiet by default, the following ACL declarations are equivalent:: acl { ... } acl -log +pedantic -table { ... } This means that the entry an ACL matched is no longer logged as ``VCL_acl`` by default. To restore the previous default behavior, declare your ACL like this:: acl +log -pedantic { ... } ACLs are optimized for runtime performance by default, which can increase significantly the VCL compilation time with very large ACLs. The ``table`` flag improves compilation time at the expense of runtime performance. See :ref:`vcl-acl`. Changes for developers ====================== Build ----- Building from source requires autoconf 2.69 or newer and automake 1.13 or newer. Neither are needed when building from a release archive since they are already bootstrapped. There is a new ``--enable-workspace-emulator`` configure flag to replace the regular "packed allocation" workspace with a "sparse allocation" alternative. Combined with the Address Sanitizer it can help VMOD authors find memory handling issues like buffer overflows that could otherwise be missed on a regular workspace. ``vdef.h`` ---------- The ``vdef.h`` header is no longer self-contained, it includes ``stddef.h``. Since it is the first header that should be included when working with Varnish bindings, some definitions were promoted to ``vdef.h``: - a fallback for the ``__has_feature()`` macro in its absence - VRT macros for Structured Fields number limits - ``struct txt`` and its companion macros (the macros require ``vas.h`` too) This header is implicitly included by ``vrt.h`` and ``cache.h`` and should not concern VMOD authors. Workspace API ------------- The deprecated functions ``WS_Front()`` and ``WS_Inside()`` are gone, they were replaced by ``WS_Reservation()`` and ``WS_Allocated()``. For this reason ``WS_Assert_Allocated()`` was removed despite not being deprecated, since it became redundant with ``assert(WS_Allocated(...))``. Accessing the workspace front pointer only makes sense during a reservation, that's why ``WS_Front()`` was deprecated in a previous release. It should no longer be needed to access ``struct ws`` fields directly, and everything should be possible with the ``WS_*()`` functions. It even becomes mandatory when the workspace emulator is enabled, the ``struct ws`` fields have different semantics. ``STRING_LIST`` --------------- VMOD authors can no longer take ``STRING_LIST`` arguments in functions or object methods. To work with string fragments, use ``VCL_STRANDS`` instead. As a result the following symbols are gone: - ``VRT_String()`` - ``VRT_StringList()`` - ``VRT_CollectString()`` - ``vrt_magic_string_end`` Functions that used to take a ``STRING_LIST`` in the form of a prototype ending with ``const char *, ...`` now take ``const char *, VCL_STRANDS``: - ``VRT_l_client_identity()`` - ``VRT_l_req_method()`` - ``VRT_l_req_url()`` - ``VRT_l_req_proto()`` - ``VRT_l_bereq_method()`` - ``VRT_l_bereq_url()`` - ``VRT_l_bereq_proto()`` - ``VRT_l_beresp_body()`` - ``VRT_l_beresp_proto()`` - ``VRT_l_beresp_reason()`` - ``VRT_l_beresp_storage_hint()`` - ``VRT_l_beresp_filters()`` - ``VRT_l_resp_body()`` - ``VRT_l_resp_proto()`` - ``VRT_l_resp_reason()`` - ``VRT_l_resp_filters()`` The ``VRT_SetHdr()`` function also used to take a ``STRING_LIST`` and now takes a ``const char *, VCL_STRANDS`` too. But, in addition to this change, it also no longer accepts the special ``vrt_magic_string_unset`` argument. Instead, a new ``VRT_UnsetHdr()`` function was added. 
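For VMOD code the conversion is mostly mechanical. A minimal, hypothetical sketch (the function name ``vmod_join`` is made up for illustration, and error handling is omitted)::

    VCL_STRING
    vmod_join(VRT_CTX, VCL_STRANDS s)
    {

        CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC);
        AN(s);
        /* The fragments are s->p[0] .. s->p[s->n - 1]; individual
         * fragments may be NULL. Here they are simply flattened
         * onto the workspace as a single VCL_STRING. */
        return (VRT_STRANDS_string(ctx, s));
    }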
The ``VRT_CollectStrands()`` function was renamed to ``VRT_STRANDS_string()``, which was its original intended name. Null sentinels -------------- Two convenience sentinels ``vrt_null_strands`` and ``vrt_null_blob`` were added to avoid ``NULL`` usage. ``VRT_blob()`` returns ``vrt_null_blob`` when the source is null or the length is zero. The null blob has the type ``VRT_NULL_BLOB_TYPE``. libvarnishapi ------------- Deprecated functions ``VSB_new()`` and ``VSB_delete()`` were removed. Use ``VSB_init()`` and ``VSB_fini()`` for static buffers and ``VSB_new_auto()`` and ``VSB_destroy()`` for dynamic buffers. Their removal resulted in bumping the soname to 3.0.0 for libvarnishapi. libvarnish ---------- Other changes were made to libvarnish, those are only available to VMOD authors since they are not exposed by libvarnishapi. VNUM '''' The ``VNUMpfx()`` function was replaced by ``SF_Parse_Number()`` that parses both decimal and integer numbers from RFC8941. In addition there are new ``SF_Parse_Decimal()`` and ``SF_Parse_Integer()`` more specialized functions. ``VNUM_bytes_unit()`` returns an integer and no longer parses factional bytes. New token parsers ``VNUM_uint()`` and ``VNUM_hex()`` were added. The other VNUM functions rely on the new SF functions for parsing, with the same limitations. The following macros define the Structured Fields number bounds: - ``VRT_INTEGER_MIN`` - ``VRT_INTEGER_MAX`` - ``VRT_DECIMAL_MIN`` - ``VRT_DECIMAL_MAX`` VRE ''' The VRE API completely changed in preparation for the PCRE2 migration, in order to funnel all PCRE usage in the Varnish source code through VRE. Similarly to how parameters were renamed, the ``match_recursion`` field from ``struct vre_limits`` was renamed to ``depth``. It has otherwise the same meaning and purpose. Notable breaking changes: - ``VRE_compile()`` signature changed - ``VRE_exec()`` was replaced: - ``VRE_match()`` does simple matching - ``VRE_capture()`` captures matched groups in a ``txt`` array - ``VRE_sub()`` substitute matches with a replacement in a VSB - ``VRE_error()`` prints an error message for all the functions above in a VSB - ``VRE_export()`` packs a usable ``vre_t`` that can be persisted as a byte stream An exported regular expression takes the form of a byte stream of a given size that can be used as-is by the various matching functions. Care should be taken to always maintain pointer alignment of an exported ``vre_t``. The ``VRE_ERROR_NOMATCH`` symbol is now hard-linked like ``VRE_CASELESS``, and ``VRE_NOTEMPTY`` is no longer supported. There are no match options left in the VRE facade but the ``VRE_match()``, ``VRE_capture()`` and ``VRE_sub()`` functions still take an ``options`` argument to keep the ability of allowing match options in the future. The ``VRE_ERROR_LEN`` gives a size that should be safe to avoid truncated error messages in a static buffer. To gain full access to PCRE2 features from a regular expression provided via ``vre_t`` a backend-specific ``vre_pcre2.h`` contains a ``VRE_unpack()`` function. This opens for example the door to ``pcre2_substitute()`` with the PCRE2 substitution syntax and named capture groups as an alternative to VCL's ``regsub()`` syntax backed by ``VRE_sub()``. Ideally, ``vre_pcre2.h`` will be the only breaking change next time we move to a different regular expression engine. Hopefully not too soon. *eof* varnish-7.5.0/doc/sphinx/whats-new/upgrading-7.1.rst000066400000000000000000000114751457605730600222320ustar00rootroot00000000000000.. 
_whatsnew_upgrading_7.1: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 7.1 %%%%%%%%%%%%%%%%%%%%%%%% varnishd ======== Varnish now has an infrastructure in place to rename parameters or VCL variables while keeping a deprecated alias for compatibility. Parameters ~~~~~~~~~~ There are plans to rename certain arguments. When this happens, aliases will not be listed by ``param.show [-j|-l]`` commands, but they will be displayed by ``param.show [-j] ``. Systems operating on top of ``varnishadm`` or the Varnish CLI can be updated to anticipate this change with the help of the ``deprecated_dummy`` parameter added for testing purposes. The deprecated ``vsm_space`` parameter was removed. It was ignored and having no effect since Varnish 6.0.0 and should have disappeared with the 7.0.0 release. The sub-argument of the ``-l`` command line option that was used as a shorthand for ``vsm_space`` is also no longer accepted. Command line options ~~~~~~~~~~~~~~~~~~~~ A common pattern when a CLI script is used during startup is to combine the ``-I`` and ``-f ''`` options to prevent an automatic startup of the cache process. In this case a start command is usually present in the CLI script, most likely as the last command. This enables loading VCLs and potentially VCL labels which require a specific order if the active VCL is supposed to switch execution to labels. To support this pattern, a VCL loaded through the CLI script is no longer implicitly used if there is no active VCL yet. If no VCL was loaded through the ``-b`` or ``-f`` options it means that an explicit ``vcl.use`` command is needed before the ``start`` command. In the scenario described above, that would already be the case since the desired active VCL would likely need to be loaded last, not eligible for an implicit ``vcl.use`` since dependencies were loaded first. This change should not affect existing ``-I`` scripts, but if it does, simply add the missing ``vcl.use`` command. Other changes in varnishd ~~~~~~~~~~~~~~~~~~~~~~~~~ The ESI parser now recognizes the ``onerror="continue"`` attribute of the ```` XML tag. The ``+esi_include_onerror`` feature flag controls if the attribute is honored: If enabled, failure of an include stops ESI processing unless the ``onerror="continue"`` attribute was set for it. The feature flag is off by default, preserving the existing behavior to continue ESI processing despite include failures. Users of persistent storage engines be advised that objects created before the introduction of this change can not carry the ``onerror="continue"`` attribute and, consequently, will be handled as if it was not present if the ``+esi_include_onerror`` feature flag is enabled. Also, as this change is not backwards compatible, downgrades with persisted storage are not supported across this release. varnishtest =========== The deprecated ``err_shell`` command was removed, use ``shell -err`` instead. Changes for developers and VMOD authors ======================================= Backends ~~~~~~~~ Backends have reference counters now to avoid the uncertainty of a task holding onto a dynamic backend for a long time, for example in the waiting list, with the risk of the backend going away during the transaction. Assignments should be replaced as such:: -lvalue = expr; +VRT_Assign_Backend(&lvalue, expr); .. XXX: there should be a coccinelle patch to help. 
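As a hedged sketch of what this looks like in practice (the ``struct vmod_example`` type and its fields are hypothetical), a VMOD that stores a backend for later use now takes and releases a counted reference instead of copying the pointer::

    struct vmod_example {
        unsigned        magic;
        VCL_BACKEND     backend;
    };

    /* take a counted reference when storing the backend ... */
    VRT_Assign_Backend(&ex->backend, be);

    /* ... and drop it again by assigning NULL when done with it */
    VRT_Assign_Backend(&ex->backend, NULL);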
For backends which are guaranteed at least VCL lifetime, the respective VMOD can opt out of reference counting with ``VRT_StaticDirector()`` to avoid the reference counting overhead. Filters ~~~~~~~ Two new functions ``VRT_AddFilter()`` and ``VRT_RemoveFilter()`` manage filters as VDP/VFP pairs. When used as pairs, the filters must have the same name, otherwise operating with only one fetch or delivery filter is fine. Unlike its deprecated predecessors ``VRT_AddVFP()`` and ``VRT_AddVDP()``, the new ``VRT_AddFilter()`` returns an error string. The ``VRT_RemoveVFP()`` and ``VRT_RemoveVDP()`` functions are also deprecated and kept for now as wrappers of ``VRT_RemoveFilter()`` without error handling. VMOD deprecated aliases ~~~~~~~~~~~~~~~~~~~~~~~ A VMOD author can from now on rename a function or object method without immediately breaking compatibility by declaring the old name as an alias. In the VMOD descriptor, it is possible to add the following stanza:: $Alias deprecated_function original_function or $Alias .deprecated_method object.original_method This is a good occasion to revisit unfortunate name choices in existing VMODs. Platform Support ================ systemd ~~~~~~~ To make the selection of the main process deterministic for the kill mode, a PID file is now expected by default in the varnish service. In a setup where the service command for ``ExecStart`` is overridden, a ``-P`` option matching the ``PIDFile`` setting is needed. *eof* varnish-7.5.0/doc/sphinx/whats-new/upgrading-7.2.rst000066400000000000000000000051361457605730600222300ustar00rootroot00000000000000.. _whatsnew_upgrading_7.2: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 7.2 %%%%%%%%%%%%%%%%%%%%%%%% varnishd ======== Parameters ~~~~~~~~~~ The following parameters are deprecated: - ``vcc_allow_inline_c`` - ``vcc_err_unref`` - ``vcc_unsafe_path`` They can still be set as individual boolean parameters. The deprecated aliases will be removed in a future release. They are replaced by the ``vcc_feature`` bits parameter: - ``allow_inline_c`` - ``err_unref`` (enabled by default) - ``unsafe_path`` (enabled by default) The following commands are equivalent:: param.set vcc_err_unref off param.set vcc_feature -err_unref Identity ~~~~~~~~ The server identity must be a valid HTTP token, which may pose a problem to existing setups. For example ``varnishd -i "edge server 1"`` is no longer accepted. You can use something like ``varnishd -i "edge-server-1"`` instead. VCL === Varnish generates a Via header and forwards it to the backend by default. This can be prevented for example in ``vcl_recv`` or ``vcl_backend_fetch``. :: sub vcl_recv { unset req.http.via; } sub vcl_backend_fetch { unset bereq.http.via; } The Via header is generated with the ``server.identity`` variable for the ``received-by`` field. See `rfc9110_` for a description of the Via header. A ``resp.http.via`` header is no longer overwritten by varnish, but rather appended to. .. _rfc9110: https://www.rfc-editor.org/rfc/rfc9110#name-via VMODs ===== Cookies generated by vmod_cookie used to have a trailing semi-colon that goes against the recommandations from `rfc6265`_. This should not pose a problem, unless a piece of VCL code or a backend have come to rely on this incorrect behavior. .. _rfc6265: https://www.rfc-editor.org/rfc/rfc6265 varnishlog ========== The ``Begin`` and ``Link`` log records have an optional 4th field for the sub-request level. This may break log processors, log queries or NCSA formats that expect those records to have exactly 3 fields. 
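The new field can also be put to use: since it is absent (or zero) for top-level transactions, a query along these lines (an illustrative sketch, not taken from the manual) should select only sub-request transactions::

    varnishlog -q 'Begin[4] > 0'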
varnishstat =========== The ``MAIN.fetch_no_thread`` counter is gone, it never worked. Track the ``MAIN.bgfetch_no_thread`` counter instead. Changes for developers and VMOD authors ======================================= The functions ``VRT_AddVDP()``, ``VRT_AddVFP()``, ``VRT_RemoveVDP()`` and ``VRT_RemoveVFP()`` are deprecated. Use ``VRT_AddFilter()`` to add a pair (VFP and VDP) and ``VRT_RemoveFilter()`` to remove it. A filter pair needs at least one member of the pair. The ``varnishtest -i`` option no longer works outside of a Varnish source tree. There shouldn't be a reason to use ``-i`` outside of the Varnish test suite. *eof* varnish-7.5.0/doc/sphinx/whats-new/upgrading-7.3.rst000066400000000000000000000037361457605730600222350ustar00rootroot00000000000000.. _whatsnew_upgrading_7.3: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 7.3 %%%%%%%%%%%%%%%%%%%%%%%% New VSL format ============== The binary format of Varnish logs changed to increase the space for VXIDs from 32 bits to 64. This is not a change that older versions of the Varnish logging utilities can understand, and the new utilities can also not process old logs. There is no conversion tool from the old format to the new one, so this should become a problem only when raw logs are stored for future processing. If old binary logs need to remain usable, the only solution is to use a compatible Varnish version and at the time of this release, the 6.0 branch is the only one without an EOL date. For developers and VMOD authors: C interface changes requiring adjustments ========================================================================== Via backends ------------ The new backend argument to the ``VRT_new_backend*()`` functions is optional and ``NULL`` can be passed to match the previous behavior. suckaddr -------- The following functions return or accept ``const`` pointers from now on: - ``VSA_Clone()`` - ``VSA_getsockname()`` - ``VSA_getpeername()`` - ``VSA_Malloc()`` - ``VSA_Build*()`` - ``VSS_ResolveOne()`` - ``VSS_ResolveFirst()`` ``VSA_free()`` has been added to free heap memory allocated by ``VSA_Malloc()`` or one of the ``VSA_Build*()`` functions with a ``NULL`` first argument. directors --------- Directors which take and hold references to other directors via ``VRT_Assign_Backend()`` (typically any director which has other directors as backends) are now expected to implement the new ``.release`` callback of type ``void vdi_release_f(VCL_BACKEND)``. This function is called by ``VRT_DelDirector()``. The implementation is expected drop any backend references which the director holds (again using ``VRT_Assign_Backend()`` with ``NULL`` as the second argument). Failure to implement this callback can result in deadlocks, in particular during VCL discard. *eof* varnish-7.5.0/doc/sphinx/whats-new/upgrading-7.4.rst000066400000000000000000000016571457605730600222360ustar00rootroot00000000000000.. _whatsnew_upgrading_7.4: %%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish 7.4 %%%%%%%%%%%%%%%%%%%%%%%% Important VCL Changes ===================== When upgrading from Varnish-Cache 7.3, there is only one breaking change to consider in VCL: The ``Content-Length`` and ``Transfer-Encoding`` headers are now *protected*, they can neither be changed nor unset. This change was implemented to avoid de-sync issues from accidental, inadequate modifications of these headers. For the common use case of ``unset (be)req.http.Content-Length`` to dismiss a request body, ``unset (be)req.body`` should be used. 
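For example (the condition is purely illustrative), VCL that previously stripped the header to discard a request body would now read::

    sub vcl_recv {
        if (req.url ~ "^/no-body/") {
            # was: unset req.http.Content-Length;
            unset req.body;
        }
    }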
Parameter Changes ================= The new ``varnishd`` parameter ``startup_timeout`` now specifically replaces ``cli_timeout`` for the initial startup only. In cases where ``cli_timeout`` was increased specifically to accommodate long startup times (e.g. for storage engine initialization), ``startup_timeout`` should be used. *eof* varnish-7.5.0/doc/sphinx/whats-new/upgrading-7.5.rst000066400000000000000000000055131457605730600222320ustar00rootroot00000000000000.. _whatsnew_upgrading_7.5: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Upgrading to Varnish **7.5** %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Logs ==== The optional reason field of ``BackendClose`` records changed from a description (for example "Receive timeout") to a reason tag (for example ``RX_TIMEOUT``). This will break VSL queries based on the reason field. Using a tag should however make queries more reliable:: # before varnishlog -q 'BackendClose ~ "Receive timeout"' # after varnishlog -q 'BackendClose[4] eq RX_TIMEOUT' Such queries would no longer break when a description is changed. Timeouts ======== The value zero for timeouts could mean either timing out immediately, never timing out, or not having a value and falling back to another. The semantic changed so zero always means non-blocking, timing out either immediately after checking whether progress is possible, or after a millisecond in parts where this can't practically be done in a non-blocking fashion. In order to disable a timeout, that is to say never time out, the value ``INFINITY`` is used (or tested with ``isinf()``). In the places where a timeout setting may fall back to another value, like VCL variables falling back to parameters, ``NAN`` is used to convey the lack of timeout setting. VCL ~~~ All the timeouts backed by parameters can now be unset, meaning that setting the value zero no longer falls back to parameters. Parameters ~~~~~~~~~~ All the timeout parameters that can be disabled are now documented with the ``timeout`` flag, meaning that they can take the special value ``never`` for that purpose:: varnishadm param.set pipe_timeout never The parameters ``idle_send_timeout`` and ``timeout_idle`` are now limited to a maximum of 1 hour. VRT ~~~ For VMOD authors, it means that the ``vtim_dur`` type expects ``INFINITY`` to never time out or ``NAN`` to not set a timeout. For VMOD authors managing backends, there is one exception where a timeout fallback did not change from zero to ``NAN`` in ``struct vrt_backend``. The ``vtim_dur`` fields must take a negative value in order to fall back to the respective parameters due to a limitation in how VCL is compiled. As a convenience, a new macro ``VRT_BACKEND_INIT()`` behaves like ``INIT_OBJ`` but initializes timeouts to a negative value. VCL COLD events have been fixed for directors vs. VMODs: VDI COLD now comes before VMOD COLD. Other changes relevant for VMOD / VEXT authors ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Transports are now responsible for calling ``VDP_Close()`` in all cases. Storage engines are now responsible for deciding which ``fetch_chunksize`` to use. When Varnish-Cache does not know the expected object size, it calls the ``objgetspace`` stevedore function with a zero ``sz`` argument. ``(struct req).filter_list`` has been renamed to ``vdp_filter_list``. 
*eof* varnish-7.5.0/etc/000077500000000000000000000000001457605730600137655ustar00rootroot00000000000000varnish-7.5.0/etc/Makefile.am000066400000000000000000000011021457605730600160130ustar00rootroot00000000000000# DISTCLEANFILES = builtin.vcl dist_doc_DATA = builtin.vcl \ example.vcl builtin.vcl: $(top_srcdir)/bin/varnishd/builtin.vcl ( printf "This is the VCL configuration Varnish will automatically append to your VCL\nfile during compilation/loading. See the vcl(7) man page for details on syntax\nand semantics.\n\ New users is recommended to use the example.vcl file as a starting point.\n\n";\ sed -n '/vcl_recv/,$$p' $(top_srcdir)/bin/varnishd/builtin.vcl ) | \ sed 's/^\(.*\)$$/# \1/' > builtin.vcl vcldir=$(datarootdir)/$(PACKAGE)/vcl dist_vcl_DATA = devicedetect.vcl varnish-7.5.0/etc/devicedetect.vcl000066400000000000000000000142261457605730600171300ustar00rootroot00000000000000# # Copyright (c) 2016-2018 Varnish Cache project # Copyright (c) 2012-2016 Varnish Software AS # # SPDX-License-Identifier: BSD-2-Clause # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # 1. Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE # ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS # OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF # SUCH DAMAGE. # # detectdevice.vcl - regex based device detection for Varnish # https://github.com/varnishcache/varnish-devicedetect/ # # Original author: Lasse Karstensen sub devicedetect { unset req.http.X-UA-Device; set req.http.X-UA-Device = "pc"; # Handle that a cookie may override the detection altogether. if (req.http.Cookie ~ "(?i)X-UA-Device-force") { /* ;?? means zero or one ;, non-greedy to match the first. */ set req.http.X-UA-Device = regsub(req.http.Cookie, "(?i).*X-UA-Device-force=([^;]+);??.*", "\1"); /* Clean up our mess in the cookie header */ set req.http.Cookie = regsuball(req.http.Cookie, "(^|; ) *X-UA-Device-force=[^;]+;? *", "\1"); /* If the cookie header is now empty, or just whitespace, unset it. 
*/ if (req.http.Cookie ~ "^ *$") { unset req.http.Cookie; } } else { if (req.http.User-Agent ~ "\(compatible; Googlebot-Mobile/2.1; \+http://www.google.com/bot.html\)" || (req.http.User-Agent ~ "(Android|iPhone)" && req.http.User-Agent ~ "\(compatible.?; Googlebot/2.1.?; \+http://www.google.com/bot.html") || (req.http.User-Agent ~ "(iPhone|Windows Phone)" && req.http.User-Agent ~ "\(compatible; bingbot/2.0; \+http://www.bing.com/bingbot.htm")) { set req.http.X-UA-Device = "mobile-bot"; } elsif (req.http.User-Agent ~ "(?i)(ads|google|bing|msn|yandex|baidu|ro|career|seznam|)bot" || req.http.User-Agent ~ "(?i)(baidu|jike|symantec)spider" || req.http.User-Agent ~ "(?i)pingdom" || req.http.User-Agent ~ "(?i)facebookexternalhit" || req.http.User-Agent ~ "(?i)scanner" || req.http.User-Agent ~ "(?i)slurp" || req.http.User-Agent ~ "(?i)(web)crawler") { set req.http.X-UA-Device = "bot"; } elsif (req.http.User-Agent ~ "(?i)ipad") { set req.http.X-UA-Device = "tablet-ipad"; } elsif (req.http.User-Agent ~ "(?i)ip(hone|od)") { set req.http.X-UA-Device = "mobile-iphone"; } /* how do we differ between an android phone and an android tablet? http://stackoverflow.com/questions/5341637/how-do-detect-android-tablets-in-general-useragent */ elsif (req.http.User-Agent ~ "(?i)android.*(mobile|mini)") { set req.http.X-UA-Device = "mobile-android"; } // android 3/honeycomb was just about tablet-only, and any phones will probably handle a bigger page layout. elsif (req.http.User-Agent ~ "(?i)android 3") { set req.http.X-UA-Device = "tablet-android"; } /* Opera Mobile */ elsif (req.http.User-Agent ~ "Opera Mobi") { set req.http.X-UA-Device = "mobile-smartphone"; } // May very well give false positives towards android tablets. Suggestions welcome. elsif (req.http.User-Agent ~ "(?i)android") { set req.http.X-UA-Device = "tablet-android"; } elsif (req.http.User-Agent ~ "PlayBook; U; RIM Tablet") { set req.http.X-UA-Device = "tablet-rim"; } elsif (req.http.User-Agent ~ "hp-tablet.*TouchPad") { set req.http.X-UA-Device = "tablet-hp"; } elsif (req.http.User-Agent ~ "Kindle/3") { set req.http.X-UA-Device = "tablet-kindle"; } elsif (req.http.User-Agent ~ "Touch.+Tablet PC" || req.http.User-Agent ~ "Windows NT [0-9.]+; ARM;" ) { set req.http.X-UA-Device = "tablet-microsoft"; } elsif (req.http.User-Agent ~ "Mobile.+Firefox") { set req.http.X-UA-Device = "mobile-firefoxos"; } elsif (req.http.User-Agent ~ "^HTC" || req.http.User-Agent ~ "Fennec" || req.http.User-Agent ~ "IEMobile" || req.http.User-Agent ~ "BlackBerry" || req.http.User-Agent ~ "BB10.*Mobile" || req.http.User-Agent ~ "GT-.*Build/GINGERBREAD" || req.http.User-Agent ~ "SymbianOS.*AppleWebKit") { set req.http.X-UA-Device = "mobile-smartphone"; } elsif (req.http.User-Agent ~ "(?i)symbian" || req.http.User-Agent ~ "(?i)^sonyericsson" || req.http.User-Agent ~ "(?i)^nokia" || req.http.User-Agent ~ "(?i)^samsung" || req.http.User-Agent ~ "(?i)^lg" || req.http.User-Agent ~ "(?i)bada" || req.http.User-Agent ~ "(?i)blazer" || req.http.User-Agent ~ "(?i)cellphone" || req.http.User-Agent ~ "(?i)iemobile" || req.http.User-Agent ~ "(?i)midp-2.0" || req.http.User-Agent ~ "(?i)u990" || req.http.User-Agent ~ "(?i)netfront" || req.http.User-Agent ~ "(?i)opera mini" || req.http.User-Agent ~ "(?i)palm" || req.http.User-Agent ~ "(?i)nintendo wii" || req.http.User-Agent ~ "(?i)playstation portable" || req.http.User-Agent ~ "(?i)portalmmm" || req.http.User-Agent ~ "(?i)proxinet" || req.http.User-Agent ~ "(?i)windows\ ?ce" || req.http.User-Agent ~ "(?i)winwap" || req.http.User-Agent ~ 
"(?i)eudoraweb" || req.http.User-Agent ~ "(?i)htc" || req.http.User-Agent ~ "(?i)240x320" || req.http.User-Agent ~ "(?i)avantgo") { set req.http.X-UA-Device = "mobile-generic"; } } } varnish-7.5.0/etc/example.vcl000066400000000000000000000022461457605730600161320ustar00rootroot00000000000000# # This is an example VCL file for Varnish. # # It does not do anything by default, delegating control to the # builtin VCL. The builtin VCL is called when there is no explicit # return statement. # # See the VCL chapters in the Users Guide for a comprehensive documentation # at https://www.varnish-cache.org/docs/. # Marker to tell the VCL compiler that this VCL has been written with the # 4.0 or 4.1 syntax. vcl 4.1; # Default backend definition. Set this to point to your content server. backend default { .host = "127.0.0.1"; .port = "8080"; } sub vcl_recv { # Happens before we check if we have this in cache already. # # Typically you clean up the request here, removing cookies you don't need, # rewriting the request, etc. } sub vcl_backend_response { # Happens after we have read the response headers from the backend. # # Here you clean the response headers, removing silly Set-Cookie headers # and other mistakes your backend does. } sub vcl_deliver { # Happens when we have all the pieces we need, and are about to send the # response to the client. # # You can do accounting or modifying the final object here. } varnish-7.5.0/flint.lnt000066400000000000000000000235131457605730600150510ustar00rootroot00000000000000// Copyright (c) 2013-2020 Varnish Software AS // SPDX-License-Identifier: BSD-2-Clause // See LICENSE file for full text of license /* * Toplevel control file for FlexeLint */ //d__flexelint_v9__=1 +fan // No automatic custody -ffc -hm4 /////////////////////////////////////////////////////////////////////// // electives //+e9* -e904 -e935 -e955 -e956 //+e958 // report internal struct padding //+e959 // report struct tail/size padding -e960 -e961 -efile(966, "/usr/include/*") // unused indir include -efile(966, "../../include/tbl/*") -e964 -e970 -e971 -e9012 -e9021 -e9022 -e9023 -e9024 -e9026 -e9034 -e9037 -e9042 -e9048 -e9050 -e9051 -e9067 +e9071 // defined macro '...' is reserved to the compiler +e9075 // external symbol without declaration -esym(9075, main) -e9085 -e9105 -e9107 -e9109 -e9113 -e9131 -e9132 -e9133 -e9136 -e9140 -e9141 -e9147 // we dont use & on func_pointers -e9149 -e9158 -e9159 -e9165 /////////////////////////////////////////////////////////////////////// // This does not work with pcre2.h, the injected /* lint ... */ comments // interfere with macro argument concatenation. 
Clear flexelint bug // because it does not happen when run with -p // -emacro((835),*) // A zero has been given as ___ argument to operator '___ -e835 // A zero has been given as ___ argument to operator '___ /////////////////////////////////////////////////////////////////////// // build/config related -efile(451, "tbl/*.h") // No include guard -efile(537, "tbl/*.h") // Repeated include -efile(967, "tbl/*.h") // No include guard -efile(451, "../../include/vut_options.h") // No include guard -efile(451, "../../include/vapi/vapi_options.h") // No include guard -efile(451, ../../config.h) // No include guard -efile(766, ../../config.h) // Header file '___' not used in module '___' +libh(../../config.h) -esym(768, vmod_priv) // global struct member '___' (___) not referenced /////////////////////////////////////////////////////////////////////// // Thread/locking, too many false positives still -e454 // A thread mutex has been locked but not unlocked___ -e455 // A thread mutex that had not been locked is being unlocked -e456 // Two execution paths are being combined with different mutex lock states -e457 // unprotected write access -e458 // unprotected access -e459 // unprotected access /////////////////////////////////////////////////////////////////////// // General stylistic issues -e663 // Suspicious array to pointer conversion //-e574 // Signed-unsigned mix with relational -e641 // Converting enum '...' to int -e716 // while(1) ... -e726 // Extraneous comma ignored -e728 // Symbol ... not explicitly initialized -e737 // Loss of sign in promotion from int to unsigned int -e763 // Redundant declaration for symbol '...' previously declared -e717 // do ... while(0); -e777 // Testing floats for equality -e785 // Too few initializers for aggregate -e786 // String concatenation within initializer -e788 // enum constant '___' not used within defaulted switch -esym(818, argv) // Pointer parameter '...' could be declared as pointing to const -e850 // loop variable modified in loop /* * va_list's are opaque for a reason, but we pretend to FlexeLint that it * is just a void*, so it proposes constification, which is not generally OK, * for instance on register-spilling architectures. * XXX: Maybe 'ap' is a badly chosen conventional name here... */ -esym(818, ap) // Pointer parameter '...' could be declared as pointing to const -efunc(789, main) // Assigning address of auto variable '...' to static // +e958 // padding /////////////////////////////////////////////////////////////////////// // System/Posix/Iso-C library related -emacro(747, isnan) // significant coercion -emacro(506, isinf) // Constant value Boolean -emacro(866, isinf) // Unusual use of '?' 
in argument to sizeof -emacro(736, isinf) // Loss of precision // ignore retval -esym(534, printf) -esym(534, fprintf) -esym(534, vfprintf) -esym(534, sprintf) -esym(534, fputc) -esym(534, memset) -esym(534, memcpy) -esym(534, memmove) -esym(534, strcat) -esym(534, strcpy) -esym(534, strncpy) -esym(534, sleep) -esym(534, usleep) /////////////////////////////////////////////////////////////////////// // Vmod/vmodtool.py //-esym(14, vmod_enum_*) // Symbol '___' previously defined (___) //-esym(759, vmod_enum_*) // header declaration for symbol '___' defined at (___) //-esym(765, vmod_enum_*) // external '___' (___) could be made static /////////////////////////////////////////////////////////////////////// // -sem(VUT_Error, r_no) /////////////////////////////////////////////////////////////////////// // -sem(VAS_Fail, r_no) // does not return -emacro(506, assert) // constant value boolean -emacro(827, assert) // loop not reachable -emacro(774, assert) // boolean always true -emacro(731, assert) // boolean arg to eq/non-eq -emacro(731, xxxassert) // arg to eq/non-eq -emacro(527, WRONG) // unreachable code -emacro(774, VALID_OBJ) // boolean always true -emacro(506, v_static_assert) // Constant value Boolean -esym(751, __vassert_*) // local typedef '___' (___) not referenced /////////////////////////////////////////////////////////////////////// // Places where we use x<<0 for reasons of symmetry -emacro(835, VCT_SP) // A zero has been given as ___ argument to operator '___' -emacro(835, VSL_COPT_TAIL) // A zero has been given as ___ argument to operator '___' -emacro(835, SLT_F_UNUSED) // A zero has been given as ___ argument to operator '___' -emacro(835, ARGV_COMMENT) // A zero has been given as ___ argument to operator '___' -emacro(835, F_SEEN_ixIX) // A zero has been given as ___ argument to operator '___' -emacro(835, VEX_OPT_CASELESS) // A zero has been given as ___ argument to operator '___' /////////////////////////////////////////////////////////////////////// // -esym(759, VSB_*) // header decl could be moved -esym(765, VSB_*) // extern could be made static -esym(714, VSB_*) // symb not ref -sem(VSB_new, @p == (1p ? 1p : malloc(1))) -sem(VSB_delete, custodial(1)) // ignore retval -esym(534, VSB_cat) -esym(534, VSB_bcat) -esym(534, VSB_putc) -esym(534, VSB_printf) -esym(534, VSB_vprintf) /////////////////////////////////////////////////////////////////////// // // ignore retval -esym(534, VTE_cat) -esym(534, VTE_putc) -esym(534, VTE_printf) /////////////////////////////////////////////////////////////////////// // // -emacro(801, VRBT_*) // goto considered bad -emacro(527, VRBT_*) // unreachable code -emacro(740, VRBT_*) // unusual pointer cast -emacro(438, VRBT_*) // last value assigned not used -emacro(613, VRBT_*) // Possible use of null pointer 'child' in left argument to -emacro(838, VRBT_*) // Previously assigned value to variable 'child' has not been used -emacro(50, VRBT_GENERATE_*) // Attempted to take the address of a non-lvalue -emacro(506, VRBT_GENERATE_*) // Constant value Boolean -emacro(845, VRBT_GENERATE_*) // The left argument to operator '&&' is certain to be 0 -emacro(774, VRBT_GENERATE_*) // Boolean within 'if' always evaluates to False -esym(534, *_VRBT_REMOVE) // ignore retval -esym(534, *_VRBT_INSERT) // ignore retval /////////////////////////////////////////////////////////////////////// // -esym(755, VLIST_*) // Global macro not ref. 
-esym(755, VSLIST_*) -esym(755, VSTAILQ_*) -esym(755, VTAILQ_*) // 506 = constant value boolean -emacro(506, VTAILQ_FOREACH_REVERSE_SAFE) -emacro(506, VTAILQ_FOREACH_SAFE) -emacro(506, VSTAILQ_FOREACH_SAFE) // constant value boolean // 826 = Suspicious pointer-to-pointer conversion (area to o small) -emacro((826), VTAILQ_LAST) -emacro((826), VTAILQ_PREV) -emacro(740, VTAILQ_LAST) // Unusual pointer cast (incompatible indirect types) -emacro(740, VTAILQ_PREV) // Unusual pointer cast (incompatible indirect types) -esym(754, "*::vtqh_first") // local struct member '...' not referenced /////////////////////////////////////////////////////////////////////// // -emacro(527, NEEDLESS) // unreachable code -emacro(160, _vtake) // The sequence '( {' is non standard +rw( __typeof__ ) /////////////////////////////////////////////////////////////////////// // -emacro(446, TOSTRAND, TOSTRANDS) // side effect in initializer /////////////////////////////////////////////////////////////////////// // -esym(765, vsl_vbm_bitclr) -esym(759, vsl_vbm_bitclr) -esym(765, vsl_vbm_bitset) -esym(759, vsl_vbm_bitset) -esym(765, vsm_diag) -esym(759, vsm_diag) /////////////////////////////////////////////////////////////////////// // "miniobj.h" -emacro(774, REPLACE) // It is ok to default after handling a few select SLT_* tags -esym(788, VSL_tag_e::SLT_*) // enum constant '...' not used within defaulted switch -esym(785,VSL_tags) // Sparse array /////////////////////////////////////////////////////////////////////// // readline etc. -esym(534, add_history) /////////////////////////////////////////////////////////////////////// // -lcurses -esym(534, beep) -esym(534, curs_set) -esym(534, delwin) -esym(534, doupdate) -esym(534, endwin) -esym(534, initscr) -esym(534, intrflush) -esym(534, keypad) -esym(534, mvprintw) -esym(534, waddnstr) -esym(534, mvwprintw) -esym(534, nodelay) -esym(534, noecho) -esym(534, nonl) -esym(534, raw) -esym(534, waddch) -esym(534, wattr_off) -esym(534, wattr_on) -esym(534, wbkgd) -esym(534, werase) -esym(534, wmove) -esym(534, wnoutrefresh) -esym(534, wprintw) -esym(534, wredrawln) -esym(534, wrefresh) /////////////////////////////////////////////////////////////////////// // Noise reduction, review periodically -e459 // unlocked access from func-ptr -e679 // Suspicious Truncation in arithmetic expression combining with pointer -e712 // Loss of precision (___) (___ to ___) -e713 // Loss of precision (___) (___ to ___) -e732 // Loss of sign (___) (___ to ___) -e734 // Loss of precision (___) (___ bits to ___ bits) -e747 // Significant prototype coercion (___) ___ to ___ varnish-7.5.0/include/000077500000000000000000000000001457605730600146355ustar00rootroot00000000000000varnish-7.5.0/include/Makefile.am000066400000000000000000000057461457605730600167050ustar00rootroot00000000000000# # API headers nobase_pkginclude_HEADERS = \ tbl/acct_fields_bereq.h \ tbl/acct_fields_req.h \ tbl/backend_poll.h \ tbl/ban_arg_oper.h \ tbl/ban_oper.h \ tbl/ban_vars.h \ tbl/bereq_flags.h \ tbl/beresp_flags.h \ tbl/boc_state.h \ tbl/body_status.h \ tbl/cli_cmds.h \ tbl/debug_bits.h \ tbl/experimental_bits.h \ tbl/feature_bits.h \ tbl/h2_error.h \ tbl/h2_frames.h \ tbl/h2_settings.h \ tbl/h2_stream.h \ tbl/htc.h \ tbl/http_headers.h \ tbl/http_response.h \ tbl/locks.h \ tbl/obj_attr.h \ tbl/oc_exp_flags.h \ tbl/oc_flags.h \ tbl/params.h \ tbl/req_bereq_flags.h \ tbl/req_flags.h \ tbl/sess_attr.h \ tbl/sess_close.h \ tbl/symbol_kind.h \ tbl/vcc_feature_bits.h \ tbl/vcl_returns.h \ tbl/vcl_context.h \ tbl/vcl_states.h 
\ tbl/vhd_fsm.h \ tbl/vhd_fsm_funcs.h \ tbl/vhd_return.h \ tbl/vhp_huffman.h \ tbl/vhp_static.h \ tbl/vrt_stv_var.h \ tbl/vsc_levels.h \ tbl/vsig_list.h \ tbl/vsl_tags.h \ tbl/vsl_tags_http.h \ tbl/waiters.h \ vapi/vsc.h \ vapi/vsig.h \ vapi/vsl.h \ vapi/vsl_int.h \ vapi/vsm.h \ vapi/voptget.h \ vapi/vapi_options.h \ vcli.h \ vut.h \ vut_options.h # Headers for use with vmods nobase_pkginclude_HEADERS += \ vbh.h \ miniobj.h \ vas.h \ vav.h \ vbm.h \ vcl.h \ vcs.h \ vmod_abi.h \ vnum.h \ vqueue.h \ vre.h \ vre_pcre2.h \ vdef.h \ vrt.h \ vrt_obj.h \ vsa.h \ vsb.h \ vsha256.h \ vtcp.h \ vte.h \ vtim.h \ vtree.h \ vrnd.h # Private headers nobase_noinst_HEADERS = \ compat/daemon.h \ vfl.h \ libvcc.h \ vcc_interface.h \ vcli_serve.h \ vcs_version.h \ vct.h \ vcurses.h \ venc.h \ vend.h \ vev.h \ vfil.h \ vin.h \ vjsn.h \ vlu.h \ vmb.h \ vpf.h \ vsc_priv.h \ vsl_priv.h \ vsm_priv.h \ vsub.h \ vss.h \ vtcp.h \ vus.h ## keep in sync with lib/libvcc/Makefile.am vcl.h: \ $(top_srcdir)/lib/libvcc/generate.py \ $(top_srcdir)/include/vcc_interface.h \ $(top_srcdir)/include/vdef.h \ $(top_srcdir)/include/vrt.h \ $(top_srcdir)/doc/sphinx/reference/vcl_var.rst mkdir -p $(top_builddir)/include/tbl ${PYTHON} $(top_srcdir)/lib/libvcc/generate.py \ $(top_srcdir) $(top_builddir) GEN_H = \ tbl/vrt_stv_var.h \ tbl/vcl_returns.h \ vrt_obj.h $(GEN_H): vcl.h GENERATED_H = vcl.h $(GEN_H) ## vcs_version.h / vmod_abi.h need to be up-to-date with every build ## except when building from a distribution vcs_version.h: @if test -e $(srcdir)/generate.py && \ test -e $(top_srcdir)/.git || \ ! test -f $(srcdir)/vmod_abi.h || \ ! test -f $(srcdir)/vcs_version.h ; then \ ${PYTHON} $(srcdir)/generate.py \ $(top_srcdir) $(top_builddir) ; \ fi vmod_abi.h: vcs_version.h .PHONY: vcs_version.h ## BUILT_SOURCES = \ $(GENERATED_H) \ vcs_version.h \ vmod_abi.h \ vrt_test.c MAINTAINERCLEANFILES = $(GENERATED_H) CLEANFILES = vrt_test.c noinst_PROGRAMS = vbm_test vrt_test vbm_test_SOURCES = vbm_test.c vbm.h vrt_test.c: vdef.h vrt.h $(AM_V_GEN) cat $(srcdir)/vdef.h $(srcdir)/vrt.h > $@ vrt_test_CFLAGS = -c vrt_test_LINK = echo >$@ TESTS = vbm_test varnish-7.5.0/include/compat/000077500000000000000000000000001457605730600161205ustar00rootroot00000000000000varnish-7.5.0/include/compat/daemon.h000066400000000000000000000032331457605730600175350ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2009 Varnish Software AS * All rights reserved. * * Author: Dag-Erling Smørgrav * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #ifndef COMPAT_DAEMON_H_INCLUDED #define COMPAT_DAEMON_H_INCLUDED #ifndef HAVE_DAEMON int varnish_daemon(int nochdir, int noclose); #else #define varnish_daemon(a,b) daemon(a,b) #endif #endif varnish-7.5.0/include/generate.py000077500000000000000000000054301457605730600170060ustar00rootroot00000000000000#!/usr/bin/env python3 # # Copyright (c) 2006 Verdens Gang AS # Copyright (c) 2006-2015 Varnish Software AS # All rights reserved. # # Author: Poul-Henning Kamp # # SPDX-License-Identifier: BSD-2-Clause # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # 1. Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE # ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS # OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF # SUCH DAMAGE. # # Generate vcs_version.h vmod_abi.h import subprocess import os import sys srcroot = "../.." buildroot = "../.." if len(sys.argv) == 3: srcroot = sys.argv[1] buildroot = sys.argv[2] elif len(sys.argv) != 1: print("Two arguments or none") exit(2) ####################################################################### def file_header(fo): fo.write("""/* * NB: This file is machine generated, DO NOT EDIT! * * Edit and run include/generate.py instead. 
*/ """) ####################################################################### v = subprocess.check_output([ "git --git-dir=%s rev-parse HEAD 2>/dev/null || echo NOGIT" % (os.path.join(srcroot, ".git")) ], shell=True, universal_newlines=True).strip() vcsfn = os.path.join(srcroot, "include", "vcs_version.h") try: i = open(vcsfn).readline() except IOError: i = "" ident = "/* " + v + " */\n" if i != ident: fo = open(vcsfn, "w") fo.write(ident) file_header(fo) fo.write('#define VCS_Version "%s"\n' % v) fo.close() for i in open(os.path.join(buildroot, "Makefile")): if i[:14] == "PACKAGE_STRING": break i = i.split("=")[1].strip() fo = open(os.path.join(srcroot, "include", "vmod_abi.h"), "w") file_header(fo) fo.write('#define VMOD_ABI_Version "%s %s"\n' % (i, v)) fo.close() varnish-7.5.0/include/libvcc.h000066400000000000000000000040211457605730600162450ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2009 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ struct vcc; struct vcc *VCC_New(void); void VCC_Builtin_VCL(struct vcc *, const char *); void VCC_VCL_path(struct vcc *, const char *); void VCC_VMOD_path(struct vcc *, const char *); void VCC_Predef(struct vcc *, const char *type, const char *name); void VCC_VCL_Range(unsigned *, unsigned *); void VCC_VEXT(struct vcc *, const char *); #define VCC_FEATURE_BIT(U, l, d) \ void VCC_Opt_ ## l(struct vcc *, unsigned); #include "tbl/vcc_feature_bits.h" int VCC_Compile(struct vcc *, struct vsb **, const char *, const char *, const char *, const char *); varnish-7.5.0/include/miniobj.h000066400000000000000000000046031457605730600164400ustar00rootroot00000000000000/* * Written by Poul-Henning Kamp * * This file is in the public domain. 
* */ #define ZERO_OBJ(to, sz) \ do { \ void *(*volatile z_obj)(void *, int, size_t) = memset; \ (void)z_obj(to, 0, sz); \ } while (0) #define INIT_OBJ(to, type_magic) \ do { \ (void)memset(to, 0, sizeof *(to)); \ (to)->magic = (type_magic); \ } while (0) #define FINI_OBJ(to) \ do { \ ZERO_OBJ(&(to)->magic, sizeof (to)->magic); \ to = NULL; \ } while (0) #define ALLOC_OBJ(to, type_magic) \ do { \ (to) = calloc(1, sizeof *(to)); \ if ((to) != NULL) \ (to)->magic = (type_magic); \ } while (0) #define ALLOC_OBJ_EXTRA(to, extra_size, type_magic) \ do { \ (to) = calloc(1, sizeof(*(to)) + (extra_size)); \ if ((to) != NULL) \ (to)->magic = (type_magic); \ } while (0) #define ALLOC_FLEX_OBJ(to, fld, len, type_magic) \ ALLOC_OBJ_EXTRA(to, (len) * sizeof *((to)->fld), (type_magic)) #define FREE_OBJ(to) \ do { \ ZERO_OBJ(&(to)->magic, sizeof (to)->magic); \ free(to); \ to = NULL; \ } while (0) #define VALID_OBJ(ptr, type_magic) \ ((ptr) != NULL && (ptr)->magic == (type_magic)) #define CHECK_OBJ(ptr, type_magic) \ do { \ assert((ptr)->magic == type_magic); \ } while (0) #define CHECK_OBJ_NOTNULL(ptr, type_magic) \ do { \ assert((ptr) != NULL); \ assert((ptr)->magic == type_magic); \ } while (0) #define CHECK_OBJ_ORNULL(ptr, type_magic) \ do { \ if ((ptr) != NULL) \ assert((ptr)->magic == type_magic); \ } while (0) #define CAST_OBJ(to, from, type_magic) \ do { \ (to) = (from); \ if ((to) != NULL) \ CHECK_OBJ((to), (type_magic)); \ } while (0) #define CAST_OBJ_NOTNULL(to, from, type_magic) \ do { \ (to) = (from); \ AN((to)); \ CHECK_OBJ((to), (type_magic)); \ } while (0) #define TAKE_OBJ_NOTNULL(to, pfrom, type_magic) \ do { \ AN((pfrom)); \ (to) = *(pfrom); \ *(pfrom) = NULL; \ CHECK_OBJ_NOTNULL((to), (type_magic)); \ } while (0) #define REPLACE(ptr, val) \ do { \ const char *_vreplace = (val); \ free(ptr); \ if (_vreplace != NULL) { \ ptr = strdup(_vreplace); \ AN((ptr)); \ } else { \ ptr = NULL; \ } \ } while (0) varnish-7.5.0/include/tbl/000077500000000000000000000000001457605730600154165ustar00rootroot00000000000000varnish-7.5.0/include/tbl/README000066400000000000000000000003421457605730600162750ustar00rootroot00000000000000The include files in this directory are special, they are used to define sets of objects using macros, such that a list of all such objects can be compiled into the code by including these files with a properly defined macro. varnish-7.5.0/include/tbl/acct_fields_bereq.h000066400000000000000000000032471457605730600212130ustar00rootroot00000000000000/*- * Copyright (c) 2008 Verdens Gang AS * Copyright (c) 2008-2014 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * These are the stats we keep track of per busyobj. */ /*lint -save -e525 -e539 */ ACCT(bereq_hdrbytes) ACCT(bereq_bodybytes) ACCT(beresp_hdrbytes) ACCT(beresp_bodybytes) #undef ACCT /*lint -restore */ varnish-7.5.0/include/tbl/acct_fields_req.h000066400000000000000000000033551457605730600207040ustar00rootroot00000000000000/*- * Copyright (c) 2008 Verdens Gang AS * Copyright (c) 2008-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * These are the stats we keep track of per request. * NB: Remember to mark those in vsc_fields.h to be included in struct dstat. */ /*lint -save -e525 -e539 */ ACCT(req_hdrbytes) ACCT(req_bodybytes) ACCT(resp_hdrbytes) ACCT(resp_bodybytes) #undef ACCT /*lint -restore */ varnish-7.5.0/include/tbl/backend_poll.h000066400000000000000000000034541457605730600202120ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ /*lint -save -e525 -e539 */ BITMAP(good_ipv4, '4', "Good IPv4", 0) BITMAP(good_ipv6, '6', "Good IPv6", 0) BITMAP(good_unix, 'U', "Good UNIX", 0) BITMAP( err_xmit, 'x', "Error Xmit", 0) BITMAP(good_xmit, 'X', "Good Xmit", 0) BITMAP( err_recv, 'r', "Error Recv", 0) BITMAP(good_recv, 'R', "Good Recv", 0) BITMAP(happy, 'H', "Happy", 1) #undef BITMAP /*lint -restore */ varnish-7.5.0/include/tbl/ban_arg_oper.h000066400000000000000000000045671457605730600202210ustar00rootroot00000000000000/*- * Copyright 2017 UPLEX - Nils Goroll Systemoptimierung * All rights reserved. * * Author: Nils Goroll * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright notice, * this list of conditions and the following disclaimer in the documentation * and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND ANY * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE * DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Define which operators are valid for which ban arguments */ /*lint -save -e525 -e539 */ #define BANS_OPER_STRING \ (1< * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright notice, * this list of conditions and the following disclaimer in the documentation * and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND ANY * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE * DISCLAIMED. 
IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * Define ban operators */ /*lint -save -e525 -e539 */ OPER(BANS_OPER_EQ, "==") OPER(BANS_OPER_NEQ, "!=") OPER(BANS_OPER_MATCH, "~") OPER(BANS_OPER_NMATCH, "!~") OPER(BANS_OPER_GT, ">") OPER(BANS_OPER_GTE, ">=") OPER(BANS_OPER_LT, "<") OPER(BANS_OPER_LTE, "<=") #undef OPER /*lint -restore */ varnish-7.5.0/include/tbl/ban_vars.h000066400000000000000000000042711457605730600173660ustar00rootroot00000000000000/*- * Copyright (c) 2008-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Define which variables we can ban on, and which function does it. * */ /*lint -save -e525 -e539 */ PVAR("req.url", BANS_FLAG_REQ, BANS_ARG_URL) PVAR("req.http.", BANS_FLAG_REQ | BANS_FLAG_HTTP, BANS_ARG_REQHTTP) PVAR("obj.status", BANS_FLAG_OBJ, BANS_ARG_OBJSTATUS) PVAR("obj.http.", BANS_FLAG_OBJ | BANS_FLAG_HTTP, BANS_ARG_OBJHTTP) PVAR("obj.ttl", BANS_FLAG_OBJ | BANS_FLAG_DURATION | BANS_FLAG_NODEDUP, BANS_ARG_OBJTTL) PVAR("obj.age", BANS_FLAG_OBJ | BANS_FLAG_DURATION | BANS_FLAG_NODEDUP, BANS_ARG_OBJAGE) PVAR("obj.grace", BANS_FLAG_OBJ | BANS_FLAG_DURATION, BANS_ARG_OBJGRACE) PVAR("obj.keep", BANS_FLAG_OBJ | BANS_FLAG_DURATION, BANS_ARG_OBJKEEP) #undef PVAR /*lint -restore */ varnish-7.5.0/include/tbl/bereq_flags.h000066400000000000000000000033451457605730600200460ustar00rootroot00000000000000/*- * Copyright (c) 2014-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ /*lint -save -e525 -e539 */ /* lower, vcl_r, vcl_w, doc */ BEREQ_FLAG(uncacheable, 0, 0, "") // also beresp BEREQ_FLAG(is_bgfetch, 1, 0, "") #define REQ_BEREQ_FLAG(lower, vcl_r, vcl_w, doc) \ BEREQ_FLAG(lower, vcl_r, vcl_w, doc) #include "tbl/req_bereq_flags.h" #undef BEREQ_FLAG /*lint -restore */ varnish-7.5.0/include/tbl/beresp_flags.h000066400000000000000000000034271457605730600202310ustar00rootroot00000000000000/*- * Copyright (c) 2014-2015 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ /*lint -save -e525 -e539 */ /* * filters: whether this flag determines beresp.filters default * * lower, vcl_r, vcl_w, filters, doc */ BERESP_FLAG(do_esi, 1, 1, 1, "") BERESP_FLAG(do_gzip, 1, 1, 1, "") BERESP_FLAG(do_gunzip, 1, 1, 1, "") BERESP_FLAG(do_stream, 1, 1, 0, "") BERESP_FLAG(was_304, 1, 0, 0, "") #undef BERESP_FLAG /*lint -restore */ varnish-7.5.0/include/tbl/boc_state.h000066400000000000000000000035111457605730600175320ustar00rootroot00000000000000/*- * Copyright (c) 2016 Varnish Software AS * All rights reserved. * * Author: Federico G. 
Schwindt * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /*lint -save -e525 -e539 */ BOC_STATE(INVALID, invalid) /* don't touch (yet) */ BOC_STATE(REQ_DONE, req_done) /* bereq.* can be examined */ BOC_STATE(PREP_STREAM, prep_stream) /* Prepare for streaming */ BOC_STATE(STREAM, stream) /* beresp.* can be examined */ BOC_STATE(FINISHED, finished) /* object is complete */ BOC_STATE(FAILED, failed) /* something went wrong */ #undef BOC_STATE /*lint -restore */ varnish-7.5.0/include/tbl/body_status.h000066400000000000000000000035511457605730600201330ustar00rootroot00000000000000/*- * Copyright (c) 2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Various ways to handle the body coming from the backend. 
*/ /*lint -save -e525 -e539 */ /* Upper lower nbr, avail len_known */ BODYSTATUS(NONE, none, 0, 0, 1) BODYSTATUS(ERROR, error, 1, -1, 0) BODYSTATUS(CHUNKED, chunked, 2, 1, 0) BODYSTATUS(LENGTH, length, 3, 1, 1) BODYSTATUS(EOF, eof, 4, 1, 0) BODYSTATUS(TAKEN, taken, 5, 0, 0) BODYSTATUS(CACHED, cached, 6, 2, 1) #undef BODYSTATUS /*lint -restore */ varnish-7.5.0/include/tbl/cli_cmds.h000066400000000000000000000277631457605730600173630ustar00rootroot00000000000000/*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp * * SPDX-License-Identifier: BSD-2-Clause * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * These macros define the common data for requests in the CLI protocol. 
* The fields are: * const char * upper-case C-ident request_name * const char * request_name * const char * request_syntax (for short help) * const char * request_help (for long help) * const char * documentation (for sphinx) * int minimum_arguments * int maximum_arguments */ /*lint -save -e525 -e539 */ CLI_CMD(BAN, "ban", "ban [&& ...]", "Mark obsolete all objects where all the conditions match.", " See :ref:`vcl(7)_ban` for details", 3, -1 ) CLI_CMD(BAN_LIST, "ban.list", "ban.list [-j]", "List the active bans.", " Unless ``-j`` is specified for JSON output, " " the output format is:\n\n" " * Time the ban was issued.\n\n" " * Objects referencing this ban.\n\n" " * ``C`` if ban is completed = no further testing against it.\n\n" " * if ``lurker`` debugging is enabled:\n\n" " * ``R`` for req.* tests\n\n" " * ``O`` for obj.* tests\n\n" " * Pointer to ban object\n\n" " * Ban specification\n\n" " Durations of ban specifications get normalized, for example \"7d\"" " gets changed into \"1w\".", 0, 0 ) CLI_CMD(VCL_LOAD, "vcl.load", "vcl.load [auto|cold|warm]", "Compile and load the VCL file under the name provided.", "", 2, 3 ) CLI_CMD(VCL_INLINE, "vcl.inline", "vcl.inline [auto|cold|warm]", "Compile and load the VCL data under the name provided.", " Multi-line VCL can be input using the here document" " :ref:`ref_syntax`.", 2, 3 ) CLI_CMD(VCL_STATE, "vcl.state", "vcl.state auto|cold|warm", " Force the state of the named configuration.", "", 2, 2 ) CLI_CMD(VCL_DISCARD, "vcl.discard", "vcl.discard ...", "Unload the named configurations (when possible).", " Unload the named configurations and labels matching at least" " one name pattern. All matching configurations and labels" " are discarded in the correct order with respect to potential" " dependencies. If one configuration or label could not be" " discarded because one of its dependencies would remain," " nothing is discarded." " Each individual name pattern must match at least one named" " configuration or label.", 1, -1 ) CLI_CMD(VCL_LIST, "vcl.list", "vcl.list [-j]", "List all loaded configuration.", " Unless ``-j`` is specified for JSON output, " " the output format is five or seven columns of dynamic width, " " separated by white space with the fields:\n\n" " * status: active, available or discarded\n\n" " * state: label, cold, warm, or auto\n\n" " * temperature: init, cold, warm, busy or cooling\n\n" " * busy: number of references to this vcl (integer)\n\n" " * name: the name given to this vcl or label\n\n" " * [ ``<-`` | ``->`` ] and label info last two fields)\n\n" " * ``->`` : label \"points to\" the named \n\n" " * ``<-`` ( label[s]): the vcl has label(s)\n\n", 0, 0 ) CLI_CMD(VCL_DEPS, "vcl.deps", "vcl.deps [-j]", "List all loaded configuration and their dependencies.", " Unless ``-j`` is specified for JSON output, the" " output format is up to two columns of dynamic width" " separated by white space with the fields:\n\n" " * VCL: a VCL program\n\n" " * Dependency: another VCL program it depends on\n\n" " Only direct dependencies are listed, and VCLs with" " multiple dependencies are listed multiple times.", 0, 0 ) CLI_CMD(VCL_SHOW, "vcl.show", "vcl.show [-v] []", "Display the source code for the specified configuration.", "", 0, 2 ) CLI_CMD(VCL_USE, "vcl.use", "vcl.use ", "Switch to the named configuration immediately.", "", 1, 1 ) CLI_CMD(VCL_LABEL, "vcl.label", "vcl.label