pax_global_header00006660000000000000000000000064134204222160014506gustar00rootroot0000000000000052 comment=84b80fbb98ef77d43324ddc2ebf62c090f29bce5 puppetdb-6.2.0/000075500000000000000000000000001342042221600133345ustar00rootroot00000000000000puppetdb-6.2.0/.dockerignore000064400000000000000000000000771342042221600160140ustar00rootroot00000000000000* !project.clj !src !resources !documentation !docker/puppetdb puppetdb-6.2.0/.gitattributes000064400000000000000000000004301342042221600162240ustar00rootroot00000000000000docker/puppetdb/* text eol=lf docker/puppetdb/conf.d/* text eol=lf docker/puppetdb/logging/* text eol=lf docker/puppetdb/docker-entrypoint.d/* text eol=lf puppetdb-6.2.0/.gitignore000064400000000000000000000017161342042221600153310ustar00rootroot00000000000000 # Leiningen /target/ /target-gems/ /pom.xml # Acceptance tests /junit/ /log/ # Emacs *# *~ .#* # Local overide for testing config /test-resources/config/local.clj # Vagrant /.vagrant/ # A generalized temporary area for local use, tmp for # general temporary items, and scripts for any local # scripts. /tmp /scripts # Bundler files /vendor /Gemfile.lock # Used by Puppet Labs packaging system /ext/packaging/ # VIM swap files *.swp # The leiningen temporary/local files .lein-failures .lein-deps-sum .lein-repl-history # lein-checkouts folder checkouts # RVM files for localised setups .ruby-gemset .ruby-version # clj-i18n /resources/puppetlabs/puppetdb/*.class /resources/locales.clj /mp-* /dev-resources/i18n/bin /.bundle/ /ext/test-conf/pgbin-requested /ext/test-conf/pgport-requested /ext/test-conf/pgver-requested /ext/test-conf/puppet-ref-requested /ext/test-conf/puppetserver-dep /ext/test-conf/puppetserver-ref-requested /puppet/state/ /puppetserver/ puppetdb-6.2.0/.travis.yml000064400000000000000000000155751342042221600154620ustar00rootroot00000000000000language: generic dist: trusty # Always explicitly set sudo. Otherwise travis' defaults may vary # based on when the repository testing was enabled. 
sudo: required services: - docker # The test specifications are all extracted from the PDB_TEST value. # jdk_switcher is a shell function, so we can't handle it in # prep-os-essentials-for: # https://github.com/travis-ci/travis-ci/issues/9927 # We explicitly set up lein and pgbox at the top level so that we can # use them in commands like test-config, and so we can't end up doing # it multiple times if any of the (e.g. boxed-) sub-commands also make # the attempt. aliases: - &run-core-and-ext-tests | set -e jdk="$(ext/bin/jdk-from-spec "$PDB_TEST")" jdkver="${jdk##*jdk}" ext/travisci/prep-os-essentials-for "$PDB_TEST" case "$OSTYPE" in linux*) if test "$jdkver" -lt 9 ; then jdk_switcher use "$jdk"; else ext/bin/require-jdk "$jdk" ext/travisci/local export JAVA_HOME="$(pwd)/ext/travisci/local/jdk" export PATH="$JAVA_HOME/bin:$PATH" fi ;; darwin*) export JAVA_HOME="/Library/Java/JavaVirtualMachines/adoptopenjdk-$jdkver.jdk/Contents/Home" export PATH="$JAVA_HOME/bin:$PATH" hash -r ;; *) echo "$OSTYPE is not a supported system" 1>&2 exit 2 ;; esac mkdir -p ext/travisci/local export PATH="$(pwd)/ext/travisci/local/bin:$PATH" ext/bin/require-leiningen default ext/travisci/local ext/bin/require-pgbox default ext/travisci/local pgver="$(ext/travisci/prefixed-ref-from-spec "$PDB_TEST" pg-)" ext/bin/test-config --set pgver "$pgver" ext/bin/test-config --set pgport 34335 ext/bin/check-spec-env "$PDB_TEST" ext/bin/boxed-core-tests -- lein test ext/bin/run-external-tests - &run-integration-tests | set -e jdk="$(ext/bin/jdk-from-spec "$PDB_TEST")" jdkver="${jdk##*jdk}" ext/travisci/prep-os-essentials-for "$PDB_TEST" case "$OSTYPE" in linux*) if test "$jdkver" -lt 9 ; then jdk_switcher use "$jdk"; else ext/bin/require-jdk "$jdk" ext/travisci/local export JAVA_HOME="$(pwd)/ext/travisci/local/jdk" export PATH="$JAVA_HOME/bin:$PATH" fi ;; darwin*) export JAVA_HOME="/Library/Java/JavaVirtualMachines/adoptopenjdk-$jdkver.jdk/Contents/Home" export PATH="$JAVA_HOME/bin:$PATH" hash -r ;; 
*) echo "$OSTYPE is not a supported system" 1>&2 exit 2 ;; esac mkdir -p ext/travisci/local export PATH="$(pwd)/ext/travisci/local/bin:$PATH" ext/bin/require-leiningen default ext/travisci/local ext/bin/require-pgbox default ext/travisci/local pgver="$(ext/travisci/prefixed-ref-from-spec "$PDB_TEST" pg-)" puppet="$(ext/travisci/prefixed-ref-from-spec "$PDB_TEST" pup-)" server="$(ext/travisci/prefixed-ref-from-spec "$PDB_TEST" srv-)" ext/bin/test-config --set pgver "$pgver" ext/bin/test-config --set pgport 34335 ext/bin/test-config --set puppet-ref "$puppet" ext/bin/test-config --set puppetserver-ref "$server" PDB_TEST_RICH_DATA="$(ext/travisci/spec-includes "$PDB_TEST" rich)" export PDB_TEST_RICH_DATA ext/bin/check-spec-env "$PDB_TEST" ext/bin/boxed-integration-tests -- lein test :integration - &run-spec-tests | set -e puppet_ref="$(ext/travisci/prefixed-ref-from-spec "$PDB_TEST" pup-)" ext/bin/check-spec-env "$PDB_TEST" ext/bin/run-rspec-tests "$puppet_ref" - &run-docker-tests | set -ex cd docker make lint make build make test jobs: include: # === core+ext tests - stage: ❧ pdb tests env: PDB_TEST=core+ext/openjdk8/pg-9.6 script: *run-core-and-ext-tests - stage: ❧ pdb tests env: PDB_TEST=core+ext/oraclejdk8/pg-9.6 script: *run-core-and-ext-tests - stage: ❧ pdb tests env: PDB_TEST=core+ext/openjdk10/pg-9.6 script: *run-core-and-ext-tests - stage: ❧ pdb tests env: PDB_TEST=core+ext/openjdk8/pg-11 script: *run-core-and-ext-tests # === integration with master branches - stage: ❧ pdb tests env: PDB_TEST=int/openjdk10/pup-master/srv-master/pg-9.6 script: *run-integration-tests - stage: ❧ pdb tests env: PDB_TEST=int/openjdk10/pup-master/srv-master/pg-9.6/rich script: *run-integration-tests - stage: ❧ pdb tests env: PDB_TEST=int/openjdk8/pup-master/srv-master/pg-9.6/rich script: *run-integration-tests - stage: ❧ pdb tests env: PDB_TEST=int/oraclejdk8/pup-master/srv-master/pg-9.6/rich script: *run-integration-tests - stage: ❧ pdb tests env: 
PDB_TEST=int/openjdk8/pup-master/srv-master/pg-11 script: *run-integration-tests # === integration with current platform - stage: ❧ pdb tests env: PDB_TEST=int/openjdk10/pup-6.0.x/srv-6.0.x/pg-9.6 script: *run-integration-tests - stage: ❧ pdb tests env: PDB_TEST=int/openjdk10/pup-6.0.x/srv-6.0.x/pg-9.6/rich script: *run-integration-tests - stage: ❧ pdb tests env: PDB_TEST=int/openjdk8/pup-6.0.x/srv-6.0.x/pg-9.6/rich script: *run-integration-tests - stage: ❧ pdb tests env: PDB_TEST=int/oraclejdk8/pup-6.0.x/srv-6.0.x/pg-9.6/rich script: *run-integration-tests # === rspec tests - stage: ❧ pdb tests env: PDB_TEST=rspec/pup-6.0.x script: *run-spec-tests - stage: ❧ pdb tests env: PDB_TEST=rspec/pup-5.5.x script: *run-spec-tests # ==== osx # === core+ext tests - stage: ❧ pdb tests env: PDB_TEST=core+ext/openjdk8/pg-9.6 script: *run-core-and-ext-tests os: osx - stage: ❧ pdb tests env: PDB_TEST=core+ext/openjdk10/pg-9.6 script: *run-core-and-ext-tests os: osx # === integration tests - stage: ❧ pdb tests env: PDB_TEST=int/openjdk8/pup-master/srv-master/pg-9.6/rich script: *run-integration-tests os: osx - stage: ❧ pdb tests env: PDB_TEST=int/openjdk10/pup-master/srv-master/pg-9.6/rich script: *run-integration-tests os: osx # === rspec tests - stage: ❧ pdb tests env: PDB_TEST=rspec/pup-6.0.x script: *run-spec-tests os: osx - stage: ❧ pdb container tests script: *run-docker-tests on_success: ext/travisci/on-success notifications: email: false slack: template: - "<%{compare_url}|%{commit_subject}> | %{author}" - "%{repository_slug} %{branch} | <%{build_url}|#%{build_number}> %{result} in %{elapsed_time}" rooms: secure: IJU0YgGYbKgM7NupaOmE2BYra2mNx7+e5vAYNL+5oaRXolbHCyg0WzfFWilhMK3KEi8oIMKXR4ZzoUZLAqeOQzX7nnsLqC3wjyDHCgxtp4O+5GNKyeLN4ItoI1f2d6qyiiBPkHgVPuLhG3yyQ+wD0dMc9vSYmxfoazqe9HD/9UE= cache: directories: - $HOME/.m2 - $HOME/Library/Caches/Homebrew - vendor/bundle/ruby - ext/travisci/local/jdk 
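The build matrix above derives its JDK and PostgreSQL versions from the `PDB_TEST` spec string via helper scripts (`ext/bin/jdk-from-spec`, `ext/travisci/prefixed-ref-from-spec`) that are not reproduced in this listing. A minimal POSIX-shell sketch of the same field extraction, under the assumption that spec fields are simply slash-separated:

```shell
# Illustrative sketch only (not the real ext/bin helpers): pull the jdk
# field and the pg- prefixed field out of a PDB_TEST spec string such as
# "core+ext/openjdk8/pg-9.6".
spec="core+ext/openjdk8/pg-9.6"
jdk=''
pgver=''
# Split on '/' and inspect each field in turn.
for field in $(echo "$spec" | tr '/' ' '); do
  case "$field" in
    *jdk*) jdk="$field" ;;          # e.g. openjdk8, oraclejdk8
    pg-*)  pgver="${field#pg-}" ;;  # strip the pg- prefix
  esac
done
echo "jdk=$jdk pgver=$pgver"   # prints: jdk=openjdk8 pgver=9.6
```

The real helpers may apply stricter validation; this only shows the shape of the spec convention used throughout the job definitions.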
puppetdb-6.2.0/Gemfile000064400000000000000000000044551342042221600146370ustar00rootroot00000000000000gemfile_home = File.dirname(__FILE__) source ENV['GEM_SOURCE'] || "https://rubygems.org" oldest_supported_puppet = "5.0.0" beaker_version = ENV['BEAKER_VERSION'] begin puppet_ref = File.read(gemfile_home + '/ext/test-conf/puppet-ref-requested').strip rescue Errno::ENOENT puppet_ref = File.read(gemfile_home + '/ext/test-conf/puppet-ref-default').strip end def location_for(place, fake_version = nil) if place =~ /^(git:[^#]*)#(.*)/ [fake_version, { :git => $1, :branch => $2, :require => false }].compact elsif place =~ /^file:\/\/(.*)/ ['>= 0', { :path => File.expand_path($1), :require => false }] else [place, { :require => false }] end end gem 'facter' gem 'rake' gem 'packaging', *location_for(ENV['PACKAGING_VERSION'] || '~> 0.99') group :test do # Add test-unit for ruby 2.2+ support (has been removed from stdlib) gem 'test-unit' # Pinning for Ruby 1.9.3 support gem 'json_pure', '~> 1.8' # Pinning for Ruby < 2.2.0 support gem 'activesupport', '~> 4.2' # addressable 2.5 requires public_suffix, which requires ruby 2. gem 'addressable', '< 2.5.0' # Pinning to work-around an incompatiblity with 2.14 in puppetlabs_spec_helper gem 'rspec', '~> 3.1' gem 'puppetlabs_spec_helper', '0.10.3', :require => false # docker-api 1.32.0 requires ruby 2.0.0 gem 'docker-api', '1.31.0' case puppet_ref when "latest" gem 'puppet', ">= #{oldest_supported_puppet}", :require => false when "oldest" gem 'puppet', oldest_supported_puppet, :require => false else gem 'puppet', :git => 'https://github.com/puppetlabs/puppet.git', :ref => puppet_ref, :require => false end gem 'mocha', '~> 1.0' end # This is a workaround for a bug in bundler, where it likes to look at ruby # version deps regardless of what groups you want or not. This lets us # conditionally shortcut evaluation entirely. 
if ENV['NO_ACCEPTANCE'] != 'true' group :acceptance do if beaker_version #use the specified version gem 'beaker', *location_for(beaker_version) else # use the pinned version gem 'beaker', '~> 4.1' end end gem 'beaker-hostgenerator', '1.1.13' gem 'beaker-abs', *location_for(ENV['BEAKER_ABS_VERSION'] || '~> 0.2') gem 'beaker-vmpooler', *location_for(ENV['BEAKER_VMPOOLER_VERSION'] || "~> 1.3") gem 'beaker-puppet', '~> 1.0' end puppetdb-6.2.0/LICENSE.txt000064400000000000000000000261361342042221600151670ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. 
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. 
Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. 
Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. puppetdb-6.2.0/MAINTAINERS000064400000000000000000000014631342042221600150350ustar00rootroot00000000000000{ "version": 1, "file_format": "This MAINTAINERS file format is described at http://pup.pt/maintainers", "issues": "https://tickets.puppetlabs.com/browse/PDB", "internal_list": "https://groups.google.com/a/puppet.com/forum/?hl=en#!forum/discuss-puppetdb-maintainers", "people": [ { "github": "wkalt", "email": "wyatt@puppet.com", "name": "Wyatt Alt" }, { "github": "kbarber", "email": "ken@puppet.com", "name": "Ken Barber" }, { "github": "ajroetker", "email": "andrew.roetker@puppet.com", "name": "Andrew Roetker" }, { "github": "mullr", "email": "russell.mull@puppet.com", "name": "Russell Mull" }, { "github": "rbrw", "email": "rlb@puppet.com", "name": "Rob Browning" } ] } puppetdb-6.2.0/Makefile000064400000000000000000000000441342042221600147720ustar00rootroot00000000000000include dev-resources/Makefile.i18n puppetdb-6.2.0/NOTICE.txt000064400000000000000000000012411342042221600150540ustar00rootroot00000000000000PuppetDB - Centralized Puppet Storage Copyright 2011-2016 Puppet Inc This product includes software developed at Puppet Inc (http://puppet.com/). Licensed under the Apache License, Version 2.0 (the "License"); you may not use this software except in compliance with the License. 
You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. puppetdb-6.2.0/README.md000064400000000000000000000015641342042221600146210ustar00rootroot00000000000000# PuppetDB [![Build Status](https://travis-ci.org/puppetlabs/puppetdb.svg?branch=master)](https://travis-ci.org/puppetlabs/puppetdb) [docs]: https://docs.puppet.com/puppetdb/latest [contributing]: documentation/CONTRIBUTING.md [users]: https://groups.google.com/forum/#!forum/puppet-users PuppetDB is the fast, scalable, and reliable data warehouse for Puppet. It caches data generated by Puppet, and gives you advanced features at awesome speed with a powerful API. For documentation on this product, consult the [latest documentation][docs]. ## Contributing If you would like to contribute to PuppetDB, please take a look at our [contributing doc][contributing]. ## Maintenance Community Help: [Puppet Users Mailing List][users], [Puppet Community Portal](https://puppet.com/community) Tickets: [https://tickets.puppet.com/browse/PDB](https://tickets.puppet.com/browse/PDB) puppetdb-6.2.0/Rakefile000064400000000000000000000052071342042221600150050ustar00rootroot00000000000000require 'rake' def run_beaker(test_files) config = ENV["BEAKER_CONFIG"] || "acceptance/config/vmpooler.cfg" options = ENV["BEAKER_OPTIONS"] || "acceptance/options/postgres.rb" preserve_hosts = ENV["BEAKER_PRESERVE_HOSTS"] || "never" no_provision = ENV["BEAKER_NO_PROVISION"] == "true" ? true : false color = ENV["BEAKER_COLOR"] == "false" ? false : true xml = ENV["BEAKER_XML"] == "true" ? 
true : false type = ENV["BEAKER_TYPE"] || "aio" keyfile = ENV["BEAKER_KEYFILE"] || nil collect_perf_data = ENV["BEAKER_COLLECT_PERF_DATA"] || nil beaker = "bundle exec beaker " + "-c '#{config}' " + "--type #{type} " + "--debug " + "--tests " + test_files + " " + "--options-file '#{options}' " + "--root-keys " + "--preserve-hosts #{preserve_hosts}" beaker += " --keyfile #{keyfile}" if keyfile beaker += " --no-color" unless color beaker += " --xml" if xml beaker += " --collect-perf-data #{collect_perf_data}" if collect_perf_data beaker += " --no-provision" if no_provision sh beaker end namespace :beaker do desc "Run beaker based acceptance tests" task :acceptance, [:test_files] do |t, args| args.with_defaults(:test_files => 'acceptance/tests/') run_beaker(args[:test_files]) end desc "Run beaker based performance tests" task :performance, :test_files do |t, args| args.with_defaults(:test_files => 'acceptance/performance/') run_beaker(args[:test_files]) end desc "Run beaker based acceptance tests, leaving VMs in place for later runs" task :first_run, [:test_files] do |t, args| args.with_defaults(:test_files => 'acceptance/tests/') ENV["BEAKER_PRESERVE_HOSTS"] = "always" run_beaker(args[:test_files]) end desc "Re-run beaker based acceptance tests, using previously provisioned VMs" task :rerun, [:test_files] do |t, args| args.with_defaults(:test_files => 'acceptance/tests/') ENV["PUPPETDB_SKIP_PRESUITE_PROVISIONING"] = "true" ENV["BEAKER_NO_PROVISION"] = "true" ENV["BEAKER_PRESERVE_HOSTS"] = "always" run_beaker(args[:test_files]) end desc "List your VMs in ec2" task :list_vms do sh 'aws ec2 describe-instances --filters "Name=key-name,Values=Beaker-${USER}*" --query "Reservations[*].Instances[*].[InstanceId, State.Name, PublicIpAddress]" --output table' end end begin require 'packaging' Pkg::Util::RakeUtils.load_packaging_tasks rescue LoadError => e puts "Error loading packaging rake tasks: #{e}" end namespace :package do task :bootstrap do puts 'Bootstrap is no longer 
needed, using packaging-as-a-gem' end task :implode do puts 'Implode is no longer needed, using packaging-as-a-gem' end end puppetdb-6.2.0/acceptance/000075500000000000000000000000001342042221600154225ustar00rootroot00000000000000puppetdb-6.2.0/acceptance/README.md000064400000000000000000000206111342042221600167010ustar00rootroot00000000000000Acceptance Testing ------------------ This README outlines how to run tests using the system testing framework `beaker`, specifically for PuppetDB. ## Quick Start The acceptance tests utilise the gem `beaker`, and are initiated by the rake task `rake test:beaker`. An example of how to run the tests: $ bundle install $ rake beaker:acceptance However, there are a number of environment variables that can be utilised to modify the build. An example of how to run a package based build, with vagrant/virtualbox on Centos 6 would be: PUPPETDB_INSTALL_TYPE='package' \ PUPPETDB_EXPECTED_RPM_VERSION="1.3.3.73-1" \ rake beaker:acceptance Source based build: rake beaker:acceptance EC2 build from source on Debian 6: BEAKER_CONFIG="ec2-west-debian6-64mda-64a" \ rake beaker:acceptance EC2 build with packages on Debian 6: BEAKER_CONFIG="ec2-west-debian6-64mda-64a" \ PUPPETDB_INSTALL_TYPE='package' \ PUPPETDB_PACKAGE_REPO_URL="http://puppetdb-prerelease.s3.amazonaws.com/puppetdb/1.3.x" \ PUPPETDB_EXPECTED_DEB_VERSION="1.3.3.73-1puppetlabs1" \ rake beaker:acceptance Note that the tests currently depend on the Puppet internal lein-ezbake project, so running them requires either access to the Puppet VPN during the test run, or a local copy of lein-ezbake. 
For the latter, run these commands once, while connected to the VPN: repo='http://nexus.delivery.puppetlabs.net/content/repositories/releases/' mvn org.apache.maven.plugins:maven-dependency-plugin:2.7:get \ -Dtransitive=false \ -DrepoUrl="$repo" \ -Dartifact=puppetlabs:lein-ezbake:0.2.2 \ -Ddest=lein-ezbake-0.2.2.jar mvn org.apache.maven.plugins:maven-dependency-plugin:2.7:get \ -Dtransitive=false \ -DrepoUrl="$repo" \ -Dartifact=puppetlabs:lein-ezbake:0.2.2:pom \ -Ddest=lein-ezbake-0.2.2.pom And then this before every beaker:acceptance invocation: mvn org.apache.maven.plugins:maven-install-plugin:2.5.2:install-file \ -DpomFile=lein-ezbake-0.2.2.pom \ -Dfile=lein-ezbake-0.2.2.jar \ -DlocalRepositoryPath=tmp/m2-local You can find the required version of ezbake in project.clj. ## How to set options ## PuppetDB Specific Options You can set these options in one of two ways; either by specifying them as a key-value pair in the hash that you return from your "--options" file, or by setting environment variables. The symbols listed below are the keys you would put in your hash; the environment variable names are the same but uppercased (shown in parens below, for all of your copy'n'paste needs): * `:puppetdb_install_type` (`PUPPETDB_INSTALL_TYPE`) : Can be one of `:git`, `:pe` or `:package`. This dictates how the software gets installed. * `:install_mode` (`INSTALL_MODE`) : This setting is only relevant when our `install_type` is set to `:package` (meaning that we are running the tests against OS packages rather than against source code pulled from git). Legal values are `:install` and `:upgrade`; if set to `:install`, the test will be run against a freshly-installed package built from the latest development source code. If set to `:upgrade`, we will first install the latest release packages, then start puppetdb, then upgrade to the new package built from development source code. 
* `:puppetdb_validate_package_version` (`PUPPETDB_VALIDATE_PACKAGE_VERSION`) : This boolean determines whether or not we attempt to verify that the installed package version matches what we expect. This is mostly useful in CI, to make sure we're testing the right code. Legal values are `:true` and `:false`. * `:puppetdb_expected_rpm_version` (`PUPPETDB_EXPECTED_RPM_VERSION`) : This is only relevant if `:puppetdb_validate_package_version` is set to `:true`. In that case, this setting may contain a version number string (including the rpm release specifier), and the tests will fail on RedHat-based systems if, when we install the development puppetdb package, its version number does not match this string. * `:puppetdb_expected_deb_version` (`PUPPETDB_EXPECTED_DEB_VERSION`) : This is only relevant if `:puppetdb_validate_package_version` is set to `:true`. In that case, this setting may contain a version number string, and the tests will fail on Debian-based systems if, when we install the development puppetdb package, its version number does not match this string. * `:puppetdb_use_proxies` (`PUPPETDB_USE_PROXIES`) : This determines whether or not the test run will configure package managers (apt/yum) and maven to use our internal proxy server. This can provide a big speedup for CI, but is not desirable for remote employees during development. Legal values are `:true` and `:false`. * `:puppetdb_purge_after_run` (`PUPPETDB_PURGE_AFTER_RUN`) : This determines whether or not the post-suite cleanup phase will remove packages and perform exhaustive cleanup after the run. This is useful if you would like to avoid resetting VMs between every run of the acceptance tests. Defaults to `:true`. * `:puppetdb_package_build_host` (`PUPPETDB_PACKAGE_BUILD_HOST`) : This specifies the hostname where the final packages built by the packaging job are available. 
This should typically not need to be overridden, as it defaults to the well-known host name provided by our release engineering team. * `:puppetdb_package_repo_host` (`PUPPETDB_PACKAGE_REPO_HOST`): This specifies the hostname where the final apt/yum repos will be deployed and accessible to the test nodes. When testing under EC2, this will be an S3 "hostname" that differs from our default for internal VMs. This should be overridden by Jenkins jobs that are running in EC2. * `:puppetdb_package_repo_url` (`PUPPETDB_PACKAGE_REPO_URL`) : By default, the test setup will install the latest 'master' branch of puppetdb dev packages from puppetlabs.lan; however, if this option is set, then it will try to use the apt/yum repos from that url (appending 'debian', 'el', etc.) instead. This is required for jobs that will be running outside of the puppetlabs LAN. * `:puppetdb_repo_(puppet|facter|hiera)` (`PUPPETDB_REPO_(PUPPET|FACTER|HIERA)`) : Specify the git repository and reference to use for installing Puppet/Facter/Hiera from source. This is primarily so we can test against alternate versions of Puppet; if the Puppet repo is not specified, we fall back to using packages. * `:puppetdb_git_ref` (`REF`) : Specify the git ref that the tests should be run against. This should almost always be passed in by the Jenkins job, and not overridden by configuration. ## Beaker Specific Options These options are only environment variables, and are specific to the `beaker:acceptance` rake task. ###`BEAKER_TYPE` Can be one of: * _git_: this changes the installation type to use `git`, and is really associated with FOSS based tests. * _pe_: this changes the installation type for `pe` based tests. The default is `git`. *Note:* In the future this option will be removed from Beaker. ###`BEAKER_CONFIG` This variable allows you to select a host configuration. See the directory `acceptance/config/` for a list of available configurations (the .cfg extension is not necessary).
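As in the Quick Start, these `BEAKER_*` variables are set on the same command line as the PuppetDB-specific options. A hypothetical dry run that only echoes the invocation rather than executing rake (the config name is one of the example files shipped under `acceptance/config/`):

```shell
# Hypothetical dry run: echo the command instead of executing rake, to
# review how the BEAKER_* variables compose. Values are examples only.
BEAKER_TYPE="git"
BEAKER_CONFIG="ec2-west-debian7-64mda-64a"
echo "BEAKER_TYPE=$BEAKER_TYPE BEAKER_CONFIG=$BEAKER_CONFIG rake beaker:acceptance"
```

Dropping the `echo` runs the acceptance suite for real with those settings.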
###`BEAKER_OPTIONS` This allows one to choose from a number of different configuration option files, including: * postgres * embedded\_db * puppet\_enterprise These files are listed in the directory `acceptance/options`, so you should look at the details in those files for more information. ###`BEAKER_PRESERVE_HOSTS` When this setting is `always` it will prevent the removal of the virtual machines once the task has completed. `onfail` will leave them behind only if they have failed. The default is `never`. ###`BEAKER_COLOR` Set this to `true` to enable color in Beaker output, or `false` to disable it. ###`BEAKER_XML` Set this to `true` to produce XML that can be consumed by Jenkins for test reporting. ## Extra Information ### EC2 Benchmarking We determined there was a cost/benefit sweet spot by moving to `c1.medium` from `m1.small`; the calculations are below:

    42 minutes, 6 cents: m1.small <-- legacy
    24 minutes, 12 cents: m1.medium
    19 minutes, 14.5 cents: c1.medium <-- good bang for buck
    19 minutes, 24 cents: m1.large
    17 minutes, 48 cents: m1.xlarge
    15 minutes, 41 cents: m2.xlarge <-- next best jump after c1.medium?
15 minutes, 82 cents: m2.2xlarge 14 minutes, 50 cents: m3.xlarge puppetdb-6.2.0/acceptance/config/000075500000000000000000000000001342042221600166675ustar00rootroot00000000000000puppetdb-6.2.0/acceptance/config/ec2-west-debian7-64mda-64a.cfg000064400000000000000000000010241342042221600236060ustar00rootroot00000000000000HOSTS: debian7-64-1: roles: - master - database - dashboard - agent vmname: debian-7-amd64-west platform: debian-7-amd64 amisize: c3.large hypervisor: ec2 snapshot: foss debian7-64-2: roles: - agent vmname: debian-7-amd64-west platform: debian-7-amd64 amisize: c3.large hypervisor: ec2 snapshot: foss CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3cb28658 subnet_ids: - subnet-8990e3ff - subnet-11d78c75 - subnet-afaa18f7 puppetdb-6.2.0/acceptance/config/ec2-west-debian7-64mda-fallback.cfg000064400000000000000000000010271342042221600247560ustar00rootroot00000000000000HOSTS: debian7-64-1: roles: - master - database - dashboard - agent vmname: debian-7-amd64-west platform: debian-7-amd64 amisize: c3.large hypervisor: ec2 snapshot: foss debian7-64-2: roles: - database vmname: debian-7-amd64-west platform: debian-7-amd64 amisize: c3.large hypervisor: ec2 snapshot: foss CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3cb28658 subnet_ids: - subnet-8990e3ff - subnet-11d78c75 - subnet-afaa18f7 puppetdb-6.2.0/acceptance/config/ec2-west-debian8-64mda-64a.cfg000064400000000000000000000010241342042221600236070ustar00rootroot00000000000000HOSTS: debian8-64-1: roles: - master - database - dashboard - agent vmname: debian-8-amd64-west platform: debian-8-amd64 amisize: c3.large hypervisor: ec2 snapshot: foss debian8-64-2: roles: - agent vmname: debian-8-amd64-west platform: debian-8-amd64 amisize: c3.large hypervisor: ec2 snapshot: foss CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3cb28658 subnet_ids: - subnet-8990e3ff - subnet-11d78c75 - subnet-afaa18f7 
puppetdb-6.2.0/acceptance/config/ec2-west-dev.cfg000064400000000000000000000005361342042221600215610ustar00rootroot00000000000000HOSTS: debian7-64-1: roles: - master - database - dashboard - agent vmname: debian-7-amd64-west platform: debian-7-amd64 amisize: c3.large hypervisor: ec2 snapshot: foss # ip: 54.186.214.143 CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3ea7605b subnet_ids: - subnet-4e768a39 puppetdb-6.2.0/acceptance/config/ec2-west-el5-64mda-el5-64a.cfg000064400000000000000000000010001342042221600234370ustar00rootroot00000000000000HOSTS: el5-64-1: roles: - master - database - dashboard - agent vmname: el-5-x86_64-west platform: el-5-x86_64 amisize: c3.large hypervisor: ec2 snapshot: foss el5-64-2: roles: - agent vmname: el-5-x86_64-west platform: el-5-x86_64 amisize: c3.large hypervisor: ec2 snapshot: foss CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3cb28658 subnet_ids: - subnet-8990e3ff - subnet-11d78c75 - subnet-afaa18f7 puppetdb-6.2.0/acceptance/config/ec2-west-el6-64mda-el5-64a-ubuntu1204-64a.cfg000064400000000000000000000012571342042221600256750ustar00rootroot00000000000000HOSTS: el6-64-1: roles: - master - database - dashboard - agent vmname: el-6-x86_64-west platform: el-6-x86_64 amisize: c3.large hypervisor: ec2 snapshot: foss el5-64-1: roles: - agent vmname: el-5-x86_64-west platform: el-5-x86_64 amisize: c3.large hypervisor: ec2 snapshot: foss ubuntu-1204-64-1: roles: - agent vmname: ubuntu-12.04-amd64-west platform: ubuntu-12.04-amd64 amisize: c3.large hypervisor: ec2 snapshot: foss CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3cb28658 subnet_ids: - subnet-8990e3ff - subnet-11d78c75 - subnet-afaa18f7 puppetdb-6.2.0/acceptance/config/ec2-west-el6-64mda-el6-64a.cfg000064400000000000000000000010001342042221600234410ustar00rootroot00000000000000HOSTS: el6-64-1: roles: - master - database - dashboard - agent vmname: el-6-x86_64-west platform: el-6-x86_64 amisize: c3.large hypervisor: ec2 snapshot: foss el6-64-2: roles: - 
agent vmname: el-6-x86_64-west platform: el-6-x86_64 amisize: c3.large hypervisor: ec2 snapshot: foss CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3cb28658 subnet_ids: - subnet-8990e3ff - subnet-11d78c75 - subnet-afaa18f7 puppetdb-6.2.0/acceptance/config/ec2-west-el6-64mda-perf.cfg000064400000000000000000000005551342042221600233350ustar00rootroot00000000000000HOSTS: puppetdb1.vm: roles: - master - agent - dashboard - database vmname: el-6-x86_64-west platform: el-6-x86_64 amisize: c3.xlarge hypervisor: ec2 snapshot: foss CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3cb28658 subnet_ids: - subnet-8990e3ff - subnet-11d78c75 - subnet-afaa18f7 puppetdb-6.2.0/acceptance/config/ec2-west-el6-64mda.cfg000064400000000000000000000005541342042221600224020ustar00rootroot00000000000000HOSTS: puppetdb1.vm: roles: - master - agent - dashboard - database vmname: el-6-x86_64-west platform: el-6-x86_64 amisize: c3.large hypervisor: ec2 snapshot: foss CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3cb28658 subnet_ids: - subnet-8990e3ff - subnet-11d78c75 - subnet-afaa18f7 puppetdb-6.2.0/acceptance/config/ec2-west-el7-64mda-el7-64a.cfg000064400000000000000000000010001342042221600234430ustar00rootroot00000000000000HOSTS: el7-64-1: roles: - master - database - dashboard - agent vmname: el-7-x86_64-west platform: el-7-x86_64 amisize: c3.large hypervisor: ec2 snapshot: foss el7-64-2: roles: - agent vmname: el-7-x86_64-west platform: el-7-x86_64 amisize: c3.large hypervisor: ec2 snapshot: foss CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3cb28658 subnet_ids: - subnet-8990e3ff - subnet-11d78c75 - subnet-afaa18f7 puppetdb-6.2.0/acceptance/config/ec2-west-f20-64mda-f20-64a.cfg000064400000000000000000000010331342042221600232510ustar00rootroot00000000000000HOSTS: fedora-20-1: roles: - master - agent - dashboard - database vmname: fedora-20-x86_64-west platform: fedora-20-x86_64 amisize: c3.large hypervisor: ec2 snapshot: foss fedora-20-2: roles: - agent vmname: 
fedora-20-x86_64-west platform: fedora-20-x86_64 amisize: c3.large hypervisor: ec2 snapshot: foss CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3cb28658 subnet_ids: - subnet-8990e3ff - subnet-11d78c75 - subnet-afaa18f7 puppetdb-6.2.0/acceptance/config/ec2-west-ubuntu1204-64mda-64a.cfg000064400000000000000000000010541342042221600241310ustar00rootroot00000000000000HOSTS: ubuntu-1204-64-1: roles: - master - database - dashboard - agent vmname: ubuntu-12.04-amd64-west platform: ubuntu-12.04-amd64 amisize: c3.large hypervisor: ec2 snapshot: foss ubuntu-1204-64-2: roles: - agent vmname: ubuntu-12.04-amd64-west platform: ubuntu-12.04-amd64 amisize: c3.large hypervisor: ec2 snapshot: foss CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3cb28658 subnet_ids: - subnet-8990e3ff - subnet-11d78c75 - subnet-afaa18f7 puppetdb-6.2.0/acceptance/config/ec2-west-ubuntu1404-64mda-64a.cfg000064400000000000000000000010541342042221600241330ustar00rootroot00000000000000HOSTS: ubuntu-1404-64-1: roles: - master - database - dashboard - agent vmname: ubuntu-14.04-amd64-west platform: ubuntu-14.04-amd64 amisize: c3.large hypervisor: ec2 snapshot: foss ubuntu-1404-64-2: roles: - agent vmname: ubuntu-14.04-amd64-west platform: ubuntu-14.04-amd64 amisize: c3.large hypervisor: ec2 snapshot: foss CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3cb28658 subnet_ids: - subnet-8990e3ff - subnet-11d78c75 - subnet-afaa18f7 puppetdb-6.2.0/acceptance/config/ec2-west-ubuntu1510-64mda-64a.cfg000064400000000000000000000010541342042221600241310ustar00rootroot00000000000000HOSTS: ubuntu-1510-64-1: roles: - master - database - dashboard - agent vmname: ubuntu-15.10-amd64-west platform: ubuntu-15.10-amd64 amisize: c3.large hypervisor: ec2 snapshot: foss ubuntu-1510-64-2: roles: - agent vmname: ubuntu-15.10-amd64-west platform: ubuntu-15.10-amd64 amisize: c3.large hypervisor: ec2 snapshot: foss CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3cb28658 subnet_ids: - subnet-8990e3ff - 
subnet-11d78c75 - subnet-afaa18f7 puppetdb-6.2.0/acceptance/config/ec2-west-ubuntu1604-64mda-64a.cfg000064400000000000000000000010541342042221600241350ustar00rootroot00000000000000HOSTS: ubuntu-1604-64-1: roles: - master - database - dashboard - agent vmname: ubuntu-16.04-amd64-west platform: ubuntu-16.04-amd64 amisize: c4.large hypervisor: ec2 snapshot: foss ubuntu-1604-64-2: roles: - agent vmname: ubuntu-16.04-amd64-west platform: ubuntu-16.04-amd64 amisize: c4.large hypervisor: ec2 snapshot: foss CONFIG: nfs_server: none consoleport: 443 vpc_id: vpc-3cb28658 subnet_ids: - subnet-8990e3ff - subnet-11d78c75 - subnet-afaa18f7 puppetdb-6.2.0/acceptance/config/vbox-debian7-64mda-2xdb.cfg000064400000000000000000000010621342042221600234020ustar00rootroot00000000000000HOSTS: debian-7-amd64-1: roles: - master - agent - dashboard - database platform: debian-7-amd64 hypervisor: vagrant box: debian-70rc1-x64-vbox4210-nocm box_url: http://puppet-vagrant-boxes.puppetlabs.com/debian-70rc1-x64-vbox4210-nocm.box debian-7-amd64-2: roles: - database platform: debian-7-amd64 hypervisor: vagrant box: debian-70rc1-x64-vbox4210-nocm box_url: http://puppet-vagrant-boxes.puppetlabs.com/debian-70rc1-x64-vbox4210-nocm.box CONFIG: nfs_server: none consoleport: 443 puppetdb-6.2.0/acceptance/config/vbox-debian7-64mda.cfg000064400000000000000000000005161342042221600225500ustar00rootroot00000000000000HOSTS: debian-7-amd64.vm: roles: - master - agent - dashboard - database platform: debian-7-amd64 hypervisor: vagrant box: debian-70rc1-x64-vbox4210-nocm box_url: http://puppet-vagrant-boxes.puppetlabs.com/debian-70rc1-x64-vbox4210-nocm.box CONFIG: nfs_server: none consoleport: 443 puppetdb-6.2.0/acceptance/config/vbox-el5-64mda-el5-64a.cfg000064400000000000000000000010421342042221600227720ustar00rootroot00000000000000HOSTS: el5-64-1.vm: roles: - master - agent - dashboard - database platform: el-5-x86_64 hypervisor: vagrant box: centos-510-x64-virtualbox-nocm box_url: 
http://puppet-vagrant-boxes.puppetlabs.com/centos-510-x64-virtualbox-nocm.box el5-64-2.vm: roles: - agent platform: el-5-x86_64 hypervisor: vagrant box: centos-510-x64-virtualbox-nocm box_url: http://puppet-vagrant-boxes.puppetlabs.com/centos-510-x64-virtualbox-nocm.box CONFIG: nfs_server: none consoleport: 443 puppetdb-6.2.0/acceptance/config/vbox-el6-64mda-el5-64a-ubuntu1204-64a.cfg000064400000000000000000000014151342042221600252160ustar00rootroot00000000000000HOSTS: el6-64-1.vm: roles: - master - agent - dashboard - database platform: el-6-x86_64 hypervisor: vagrant box: centos-64-x64-vbox4210-nocm box_url: http://puppet-vagrant-boxes.puppetlabs.com/centos-64-x64-vbox4210-nocm.box el5-64-1.vm: roles: - agent platform: el-5-x86_64 hypervisor: vagrant box: centos-59-x64-vbox4210-nocm box_url: http://puppet-vagrant-boxes.puppetlabs.com/centos-59-x64-vbox4210-nocm.box ubuntu-1204-1.vm: roles: - agent platform: ubuntu-12.04-amd64 hypervisor: vagrant box: ubuntu-server-12042-x64-vbox4210-nocm box_url: http://puppet-vagrant-boxes.puppetlabs.com/ubuntu-server-12042-x64-vbox4210-nocm.box CONFIG: nfs_server: none consoleport: 443 puppetdb-6.2.0/acceptance/config/vbox-el6-64mda-el6-64a.cfg000064400000000000000000000010261342042221600227760ustar00rootroot00000000000000HOSTS: el6-64-1.vm: roles: - master - agent - dashboard - database platform: el-6-x86_64 hypervisor: vagrant box: centos-64-x64-vbox4210-nocm box_url: http://puppet-vagrant-boxes.puppetlabs.com/centos-64-x64-vbox4210-nocm.box el6-64-2.vm: roles: - agent platform: el-6-x86_64 hypervisor: vagrant box: centos-64-x64-vbox4210-nocm box_url: http://puppet-vagrant-boxes.puppetlabs.com/centos-64-x64-vbox4210-nocm.box CONFIG: nfs_server: none consoleport: 443 puppetdb-6.2.0/acceptance/config/vbox-el6-64mda.cfg000064400000000000000000000004751342042221600217310ustar00rootroot00000000000000HOSTS: el6-64.vm: roles: - master - agent - dashboard - database platform: el-6-x86_64 hypervisor: vagrant box: 
centos-64-x64-vbox4210-nocm box_url: http://puppet-vagrant-boxes.puppetlabs.com/centos-64-x64-vbox4210-nocm.box CONFIG: nfs_server: none consoleport: 443 puppetdb-6.2.0/acceptance/config/vbox-ubuntu1204-64mda-64a.cfg000064400000000000000000000011261342042221600234560ustar00rootroot00000000000000HOSTS: ubuntu-1204-1.vm: roles: - master - agent - dashboard - database platform: ubuntu-12.04-amd64 hypervisor: vagrant box: ubuntu-server-12042-x64-vbox4210-nocm box_url: http://puppet-vagrant-boxes.puppetlabs.com/ubuntu-server-12042-x64-vbox4210-nocm.box ubuntu-1204-2.vm: roles: - agent platform: ubuntu-12.04-amd64 hypervisor: vagrant box: ubuntu-server-12042-x64-vbox4210-nocm box_url: http://puppet-vagrant-boxes.puppetlabs.com/ubuntu-server-12042-x64-vbox4210-nocm.box CONFIG: nfs_server: none consoleport: 443 puppetdb-6.2.0/acceptance/config/vmpooler.cfg000064400000000000000000000005231342042221600212130ustar00rootroot00000000000000HOSTS: debian7-64-1: pe_dir: pe_ver: pe_upgrade_dir: pe_upgrade_ver: hypervisor: vmpooler platform: debian-7-amd64 template: debian-7-x86_64 roles: - agent - master - database - dashboard CONFIG: nfs_server: none consoleport: 443 pooling_api: http://vmpooler.delivery.puppetlabs.net/puppetdb-6.2.0/acceptance/helper.rb000064400000000000000000001052521342042221600172330ustar00rootroot00000000000000require 'cgi' require 'beaker/dsl/install_utils' require 'beaker/dsl/helpers' require 'beaker-puppet' require 'open3' require 'pp' require 'set' require 'test/unit/assertions' require 'json' require 'inifile' module PuppetDBExtensions include Test::Unit::Assertions GitReposDir = Beaker::DSL::InstallUtils::FOSSUtils::SourcePath # We include the Puppet path here so we can use the Puppet ecosystem when # using rake, which needs the facter gem to work - AIO ruby contains this # by default.
LeinCommandPrefix = "cd #{GitReposDir}/puppetdb; LEIN_ROOT=true LEIN_SNAPSHOTS_IN_RELEASE=true PATH=/opt/puppetlabs/puppet/bin:$PATH" def self.initialize_test_config(options, os_families) base_dir = File.join(File.dirname(__FILE__), '..') install_type = get_option_value(options[:puppetdb_install_type], [:git, :package, :pe], "install type", "PUPPETDB_INSTALL_TYPE", :git) install_mode = get_option_value(options[:install_type], [:install, :upgrade_latest, :upgrade_oldest], "install type", "INSTALL_TYPE", :install) validate_package_version = get_option_value(options[:puppetdb_validate_package_version], [:true, :false], "'validate package version'", "PUPPETDB_VALIDATE_PACKAGE_VERSION", :true) expected_rpm_version = get_option_value(options[:puppetdb_expected_rpm_version], nil, "'expected RPM package version'", "PUPPETDB_EXPECTED_RPM_VERSION", nil) expected_deb_version = get_option_value(options[:puppetdb_expected_deb_version], nil, "'expected DEB package version'", "PUPPETDB_EXPECTED_DEB_VERSION", nil) use_proxies = get_option_value(options[:puppetdb_use_proxies], [:true, :false], "'use proxies'", "PUPPETDB_USE_PROXIES", :false) purge_after_run = get_option_value(options[:puppetdb_purge_after_run], [:true, :false], "'purge packages and perform exhaustive cleanup after run'", "PUPPETDB_PURGE_AFTER_RUN", :false) skip_presuite_provisioning = get_option_value(options[:puppetdb_skip_presuite_provisioning], [:true, :false], "'skip installation steps", "PUPPETDB_SKIP_PRESUITE_PROVISIONING", :false) package_build_host = get_option_value(options[:puppetdb_package_build_host], nil, "'hostname for package build output'", "PUPPETDB_PACKAGE_BUILD_HOST", "builds.delivery.puppetlabs.net") package_repo_host = get_option_value(options[:puppetdb_package_repo_host], nil, "'hostname for yum/apt repos'", "PUPPETDB_PACKAGE_REPO_HOST", "builds.delivery.puppetlabs.net") package_repo_url = get_option_value(options[:puppetdb_package_repo_url], nil, "'base URL for yum/apt repos'", 
"PUPPETDB_PACKAGE_REPO_URL", "http://#{package_repo_host}/dev/puppetdb/master") package_build_version = get_option_value(options[:puppetdb_package_build_version], nil, "'package build version'", "PUPPETDB_PACKAGE_BUILD_VERSION", nil) puppetdb_repo_puppet = get_option_value(options[:puppetdb_repo_puppet], nil, "git repo for puppet source installs", "PUPPETDB_REPO_PUPPET", nil) puppetdb_repo_hiera = get_option_value(options[:puppetdb_repo_hiera], nil, "git repo for hiera source installs", "PUPPETDB_REPO_HIERA", nil) puppetdb_repo_facter = get_option_value(options[:puppetdb_repo_facter], nil, "git repo for facter source installs", "PUPPETDB_REPO_FACTER", nil) puppetdb_git_ref = get_option_value(options[:puppetdb_git_ref], nil, "git revision of puppetdb to test against", "REF", nil) nightly = get_option_value(options[:nightly], [:true, :false], "source puppet-agent and puppetserver from nightly builds", "NIGHTLY", :false) @config = { :base_dir => base_dir, :acceptance_data_dir => File.join(base_dir, "acceptance", "data"), :os_families => os_families, :install_type => install_type, :install_mode => install_mode, :validate_package_version => validate_package_version == :true, :expected_rpm_version => expected_rpm_version, :expected_deb_version => expected_deb_version, :use_proxies => use_proxies == :true, :purge_after_run => purge_after_run == :true, :package_build_host => package_build_host, :package_repo_host => package_repo_host, :package_repo_url => package_repo_url, :package_build_version => package_build_version, :repo_puppet => puppetdb_repo_puppet, :repo_hiera => puppetdb_repo_hiera, :repo_facter => puppetdb_repo_facter, :git_ref => puppetdb_git_ref, :skip_presuite_provisioning => skip_presuite_provisioning == :true, :nightly => nightly == :true } pp_config = PP.pp(@config, "") Beaker::Log.notify "PuppetDB Acceptance Configuration:\n\n#{pp_config}\n\n" end class << self attr_reader :config end def self.get_option_value(value, legal_values, description, 
env_var_name = nil, default_value = nil) # we give precedence to any value explicitly specified in an options file, # but we also allow environment variables to be used for # puppetdb-specific settings value = (value || (env_var_name && ENV[env_var_name]) || default_value) if value value = value.to_sym end unless legal_values.nil? or legal_values.include?(value) raise ArgumentError, "Unsupported #{description} '#{value}'" end value end # Return the configuration hash initialized at the start with # initialize_test_config # # @return [Hash] configuration hash def test_config PuppetDBExtensions.config end # Return the fact set for a given hostname # # Relies on populate_facts to be ran first. # # @param host [String] hostname to retrieve facts for # @return [Hash] facts hash def facts(host) test_config[:facts][host] end # Populate the facts storage area of test_config # # @return [void] def populate_facts fact_data = hosts.inject({}) do |result, host| facts_raw = on host, "facter -y" facts = YAML.load(facts_raw.stdout) result[host.name] = facts result end test_config[:facts] = fact_data nil end def get_os_family(host) on(host, "which yum", :silent => true) if result.exit_code == 0 on(host, "ls /etc/fedora-release", :silent => true) if result.exit_code == 2 :redhat else :fedora end else :debian end end def aio_pathing_exists?(host) result = on host, %Q|if [ -e /etc/puppetlabs/puppetdb ]; then echo aio; fi| result.stdout.strip == "aio" end def puppetdb_confdir(host, legacy=false) if aio_pathing_exists?(host) "/etc/puppetlabs/puppetdb" else "/etc/puppetdb" end end def puppetdb_bin_dir(host) "/opt/puppetlabs/server/bin" end def puppetdb_log_dir(host) "/var/log/puppetlabs/puppetdb" end def puppetdb_pids(host) java_bin = "java" jar_file = "puppetdb.jar" result = on host, %Q(ps -ef | grep "#{java_bin}" | grep "#{jar_file}" | grep " services -c " | awk '{print $2}') pids = result.stdout.chomp.split("\n") Beaker::Log.notify "PuppetDB PIDs appear to be: '#{pids}'" pids end def 
start_puppetdb(host) step "Starting PuppetDB" do on host, "service puppetdb start" sleep_until_started(host) end end # Display the last few log lines for debugging purposes # # @param host Hostname to display results for # @return [void] # @api private def display_last_logs(host) on host, "tail -n 100 #{puppetdb_log_dir(host)}/puppetdb-daemon.log", :acceptable_exit_codes => [0,1] on host, "tail -n 100 #{puppetdb_log_dir(host)}/puppetdb.log", :acceptable_exit_codes => [0,1] end # Sleep until PuppetDB is completely started # # @param host Hostname to test for PuppetDB availability # @return [void] # @api public def sleep_until_started(host) # Hit an actual endpoint to ensure PuppetDB is up and not just the webserver. # Retry until an HTTP response code of 200 is received. test_route = aio_pathing_exists?(host) ? "pdb/meta/v1/version" : "v4/version" curl_with_retries("start puppetdb", host, "-s -w '%{http_code}' http://localhost:8080/#{test_route} -o /dev/null", 0, 120, 1, /200/) curl_with_retries("start puppetdb (ssl)", host, "https://#{host.node_name}:8081/#{test_route}", [35, 60]) rescue RuntimeError => e display_last_logs(host) raise end def is_bionic() return test_config[:os_families].has_key? 'ubuntu1804-64-1' end def oldest_supported # account for special case where bionic doesn't have builds before 5.2.4 return is_bionic ? '5.2.4' : '5.1.1' end def get_testing_branch(version) branch_name = /^((?:\d+\.)*)\d+/.match(version)[1] + 'x' if branch_name.chars.first.to_i > 5 branch_name = 'master' end return branch_name end def get_latest_released(version) cloned = system('git clone https://github.com/puppetlabs/puppetdb.git') if cloned.nil? raise 'error cloning puppetdb repo' end branch_name = get_testing_branch(version) stdout, status = Open3.capture2('git', '-C', 'puppetdb', 'describe', '--tags', '--abbrev=0', "origin/#{branch_name}") if status.exitstatus != 0 raise "error getting most recent tagged release. 
status: #{status}" end return stdout.delete!("\n") end def get_package_version(host, version = nil) if version == 'latest' return 'latest' elsif version.nil? version = PuppetDBExtensions.config[:package_build_version].to_s # If no version was defined, default to latest. if version == '' return 'latest' end end # version can look like: # 3.0.0 # 3.0.0.SNAPSHOT.2015.07.08T0945 # Rewrite version if it's a SNAPSHOT in rc form if version.include?("SNAPSHOT") version = version.sub(/^(.*)\.(SNAPSHOT\..*)$/, "\\1-0.1\\2") else version = version + "-1" end ## These 'platform' values come from the acceptance config files, so ## we're relying entirely on naming conventions here. Would be nicer ## to do this using lsb_release or something, but... if host['platform'].include?('el-5') "#{version}.el5" elsif host['platform'].include?('el-6') "#{version}.el6" elsif host['platform'].include?('el-7') "#{version}.el7" elsif host['platform'].include?('fedora') version_tag = host['platform'].match(/^fedora-(\d+)/)[1] "#{version}.fc#{version_tag}" elsif host['platform'].include?('ubuntu-16.04') "#{version}xenial" elsif host['platform'].include?('ubuntu-18.04') "#{version}bionic" elsif host['platform'].include?('debian-8') "#{version}jessie" elsif host['platform'].include?('debian-9') "#{version}stretch" else raise ArgumentError, "Unsupported platform: '#{host['platform']}'" end end def install_puppetdb(host, version=nil) manifest = <<-EOS class { 'puppetdb::globals': version => '#{get_package_version(host, version)}' } class { 'puppetdb::server': manage_firewall => false, disable_update_checking => true, } #{postgres_manifest} Postgresql::Server::Db['puppetdb'] -> Class['puppetdb::server'] EOS apply_manifest_on(host, manifest) print_ini_files(host) sleep_until_started(host) end def validate_package_version(host) step "Verifying package version" do os = PuppetDBExtensions.config[:os_families][host.name] installed_version = case os when :debian result = on host, "dpkg-query --showformat 
\"\\${Version}\" --show puppetdb" result.stdout.strip when :redhat, :fedora result = on host, "rpm -q puppetdb --queryformat \"%{VERSION}-%{RELEASE}\"" result.stdout.strip else raise ArgumentError, "Unsupported OS family: '#{os}'" end expected_version = get_package_version(host) Beaker::Log.notify "Expecting package version: '#{expected_version}', actual version: '#{installed_version}'" if installed_version != expected_version and expected_version != 'latest' raise RuntimeError, "Installed version '#{installed_version}' did not match expected version '#{expected_version}'" end end end def install_puppetdb_termini(host, databases, version=nil) # We pass 'restart_puppet' => false to prevent the module from trying to # manage the puppet master service, which isn't actually installed on the # acceptance nodes (they run puppet master from the CLI). server_urls = databases.map {|db| "https://#{db.node_name}:8081"}.join(',') manifest = <<-EOS class { 'puppetdb::globals': version => '#{get_package_version(host, version)}' } class { 'puppetdb::master::config': puppetdb_startup_timeout => 120, manage_report_processor => true, enable_reports => true, strict_validation => true, restart_puppet => false, } ini_setting {'server_urls': ensure => present, section => 'main', path => "${puppetdb::params::puppet_confdir}/puppetdb.conf", setting => 'server_urls', value => '#{server_urls}', } EOS apply_manifest_on(host, manifest) end def print_ini_files(host) confdir = "#{puppetdb_confdir(host)}/conf.d" step "Print out jetty.ini for posterity" do on host, "cat #{confdir}/jetty.ini" end step "Print out database.ini for posterity" do on host, "cat #{confdir}/database.ini" end end def current_time_on(host) result = on host, %Q|date --rfc-2822| CGI.escape(Time.rfc2822(result.stdout).iso8601) end def enable_https_apt_sources(host) manifest = <<-EOS if ($facts['osfamily'] == 'Debian') { package { 'apt-transport-https': ensure => installed } } EOS apply_manifest_on(host, manifest) end def 
postgres_manifest manifest = <<-EOS # get the pg server up and running class { 'postgresql::globals': manage_package_repo => true, version => '9.6', }-> class { '::postgresql::server': ip_mask_allow_all_users => '0.0.0.0/0', listen_addresses => 'localhost', } # create the puppetdb database postgresql::server::db { 'puppetdb': user => 'puppetdb', password => 'puppetdb', grant => 'all', } EOS manifest end def install_postgres(host) Beaker::Log.notify "Installing postgres on #{host}" apply_manifest_on(host, postgres_manifest) end # Restart postgresql using Puppet, by notifying the postgresql::server::service # class, which should cause the service to restart. def restart_postgres(host) manifest = <<-EOS notify { 'restarting postgresql': }~> Class['postgresql::server::service'] #{postgres_manifest} EOS apply_manifest_on(host, manifest) end # Prepare host's global git configuration so git tag will just work # later on. # # @param host [String] host to execute on # @return [void] # @api private def prepare_git_author(host) on host, 'git config --global user.email "beaker@localhost"' on host, 'git config --global user.name "Beaker"' end # Install puppetdb using `rake install` methodology, which works # without packages today. 
# # @param host [String] host to execute on # @return [void] # @api public def install_puppetdb_via_rake(host) # Uncomment for pinning against a particular ezbake revision #ezbake_dev_build("git@github.com:kbarber/ezbake.git", "pdb-1455") install_from_ezbake host step "Configure database.ini file" do manifest = " class { 'puppetdb::server::database_ini': }" apply_manifest_on(host, manifest) end print_ini_files(host) end def install_puppetdb_termini_via_rake(host, databases) # Uncomment for pinning against a particular ezbake revision #ezbake_dev_build("git@github.com:kbarber/ezbake.git", "pdb-1455") install_termini_from_ezbake host server_urls = databases.map {|db| "https://#{db.node_name}:8081"}.join(',') manifest = <<-EOS class { 'puppetdb::globals': version => '#{get_package_version(host)}' } include puppetdb::master::routes include puppetdb::master::storeconfigs class { 'puppetdb::master::report_processor': enable => true, } ini_setting {'server_urls': ensure => present, section => 'main', path => "${puppetdb::params::puppet_confdir}/puppetdb.conf", setting => 'server_urls', value => '#{server_urls}', } EOS apply_manifest_on(host, manifest) end def stop_puppetdb(host) pids = puppetdb_pids(host) on host, "service puppetdb stop" sleep_until_stopped(host, pids) end def sleep_until_stopped(host, pids) curl_with_retries("stop puppetdb", host, "http://localhost:8080", 7) stopped = false while (! stopped) Beaker::Log.notify("Waiting for pids #{pids} to exit") exit_codes = pids.map do |pid| result = on host, %Q(ps -p #{pid}), :acceptable_exit_codes => [0, 1] result.exit_code end stopped = exit_codes.all? 
{ |x| x == 1} end end def restart_puppetdb(host) stop_puppetdb(host) start_puppetdb(host) end def clear_and_restart_puppetdb(host) stop_puppetdb(host) clear_database(host) start_puppetdb(host) end def sleep_until_queue_empty(host, timeout=60) metric = "puppetlabs.puppetdb.mq:name=global.depth" queue_size = nil begin Timeout.timeout(timeout) do until queue_size == 0 result = on host, %Q(curl http://localhost:8080/metrics/v1/mbeans/#{CGI.escape(metric)} 2> /dev/null | python -c "import sys, json; print json.load(sys.stdin)['Count']") queue_size = Integer(result.stdout.chomp) end end rescue Timeout::Error => e raise "Queue took longer than allowed #{timeout} seconds to empty" end end # Queries the metrics endpoint for command processing results, returns a hash # of results. def command_processing_stats(host, counter = "processed") metric = "puppetlabs.puppetdb.command:type=global,name=discarded" result = on host, %Q(curl http://localhost:8080/metrics/v1/mbeans/#{CGI.escape(metric)}) begin JSON.parse(result.stdout) rescue JSON::ParserError => e Beaker::Log.notify "Invalid JSON response: #{result.stdout}" raise e end end def apply_manifest_on(host, manifest_content) manifest_path = host.tmpfile("puppetdb_manifest.pp") create_remote_file(host, manifest_path, manifest_content) Beaker::Log.notify "Applying manifest on #{host}:\n\n#{manifest_content}" on host, puppet_apply("--detailed-exitcodes #{manifest_path}"), :acceptable_exit_codes => [0,2] end def modify_config_setting(host, config_file_name, section, setting, value) manifest_content = <<EOS
ini_setting { 'puppetdb-config-#{setting}':
  ensure  => present,
  path    => '#{puppetdb_confdir(host)}/conf.d/#{config_file_name}',
  section => '#{section}',
  setting => '#{setting}',
  value   => '#{value}'
}
EOS
apply_manifest_on(host, manifest_content) end # Keep curling until the required condition is met # # The condition can be a desired exit code and/or expected output; the curl # command keeps executing until one of these conditions is met # or max_retries is reached.
# # @param desc [String] descriptive name for this cycling # @param host [String] host to execute retry on # @param curl_args [String] the curl_args to use for testing # @param desired_exit_codes [Number,Array] a desired exit code, or array of exit codes # @param max_retries [Number] maximum number of retries before failing # @param retry_interval [Number] time in secs to wait before next try # @param expected_output [Regexp] a regexp to use for matching the output # @return [void] def curl_with_retries(desc, host, curl_args, desired_exit_codes, max_retries = 60, retry_interval = 1, expected_output = /.*/) command = "curl --tlsv1 #{curl_args}" log_prefix = host.log_prefix logger.debug "\n#{log_prefix} #{Time.new.strftime('%H:%M:%S')}$ #{command}" logger.debug " Trying command #{max_retries} times." logger.debug ".", add_newline=false desired_exit_codes = [desired_exit_codes].flatten result = on host, command, :acceptable_exit_codes => (0...127), :silent => true num_retries = 0 until desired_exit_codes.include?(result.exit_code) and (result.stdout =~ expected_output) sleep retry_interval result = on host, command, :acceptable_exit_codes => (0...127), :silent => true num_retries += 1 logger.debug ".", add_newline=false if (num_retries > max_retries) logger.debug " Command \`#{command}\` failed." fail("Command \`#{command}\` failed. Unable to #{desc}.") end end logger.debug "\n#{log_prefix} #{Time.new.strftime('%H:%M:%S')}$ #{command} ostensibly successful." end def clear_database(host) on host, 'su postgres -c "dropdb puppetdb"' install_postgres(host) end ############################################################################ # NOTE: This code was merged into beaker, however it does not work as desired. # We need to get the version in beaker working as expected and then we can # remove this version.
# # Temp copy of Justin's new Puppet Master Methods ############################################################################ # Restore puppet.conf from backup, if puppet.conf.bak exists. # # @api private def restore_puppet_conf host confdir = host.puppet['confdir'] on host, "if [ -f #{confdir}/puppet.conf.bak ]; then " + "cat #{confdir}/puppet.conf.bak > " + "#{confdir}/puppet.conf; " + "rm -rf #{confdir}/puppet.conf.bak; " + "fi" end ############################################################################## # END OF temp copy of Justin's new Puppet Master Methods ############################################################################## def parse_json_with_error(input) begin facts = JSON.parse(input) rescue Exception => e facts = "#{e.message} on input '#{input}'" end return facts end ############################################################################## # Object diff functions ############################################################################## # This is horrible and really doesn't belong here, but I'm not sure where # else to put it. I need a way to do a recursive diff of a hash (which may # contain nested objects whose type can be any of Hash, Array, Set, or a # scalar). The hashes may be absolutely gigantic, so if they don't match, # I need a way to be able to show a small enough diff so that the user can # actually figure out what's going wrong (rather than dumping out the entire # gigantic string). I searched for gems that could handle this and tried # 4 or 5 different things, and couldn't find anything that suited the task, # so I had to write my own. This could use improvement, relocation, or # replacement with a gem if we ever find a suitable one. # # UPDATE: chatted with Justin about this and he suggests creating a special # puppetlabs-test-utils repo or similar and have that pulled down via # bundler, once the acceptance harness is accessible as a gem. You know, # in "The Future".
# JSON gem doesn't have native support for Set objects, so we have to # add this hack. class ::Set def to_json(arg) to_a.to_json(arg) end end def hash_diff(obj1, obj2) result = (obj1.keys | obj2.keys).inject({}) do |diff, k| if obj1[k] != obj2[k] objdiff = object_diff(obj1[k], obj2[k]) if (objdiff) diff[k] = objdiff end end diff end (result == {}) ? nil : result end def array_diff(arr1, arr2) (0..([arr1.length, arr2.length].max)).inject([]) do |diff, i| objdiff = object_diff(arr1[i], arr2[i]) if (objdiff) diff << objdiff end diff end end def set_diff(set1, set2) diff1 = set1 - set2 diff2 = set2 - set1 unless (diff1.empty? and diff2.empty?) [diff1, diff2] end end def object_diff(obj1, obj2) if (obj1.class != obj2.class) [obj1, obj2] else case obj1 when Hash hash_diff(obj1, obj2) when Array array_diff(obj1, obj2) when Set set_diff(obj1, obj2) else (obj1 == obj2) ? nil : [obj1, obj2] end end end ############################################################################## # End Object diff functions ############################################################################## def initialize_repo_on_host(host, os, nightly) case os when :debian # For openjdk8 if host['platform'].version == '8' create_remote_file(host, "/etc/apt/sources.list.d/jessie-backports.list", "deb http://httpredir.debian.org/debian jessie-backports main") end if options[:type] == 'aio' then if nightly ## puppet6 repos on host, "curl -O http://nightlies.puppetlabs.com/apt/puppet6-nightly-release-$(lsb_release -sc).deb" on host, "dpkg -i puppet6-nightly-release-$(lsb_release -sc).deb" else on host, "curl -O http://apt.puppetlabs.com/puppet-release-$(lsb_release -sc).deb" on host, "dpkg -i puppet-release-$(lsb_release -sc).deb" end else on host, "curl -O http://apt.puppetlabs.com/puppetlabs-release-$(lsb_release -sc).deb" on host, "dpkg -i puppetlabs-release-$(lsb_release -sc).deb" end on host, "apt-get update" on host, "apt-get install debian-archive-keyring" when :redhat if options[:type] == 
'aio' then /^(el|centos)-(\d+)-(.+)$/.match(host.platform) variant = ($1 == 'centos') ? 'el' : $1 version = $2 arch = $3 if nightly ## puppet6 repos on host, "curl -O http://yum.puppetlabs.com/puppet6-nightly/puppet6-nightly-release-#{variant}-#{version}.noarch.rpm" on host, "rpm -i puppet6-nightly-release-#{variant}-#{version}.noarch.rpm" else on host, "curl -O http://yum.puppetlabs.com/puppet/puppet-release-#{variant}-#{version}.noarch.rpm" on host, "rpm -i puppet-release-#{variant}-#{version}.noarch.rpm" end else on host, "yum clean all -y" on host, "yum upgrade -y" create_remote_file host, '/etc/yum.repos.d/puppetlabs-dependencies.repo', <<-REPO.gsub(' '*8, '') [puppetlabs-dependencies] name=Puppet Labs Dependencies - $basearch baseurl=http://yum.puppetlabs.com/el/$releasever/dependencies/$basearch gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs enabled=1 gpgcheck=1 REPO create_remote_file host, '/etc/yum.repos.d/puppetlabs-products.repo', <<-REPO.gsub(' '*8, '') [puppetlabs-products] name=Puppet Labs Products - $basearch baseurl=http://yum.puppetlabs.com/el/$releasever/products/$basearch gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs enabled=1 gpgcheck=1 REPO create_remote_file host, '/etc/yum.repos.d/epel.repo', <<-REPO [epel] name=Extra Packages for Enterprise Linux $releasever - $basearch baseurl=http://download.fedoraproject.org/pub/epel/$releasever/$basearch mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-$releasever&arch=$basearch failovermethod=priority enabled=1 gpgcheck=0 REPO end when :fedora create_remote_file host, '/etc/yum.repos.d/puppetlabs-dependencies.repo', <<-REPO.gsub(' '*8, '') [puppetlabs-dependencies] name=Puppet Labs Dependencies - $basearch baseurl=http://yum.puppetlabs.com/fedora/f$releasever/dependencies/$basearch gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs enabled=1 gpgcheck=1 REPO create_remote_file host, '/etc/yum.repos.d/puppetlabs-products.repo', <<-REPO.gsub(' '*8, '') 
[puppetlabs-products] name=Puppet Labs Products - $basearch baseurl=http://yum.puppetlabs.com/fedora/f$releasever/products/$basearch gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs enabled=1 gpgcheck=1 REPO else raise ArgumentError, "Unsupported OS '#{os}'" end end def install_puppet_from_package hosts.each do |host| install_puppet_agent_on(host, {:puppet_collection => "puppet6"}) on( host, puppet('resource', 'host', 'updates.puppetlabs.com', 'ensure=present', "ip=127.0.0.1") ) install_package(host, 'puppetserver') end end # This helper has been grabbed from beaker, and overridden with the opts # component so I can add 'refspec' functionality to allow a custom # refspec if required. # # Once this methodology is confirmed we should merge it back upstream. def install_from_git(host, path, repository, opts = {}) name = repository[:name] repo = repository[:path] rev = repository[:rev] target = "#{path}/#{name}" step "Clone #{repo} if needed" do on host, "test -d #{path} || mkdir -p #{path}" on host, "test -d #{target} || git clone #{repo} #{target}" end step "Update #{name} and check out revision #{rev}" do commands = ["cd #{target}", "remote rm origin", "remote add origin #{repo}", "fetch origin #{opts[:refspec]}", "clean -fdx", "checkout -f #{rev}"] on host, commands.join(" && git ") end step "Install #{name} on the system" do # The solaris ruby IPS package has bindir set to /usr/ruby/1.8/bin. # However, this is not the path to which we want to deliver our # binaries. So if we are using solaris, we have to pass the bin and # sbin directories to the install.rb install_opts = '' install_opts = '--bindir=/usr/bin --sbindir=/usr/sbin' if host['platform'].include?
'solaris' on host, "cd #{target} && " + "if [ -f install.rb ]; then " + "ruby ./install.rb #{install_opts}; " + "else true; fi" end end def install_puppet_from_source os_families = test_config[:os_families] extend Beaker::DSL::InstallUtils source_path = Beaker::DSL::InstallUtils::SourcePath git_uri = Beaker::DSL::InstallUtils::GitURI github_sig = Beaker::DSL::InstallUtils::GitHubSig tmp_repositories = [] repos = Hash[*test_config.select {|k, v| k =~ /^repo_/}.flatten].values repos.each do |uri| raise(ArgumentError, "#{uri} is not recognized.") unless(uri =~ git_uri) tmp_repositories << extract_repo_info_from(uri) end repositories = order_packages(tmp_repositories) hosts.each_with_index do |host, index| os = os_families[host.name] case os when :redhat, :fedora install_package(host, 'ruby') install_package(host, 'git-core') when :debian install_package(host, 'ruby') install_package(host, 'git-core') else raise "OS #{os} not supported" end on host, "echo #{github_sig} >> $HOME/.ssh/known_hosts" repositories.each do |repository| step "Install #{repository[:name]}" install_from_git host, source_path, repository, :refspec => '+refs/pull/*:refs/remotes/origin/pr/*' end on host, "getent group puppet || groupadd puppet" on host, "getent passwd puppet || useradd puppet -g puppet -G puppet" on host, "mkdir -p /var/run/puppet" on host, "chown puppet:puppet /var/run/puppet" end end def install_puppet_conf hosts.each do |host| hostname = fact_on(host, "hostname") fqdn = fact_on(host, "fqdn") confdir = host.puppet['confdir'] on host, "mkdir -p #{confdir}" puppetconf = File.join(confdir, 'puppet.conf') pidfile = '/var/run/puppetlabs/puppetserver/puppetserver.pid' conf = IniFile.new conf['agent']['server'] = master conf['master']['pidfile'] = pidfile conf['master']['dns_alt_names'] = "puppet,#{hostname},#{fqdn},#{host.hostname}" create_remote_file(host, puppetconf, conf.to_s) end end def install_puppet hosts.each do |host| if host['platform'].variant == 'debian' &&
host['platform'].version == '8' on host, "apt-get install -y -t jessie-backports openjdk-8-jre-headless" end end # If our :install_type is :pe then the harness has already installed puppet. case test_config[:install_type] when :package install_puppet_from_package when :git if test_config[:repo_puppet] then install_puppet_from_source else install_puppet_from_package end end install_puppet_conf end def create_remote_site_pp(host, manifest) testdir = host.tmpdir("remote-site-pp") manifest_file = "#{testdir}/environments/production/manifests/site.pp" apply_manifest_on(host, <<-PP) File { ensure => directory, mode => "0750", owner => #{master.puppet['user']}, group => #{master.puppet['group']}, } file { '#{testdir}':; '#{testdir}/environments':; '#{testdir}/environments/production':; '#{testdir}/environments/production/manifests':; '#{testdir}/environments/production/modules':; } PP create_remote_file(host, manifest_file, manifest) remote_path = "#{testdir}/environments" on host, "chmod -R +rX #{testdir}" on host, "chown -R #{master.puppet['user']}:#{master.puppet['user']} #{testdir}" remote_path end def run_agents_with_new_site_pp(host, manifest, env_vars = {}, extra_cli_args = "") manifest_path = create_remote_site_pp(host, manifest) with_puppet_running_on host, { 'master' => { 'storeconfigs' => 'true', 'storeconfigs_backend' => 'puppetdb', 'autosign' => 'true', }, 'main' => { 'environmentpath' => manifest_path, }} do #only some of the opts work on puppet_agent, acceptable exit codes does not agents.each{ |agent| on agent, puppet_agent("--test --server #{host} #{extra_cli_args}", { 'ENV' => env_vars }), :acceptable_exit_codes => [0,2] } end end def databases extend Beaker::DSL::Roles hosts_as(:database).sort_by {|db| db.to_str} end def database # primary database must be numbered lowest databases[0] end end # oh dear. 
Beaker::TestCase.send(:include, PuppetDBExtensions) puppetdb-6.2.0/acceptance/options/000075500000000000000000000000001342042221600171155ustar00rootroot00000000000000puppetdb-6.2.0/acceptance/options/common.rb000064400000000000000000000004101342042221600207250ustar00rootroot00000000000000def common_options_hash() dir = File.expand_path(File.join(File.dirname(__FILE__), '..')) { :helper => [File.join(dir, 'helper.rb')], :pre_suite => [File.join(dir, 'setup', 'early'), File.join(dir, 'setup', 'pre_suite')] } end puppetdb-6.2.0/acceptance/options/postgres.rb000064400000000000000000000010231342042221600213040ustar00rootroot00000000000000# We are eval'd in the scope of the acceptance framework's option-parsing # code, so we can't use __FILE__ to find our location. We have access to # a variable 'options_file_path', though. require File.expand_path(File.join(File.dirname(options_file_path), 'common.rb')) common_options_hash.tap do |my_hash| my_hash[:is_puppetserver] = 'true' my_hash[:'use-service'] = 'true' my_hash[:'puppetserver-confdir'] = '/etc/puppetlabs/puppetserver/conf.d' my_hash[:puppetservice] = 'puppetserver' end puppetdb-6.2.0/acceptance/setup/000075500000000000000000000000001342042221600165625ustar00rootroot00000000000000puppetdb-6.2.0/acceptance/setup/early/000075500000000000000000000000001342042221600176765ustar00rootroot00000000000000puppetdb-6.2.0/acceptance/setup/early/00_remove_previous_config.rb000064400000000000000000000006071342042221600253030ustar00rootroot00000000000000 unless (options[:vmrun]) or (ENV['PUPPETDB_SKIP_PRESUITE_PROVISIONING']) step "Clean up configuration files on master" do on master, "rm -rf /etc/puppet/routes.yaml" end step "Clean up configuration files on puppetdb server" do on databases, "rm -rf /etc/puppetdb/ssl" end step "Remove old modules from master" do on master, "rm -rf /etc/puppet/modules/*" end end 
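The object-diff helpers defined in helper.rb above (object_diff, hash_diff, set_diff, array_diff) are plain Ruby and can be sanity-checked outside of Beaker. The sketch below is purely illustrative and not part of the repository: it copies slightly simplified versions of three of those helpers (array_diff omitted, and inject replaced with each_with_object) and runs them on small inputs to show the shape of diff they produce.

```ruby
require 'set'

# Simplified copies of the recursive diff helpers from acceptance/helper.rb,
# reproduced here so the example is self-contained.
def object_diff(obj1, obj2)
  if obj1.class != obj2.class
    [obj1, obj2]
  else
    case obj1
    when Hash then hash_diff(obj1, obj2)
    when Set  then set_diff(obj1, obj2)
    else (obj1 == obj2) ? nil : [obj1, obj2]
    end
  end
end

def hash_diff(obj1, obj2)
  # Collect a [old, new] pair for every key whose values differ; equal keys
  # produce nil from object_diff and are dropped.
  result = (obj1.keys | obj2.keys).each_with_object({}) do |k, diff|
    objdiff = object_diff(obj1[k], obj2[k])
    diff[k] = objdiff if objdiff
  end
  result.empty? ? nil : result
end

def set_diff(set1, set2)
  diff1 = set1 - set2
  diff2 = set2 - set1
  [diff1, diff2] unless diff1.empty? && diff2.empty?
end

# Matching structures diff to nil, so a gigantic pair of equal hashes
# produces no output at all.
puts object_diff({ "a" => 1 }, { "a" => 1 }).inspect  # prints nil
# Only the mismatching key ("b") is reported, as an [old, new] pair.
puts object_diff({ "a" => 1, "b" => 2 }, { "a" => 1, "b" => 3 }).inspect
# Sets diff to a pair of one-sided difference sets.
puts object_diff(Set.new([1, 2]), Set.new([2, 3])).inspect
```

This mirrors the design choice in the helper: nil means "no difference", so only the mismatching leaves of two huge nested structures ever reach the test failure message.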
puppetdb-6.2.0/acceptance/setup/post_suite/000075500000000000000000000000001342042221600207605ustar00rootroot00000000000000puppetdb-6.2.0/acceptance/setup/post_suite/01_validate_database.rb000064400000000000000000000046421342042221600252300ustar00rootroot00000000000000step "Verify we've been talking to the correct database" do # The goal here is just to try to make sure we've tested # what we think we've tested. e.g., if we intended to # run the tests against postgres, let's try to make # sure that they didn't accidentally get run against the # embedded db and vice-versa. (The tests could theoretically # all pass even if they were running against a different # database than we'd intended.) # If we're running w/postgres, we're going to use some hacky raw SQL # and shell magic to validate that the database schema version is the # same in the database as in the source code, and that the certnames # table contains exactly the list of hostnames as the hosts array. # If so, we'll assume that puppetdb was communicating with postgres # successfully during the test run. (These conditions were chosen # somewhat arbitrarily.) step "Validate database schema version" do db_migration_version = on(database, "PGPASSWORD=puppetdb psql -h localhost -U puppetdb -d puppetdb --tuples-only -c \"SELECT max(version) from schema_migrations\" |tr -d \" \" |grep -v \"^\s*$\"").stdout.strip # This is terrible; it's dependent on the order of some function definitions # in the code, and it's just generally moronic. We should provide a lein task # or some way of interrogating the latest expected schema version from # the command line so that we can get rid of this crap. 
source_migration_version = on(database, "java -cp /usr/share/puppetdb/puppetdb.jar clojure.main -m puppetlabs.puppetdb.core version |grep target_schema_version|cut -f2 -d'='").stdout.strip assert_equal(db_migration_version, source_migration_version, "Expected migration version from source code '#{source_migration_version}' " + "to match migration version found in postgres database '#{db_migration_version}'") end step "Validate the contents of the certname table" do db_hostnames_str = on(database, "PGPASSWORD=puppetdb psql -h localhost -U puppetdb -d puppetdb --tuples-only -c \"SELECT name from certnames\" |grep -v \"^\s*$\"").stdout.strip db_hostnames = Set.new(db_hostnames_str.split.map { |x| x.strip }) acceptance_hostnames = Set.new(hosts.map { |host| host.node_name }) assert_equal(acceptance_hostnames, db_hostnames, "Expected hostnames from certnames table to match the ones known to the acceptance harness") end end puppetdb-6.2.0/acceptance/setup/post_suite/10_collect_artifacts.rb000064400000000000000000000003541342042221600252740ustar00rootroot00000000000000 step "Create artifacts directory" do unless File.directory?('artifacts') Dir.mkdir("artifacts") end end step "Collect puppetdb log file" do scp_from(database, "/var/log/puppetlabs/puppetdb/puppetdb.log", "./artifacts") end puppetdb-6.2.0/acceptance/setup/post_suite/99_teardown.rb000064400000000000000000000014451342042221600234550ustar00rootroot00000000000000#!/usr/bin/env ruby test_config = PuppetDBExtensions.config def uninstall_package(host, os_families, pkg_name) os = os_families[host.name] case os when :debian on(host, "apt-get -f -y purge #{pkg_name} ") when :redhat, :fedora on(host, "yum -y remove #{pkg_name}") else raise ArgumentError, "Unsupported OS family: '#{os}'" end end step "Stop puppetdb" do stop_puppetdb(database) end if (test_config[:purge_after_run]) if (test_config[:install_type] == :package) step "Uninstall packages" do uninstall_package(database, test_config[:os_families], "puppetdb") 
uninstall_package(master, test_config[:os_families], "puppetdb-termini") hosts.each do |host| uninstall_package(host, test_config[:os_families], "puppet") end end end end puppetdb-6.2.0/acceptance/setup/pre_suite/000075500000000000000000000000001342042221600205615ustar00rootroot00000000000000puppetdb-6.2.0/acceptance/setup/pre_suite/00_setup_test_env.rb000064400000000000000000000003621342042221600244550ustar00rootroot00000000000000os_families = {} step "Determine host OS's" do os_families = hosts.inject({}) do |result, host| result[host.name] = get_os_family(host) result end end PuppetDBExtensions.initialize_test_config(options, os_families) puppetdb-6.2.0/acceptance/setup/pre_suite/05_clear_firewalls.rb000064400000000000000000000003361342042221600245520ustar00rootroot00000000000000unless (test_config[:skip_presuite_provisioning]) step "Flushing iptables chains" do hosts.each do |host| on host, "iptables -F INPUT -t filter" on host, "iptables -F FORWARD -t filter" end end end puppetdb-6.2.0/acceptance/setup/pre_suite/10_setup_proxies.rb000064400000000000000000000041771342042221600243300ustar00rootroot00000000000000PROXY_URL = "http://modi.puppetlabs.lan:3128" def setup_apt_proxy() step "Configure apt to use local http proxy" do apt_conf_file_path = "/etc/apt/apt.conf.d/99apt-http-proxy" apt_conf_file_content = <<-EOS // Configure apt to use a local http proxy Acquire::http::Proxy "#{PROXY_URL}"; EOS create_remote_file(database, apt_conf_file_path, apt_conf_file_content) end end def setup_yum_proxy() step "Configure yum to use local http proxy" do existing_yumconf = on(database, "cat /etc/yum.conf").stdout new_yumconf_lines = [] existing_yumconf.each_line do |line| # filter out existing proxy line if there is one. 
unless line =~ /^\s*proxy\s*=/ new_yumconf_lines << line end end new_yumconf_lines << "proxy=#{PROXY_URL}\n" on(database, "mv /etc/yum.conf /etc/yum.conf.bak-puppet_acceptance") create_remote_file(database, "/etc/yum.conf", new_yumconf_lines.join) end end def setup_maven_proxy() on(database, "mkdir -p /root/.m2") m2_settings_path = "/root/.m2/settings.xml" m2_settings_content = <<-EOS <settings> <proxies> <proxy> <active>true</active> <protocol>http</protocol> <host>modi.puppetlabs.lan</host> <port>3128</port> </proxy> </proxies> </settings> EOS create_remote_file(database, m2_settings_path, m2_settings_content) end if (test_config[:use_proxies]) step "Configure package manager to use local http proxy" do # TODO: this should probably run on every host, not just on the database host, # and it should probably be moved into the main acceptance framework instead # of being used only for our project. case test_config[:os_families][database.name] when :debian setup_apt_proxy() when :redhat, :fedora setup_yum_proxy() else raise ArgumentError, "Unsupported OS family: '#{test_config[:os_families][database.name]}'" end end step "Configure maven to use local http proxy" do setup_maven_proxy end else Beaker::Log.notify "Skipping proxy setup; test run configured not to use proxies via :puppetdb_use_proxies setting."
end puppetdb-6.2.0/acceptance/setup/pre_suite/15_setup_repos.rb000064400000000000000000000003531342042221600237640ustar00rootroot00000000000000unless (test_config[:skip_presuite_provisioning]) step "Install Puppet Labs repositories" do hosts.each do |host| initialize_repo_on_host(host, test_config[:os_families][host.name], test_config[:nightly]) end end end puppetdb-6.2.0/acceptance/setup/pre_suite/20_install_puppet.rb000064400000000000000000000012731342042221600244550ustar00rootroot00000000000000test_name "Install Puppet" do unless (test_config[:skip_presuite_provisioning]) step "Install Puppet" do install_puppet end end step "Populate facts from each host" do populate_facts end pidfile = '/var/run/puppet/master.pid' master_facts = facts(master.name) if options[:type] == 'aio' then bounce_service( master, master['puppetservice'], 10 ) else with_puppet_running_on( master, :master => {:dns_alt_names => "puppet,#{master_facts['hostname']},#{master_facts['fqdn']}", :trace => 'true'}) do # PID file exists? step "PID file created?" 
do on master, "[ -f #{pidfile} ]" end end end end puppetdb-6.2.0/acceptance/setup/pre_suite/30_generate_ssl_certs.rb000064400000000000000000000005611342042221600252650ustar00rootroot00000000000000unless (test_config[:skip_presuite_provisioning]) step "Run an agent to create the SSL certs" do on( master, "chown -R puppet:puppet /opt/puppetlabs/puppet/cache") databases.each do |database| with_puppet_running_on(master, {'master' => {'autosign' => 'true', 'trace' => 'true'}}) do run_agent_on(database, "--test") end end end end puppetdb-6.2.0/acceptance/setup/pre_suite/40_install_deps.rb000064400000000000000000000136041342042221600240760ustar00rootroot00000000000000unless (test_config[:skip_presuite_provisioning]) step "Update CA certificates" do os = test_config[:os_families][master.name] case os when :redhat on master, "yum install -y ca-certificates" when :fedora on master, "yum install -y ca-certificates" when :debian on master, "apt-get install -y ca-certificates" end end step "Ensure python 2 is available" do master_os = test_config[:os_families][master.name] case master_os when :debian on master, "apt-get -y install python" end databases.each do |database| os = test_config[:os_families][database.name] case os when :debian on database, "apt-get -y install python" end end end step "Install other dependencies on database" do databases.each do |database| os = test_config[:os_families][database.name] ### ensure nss is up to date case os when :debian on database, "apt-get install -y --force-yes libnss3" when :redhat on database, "yum update -y nss" when :fedora on database, "yum update -y nss" else raise ArgumentError, "Unsupported OS '#{os}'" end if test_config[:install_type] == :git then case os when :debian if database['platform'].variant == 'debian' && database['platform'].version == '8' on database, "apt-get install -y rake unzip" on database, "apt-get install -y -t jessie-backports openjdk-8-jre-headless" else on database, "apt-get install -y rake unzip
openjdk-8-jre-headless" when :redhat on database, "yum install -y java-1.7.0-openjdk rubygem-rake unzip" when :fedora on database, "yum install -y java-1.7.0-openjdk rubygem-rake unzip" else raise ArgumentError, "Unsupported OS '#{os}'" end step "Install lein on the PuppetDB server" do which_result = on database, "which lein", :acceptable_exit_codes => [0,1] needs_lein = which_result.exit_code == 1 if (needs_lein) on database, "curl --tlsv1 -Lk https://raw.github.com/technomancy/leiningen/stable/bin/lein -o /usr/local/bin/lein" on database, "chmod +x /usr/local/bin/lein" on database, "LEIN_ROOT=true lein" end end end # This works around the fact that puppetlabs-concat 1.2.3 requires a ruby in # the normal path; here we work around this for AIO by just installing one # on the database node that needs it. # # https://github.com/puppetlabs/puppetlabs-concat/blob/1.2.3/files/concatfragments.rb#L1 # # This can be removed with concat 2.x once it's re-released, as this doesn't # need an external ruby vm, it uses proper types & providers.
if options[:type] == 'aio' && os == :debian && database['platform'] !~ /ubuntu/ then step "Install rubygems on database for AIO on Debian" do on database, "apt-get install -y rubygems ruby-dev" end end end step "Install rubygems and sqlite3 on master" do os = test_config[:os_families][master.name] case os when :redhat on master, "yum update -y nss" case master['platform'] when /^el-5/ on master, "yum install -y rubygems sqlite-devel rubygem-activerecord ruby-devel.x86_64" on master, "gem install sqlite3" when /^el-6/ on master, "yum install -y rubygems ruby-sqlite3 rubygem-activerecord" else # EL7 very much matches what Fedora 20 uses on master, "yum install -y rubygems rubygem-sqlite3" on master, "gem install activerecord -v 3.2.17 --no-ri --no-rdoc -V --backtrace" end when :fedora # This was really set with Fedora 20 in mind, later versions might differ on master, "yum install -y rubygems rubygem-sqlite3" on master, "gem install activerecord -v 3.2.17 --no-ri --no-rdoc -V --backtrace" on master, "yum update -y nss" when :debian on master, "apt-get install -y libnss3" case master['platform'] when /^ubuntu-10.04/ # Ubuntu 10.04 has rubygems 1.3.5 which is known to not be reliable, so therefore # we skip. when /^ubuntu-12.04/ on master, "apt-get install -y rubygems ruby-dev libsqlite3-dev build-essential" # this is to get around the activesupport dependency on Ruby 1.9.3 for # Ubuntu 12.04. We can remove it when we drop support for 1.8.7. 
on master, "gem install i18n -v 0.6.11" on master, "gem install activerecord -v 3.2.17 --no-ri --no-rdoc -V --backtrace" on master, "gem install sqlite3 -v 1.3.9 --no-ri --no-rdoc -V --backtrace" else if (master['platform'] =~ /^debian-8/) # Required to address failure during i18n install: # /usr/lib/x86_64-linux-gnu/ruby/2.1.0/openssl.so: symbol # SSLv2_method, version OPENSSL_1.0.0 not defined in file # libssl.so.1.0.0 with link time reference - # /usr/lib/x86_64-linux-gnu/ruby/2.1.0/openssl.so on master, "apt-get install -y openssl" end on master, "apt-get install -y ruby ruby-dev libsqlite3-dev build-essential" # this is to get around the activesupport dependency on Ruby 1.9.3 for # Ubuntu 12.04. We can remove it when we drop support for 1.8.7. on master, "gem install i18n -v 0.6.11" on master, "gem install activerecord -v 3.2.17 --no-ri --no-rdoc -V --backtrace" on master, "gem install sqlite3 -v 1.3.9 --no-ri --no-rdoc -V --backtrace" end else raise ArgumentError, "Unsupported OS: '#{os}'" end # Make sure there isn't a gemrc file, because that could ruin our day. on master, "rm -f ~/.gemrc" end end end puppetdb-6.2.0/acceptance/setup/pre_suite/50_install_modules.rb000064400000000000000000000003711342042221600246110ustar00rootroot00000000000000require 'beaker/dsl/install_utils' extend Beaker::DSL::InstallUtils unless (test_config[:skip_presuite_provisioning]) step "Install the puppetdb module and dependencies" do on databases, "puppet module install puppetlabs/puppetdb" end end puppetdb-6.2.0/acceptance/setup/pre_suite/70_add_dev_repo.rb000064400000000000000000000010241342042221600240240ustar00rootroot00000000000000repo_config_dir = 'tmp/repo_configs' if (test_config[:install_type] == :package \ && test_config[:package_build_version] \ && !(test_config[:skip_presuite_provisioning])) then # do not install the dev_repo if a package_build_version has not been specified. 
databases.each do |database| install_puppetlabs_dev_repo database, 'puppetdb', oldest_supported, repo_config_dir, options install_puppetlabs_dev_repo database, 'puppetdb', test_config[:package_build_version], repo_config_dir, options end end puppetdb-6.2.0/acceptance/setup/pre_suite/80_install_released_puppetdb.rb000064400000000000000000000022741342042221600266370ustar00rootroot00000000000000# We skip this step entirely unless we are running in :upgrade mode. # or if we're running against bionic on the 5.1.x branch version = test_config[:package_build_version].to_s latest_released = get_latest_released(version) if ([:upgrade_oldest, :upgrade_latest].include? test_config[:install_mode] \ && !(test_config[:skip_presuite_provisioning]) \ && !(is_bionic && get_testing_branch(version) == '5.1.x')) if (test_config[:install_mode] == :upgrade_latest \ && test_config[:nightly] == true) # install official repos when testing upgrades from the latest official release step "Install Official Puppet Labs repositories" do hosts.each do |host| initialize_repo_on_host(host, test_config[:os_families][host.name], false) end end end install_target = test_config[:install_mode] == :upgrade_latest ? 
latest_released : oldest_supported step "Install most recent released PuppetDB on the PuppetDB server for upgrade test" do databases.each do |database| enable_https_apt_sources(database) install_puppetdb(database, install_target) start_puppetdb(database) install_puppetdb_termini(master, databases, install_target) end end end puppetdb-6.2.0/acceptance/setup/pre_suite/90_install_devel_puppetdb.rb000064400000000000000000000032201342042221600261430ustar00rootroot00000000000000step "Install development build of PuppetDB on the PuppetDB server" do databases.each do |database| os = test_config[:os_families][database.name] case test_config[:install_type] when :git Log.notify("Install puppetdb from source") Log.error database enable_https_apt_sources(database) install_postgres(database) install_puppetdb_via_rake(database) start_puppetdb(database) when :package Log.notify("Installing puppetdb from package; install mode: '#{test_config[:install_mode].inspect}'") enable_https_apt_sources(database) install_puppetdb(database) if test_config[:validate_package_version] validate_package_version(database) end # The package should automatically start the service on debian. On redhat, # it doesn't. However, during test runs where we're doing package upgrades, # the redhat package *should* detect that the service was running before the # upgrade, and it should restart it automatically. # # That leaves the case where we're on a redhat box and we're running the # tests as :install only (as opposed to :upgrade). In that case we need # to start the service ourselves here. 
if test_config[:install_mode] == :install and [:redhat, :fedora].include?(os) start_puppetdb(database) else # make sure it got started by the package install/upgrade sleep_until_started(database) end end end case test_config[:install_type] when :git install_puppetdb_termini_via_rake(master, databases) when :package install_puppetdb_termini(master, databases) end end puppetdb-6.2.0/acceptance/tests/000075500000000000000000000000001342042221600165645ustar00rootroot00000000000000puppetdb-6.2.0/acceptance/tests/commands/000075500000000000000000000000001342042221600203655ustar00rootroot00000000000000puppetdb-6.2.0/acceptance/tests/commands/args-handling.rb000064400000000000000000000011041342042221600234240ustar00rootroot00000000000000test_name "argument handling" do bin_loc = puppetdb_bin_dir(database) ["ssl-setup"].each do |k| step "running puppetdb #{k} -h should not error out" do on database, "#{bin_loc}/puppetdb #{k} -h" end # We don't test ssl-setup here, because its actions without any arguments # are tested elsewhere. 
next if k == 'ssl-setup' step "running puppetdb-#{k} with no arguments should not error out" do on database, "#{bin_loc}/puppetdb #{k}", :acceptable_exit_codes => [1] assert_match(/Missing required argument/, stdout) end end end puppetdb-6.2.0/acceptance/tests/commands/list-sub-commands.rb000064400000000000000000000005341342042221600242550ustar00rootroot00000000000000test_name "list sub-commands" do bin_loc = puppetdb_bin_dir(database) step "running puppetdb on its own should list all sub-commands" do result = on database, "#{bin_loc}/puppetdb" ["ssl-setup", "foreground"].each do |k| assert_match(/#{k}/, result.stdout, "puppetdb command should list #{k} as a sub-command") end end end puppetdb-6.2.0/acceptance/tests/security/000075500000000000000000000000001342042221600204335ustar00rootroot00000000000000puppetdb-6.2.0/acceptance/tests/security/puppetdb-ssl-setup/000075500000000000000000000000001342042221600242135ustar00rootroot00000000000000puppetdb-6.2.0/acceptance/tests/security/puppetdb-ssl-setup/jetty-changes.rb000064400000000000000000000027301342042221600273070ustar00rootroot00000000000000test_name "puppetdb ssl-setup jetty.ini changes" do confd = "#{puppetdb_confdir(database)}/conf.d" bin_loc = "#{puppetdb_bin_dir(database)}" step "backup jetty.ini and setup template" do on database, "cp #{confd}/jetty.ini #{confd}/jetty.ini.bak.ssl_setup_tests" end step "check to make sure all settings were configured for jetty.ini" do ["ssl-host", "ssl-port", "ssl-key", "ssl-cert", "ssl-ca-cert"].each do |k| on database, "grep -e '^#{k} = ' #{confd}/jetty.ini" end end step "run puppetdb ssl-setup again to make sure it is idempotent" do on database, "#{bin_loc}/puppetdb ssl-setup" on database, "diff #{confd}/jetty.ini #{confd}/jetty.ini.bak.ssl_setup_tests" end step "purposely modify jetty.ini ssl-host and make sure puppetdb ssl-setup -f fixes it" do on database, "sed -i 's/^ssl-host = .*/ssl-host = foobarbaz/' #{confd}/jetty.ini" on database, "#{bin_loc}/puppetdb ssl-setup 
-f" on database, "grep -e '^ssl-host = foobarbaz' #{confd}/jetty.ini", :acceptable_exit_codes => [1] end step "purposely modify jetty.ini ssl-host and make sure puppetdb ssl-setup does not touch it" do on database, "sed -i 's/^ssl-host = .*/ssl-host = foobarbaz/' #{confd}/jetty.ini" on database, "#{bin_loc}/puppetdb ssl-setup" on database, "grep -e '^ssl-host = foobarbaz' #{confd}/jetty.ini" end step "restore original jetty.ini" do on database, "cp #{confd}/jetty.ini.bak.ssl_setup_tests #{confd}/jetty.ini" end end puppetdb-6.2.0/acceptance/tests/security/puppetdb-ssl-setup/no-puppet-certs.rb000064400000036161342042221600276130ustar00rootroot00000000000000test_name "puppetdb ssl-setup with no puppet certs" do db_conf_dir = puppetdb_confdir(database) db_ssl_dir = "#{db_conf_dir}/ssl" db_confd = "#{puppetdb_confdir(database)}/conf.d" bin_loc = "#{puppetdb_bin_dir(database)}" ssl_dir = on(database, "puppet config print ssldir --section master").stdout.chomp step "backup jetty.ini and puppetdb ssl certs" do on database, "cp #{db_confd}/jetty.ini #{db_confd}/jetty.ini.bak.ssl_setup_tests" on database, "rm -rf #{db_ssl_dir}.bak.ssl_setup_tests" on database, "if [ -e #{db_ssl_dir} ]; then mv #{db_ssl_dir} #{db_ssl_dir}.bak.ssl_setup_tests; fi" end teardown do on database, "cp #{db_confd}/jetty.ini.bak.ssl_setup_tests #{db_confd}/jetty.ini" # This restores the certs if they weren't restored already on database, "if [ ! -e #{ssl_dir} -a -e #{ssl_dir}.bak.ssl_setup_tests ]; then mv #{ssl_dir}.bak.ssl_setup_tests #{ssl_dir}; fi" on database, "if [ ! -e #{db_ssl_dir} -a -e #{db_ssl_dir}.bak.ssl_setup_tests ]; then mv #{db_ssl_dir}.bak.ssl_setup_tests #{db_ssl_dir}; fi" end # The goal of this test is to make sure the user receives a nice error when # the agent certs are missing, as this implies Puppet has not run yet. 
step "run puppetdb ssl-setup with no puppet certs, and make sure it returns a meaningful error" do on database, "rm -rf #{ssl_dir}.bak.ssl_setup_tests" on database, "mv #{ssl_dir} #{ssl_dir}.bak.ssl_setup_tests" result = on database, "#{bin_loc}/puppetdb ssl-setup", :acceptable_exit_codes => [1] assert_match(/Warning: Unable to find all puppet certificates to copy/, result.output) end # Now we restore the certificates step "restore certificates" do on database, "rm -rf #{ssl_dir}" on database, "mv #{ssl_dir}.bak.ssl_setup_tests #{ssl_dir}" end step "retest puppetdb ssl-setup again now there are certs" do on database, "#{bin_loc}/puppetdb ssl-setup" end end puppetdb-6.2.0/acceptance/tests/security/puppetdb-ssl-setup/nonprod-environment.rb000064400000000000000000000012071342042221600305610ustar00rootroot00000000000000test_name "puppetdb ssl-setup on nonproduction environment" do confdir = on(database, "puppet config print confdir --section master").stdout.chomp bin_loc = "#{puppetdb_bin_dir(database)}" step "back up existing puppet.conf" do on database, "cp #{confdir}/puppet.conf #{confdir}/puppet.conf.bak" end step "ensure proper exit code on ssl-setup" do on database, "puppet config set environment foo --section main" result = on database, "#{bin_loc}/puppetdb ssl-setup", :acceptable_exit_codes => [0] end step "restore original puppet.conf" do on database, "mv #{confdir}/puppet.conf.bak #{confdir}/puppet.conf" end end puppetdb-6.2.0/acceptance/tests/smoke.rb000064400000000000000000000030201342042221600202220ustar00rootroot00000000000000puppetdb_query_url = "" def check_record_count(endpoint, expected_count) result = on database, %Q|curl -G http://localhost:8080/pdb/query/v4/#{endpoint}| result_records = JSON.parse(result.stdout) assert_equal(expected_count, result_records.length, "Expected query to return '#{expected_count}' records from /pdb/query/v4/#{endpoint}; returned '#{result_records.length}'") end test_name "test postgresql database restart handling to 
ensure we recover from a restart" do step "clear puppetdb database" do clear_and_restart_puppetdb(database) end with_puppet_running_on master, { 'master' => { 'autosign' => 'true' }} do step "Run agents once to activate nodes" do run_agent_on agents, "--test --server #{master}" end end step "Verify that the number of active nodes is what we expect" do check_record_count("nodes", agents.length) end step "Verify that one factset was stored for each node" do check_record_count("factsets", agents.length) end step "Verify that one catalog was stored for each node" do check_record_count("catalogs", agents.length) end step "Verify that one report was stored for each node" do check_record_count("reports", agents.length) end restart_puppetdb(database) step "Verify puppetdb can be queried after restarting" do check_record_count("nodes", agents.length) end restart_postgres(database) step "Verify puppetdb can be queried after restarting" do check_record_count("nodes", agents.length) end end puppetdb-6.2.0/azure-pipelines.yml000064400000000000000000000041341342042221600171750ustar00rootroot00000000000000pool: # self-hosted agent on Windows 10 1709 environment # includes newer Docker engine with LCOW enabled, new build of LCOW image # includes Ruby 2.5, Go 1.10, Node.js 10.10, hadolint name: Default variables: NAMESPACE: puppet steps: - checkout: self # self represents the repo where the initial Pipelines YAML file was found clean: true # whether to fetch clean each time - powershell: | $line = '=' * 80 Write-Host "$line`nWindows`n$line`n" Get-ComputerInfo | select WindowsProductName, WindowsVersion, OsHardwareAbstractionLayer | Out-String | Write-Host # # Azure # Write-Host "`n`n$line`nAzure`n$line`n" Invoke-RestMethod -Headers @{'Metadata'='true'} -URI http://169.254.169.254/metadata/instance?api-version=2017-12-01 -Method Get | ConvertTo-Json -Depth 10 | Write-Host # # Docker # Write-Host "`n`n$line`nDocker`n$line`n" docker version docker images docker info sc.exe qc docker # 
# Ruby # Write-Host "`n`n$line`nRuby`n$line`n" ruby --version gem --version bundle --version # # Environment # Write-Host "`n`n$line`nEnvironment`n$line`n" Get-ChildItem Env: | % { Write-Host "$($_.Key): $($_.Value)" } displayName: Diagnostic Host Information name: hostinfo - powershell: | . ./docker/ci/build.ps1 Lint-Dockerfile -Path ./docker/puppetdb/Dockerfile displayName: Lint PuppetDB Dockerfile name: lint_dockerfiles - powershell: | . ./docker/ci/build.ps1 Build-Container -Namespace $ENV:NAMESPACE displayName: Build PuppetDB Container name: build_puppetdb_container - powershell: | . ./docker/ci/build.ps1 Invoke-ContainerTest -Name puppetdb -Namespace $ENV:NAMESPACE displayName: Test PuppetDB name: test_puppetdb - task: PublishTestResults@2 displayName: Publish PuppetDB test results inputs: testResultsFormat: 'JUnit' testResultsFiles: 'docker/TEST-*.xml' testRunTitle: PuppetDB Test Results - powershell: | . ./docker/ci/build.ps1 Clear-ContainerBuilds displayName: Container Cleanup condition: always() puppetdb-6.2.0/config.sample.ini000064400000000000000000000021671342042221600165700ustar00rootroot00000000000000# See README.md for more thorough explanations of each section and # option. [global] # Store mq/db data in a custom directory vardir = /var/lib/puppetdb # Use an external logback config file # logging-config = /path/to/logback.xml [puppetdb] # List of certificate names from which to allow incoming HTTPS requests: # certificate-whitelist = /path/to/certname/whitelist # Whether we should check for more recent PuppetDB versions. 
Defaults to 'false': # disable-update-checking = true [database] # Subname pattern: //host:port/databaseName #subname = //localhost:5432/puppetdb # Connect as a specific user # username = foobar # Use a specific password # password = foobar # How often (in minutes) to compact the database # gc-interval = 60 [command-processing] # How many command-processing threads to use, defaults to (CPUs / 2) # threads = 4 [jetty] # What host to listen on, defaults to binding to 'localhost' # host = foo.my.net # What port to listen on port = 8080 [nrepl] # Set to true to enable the remote REPL enabled = false # What port the REPL should listen on port = 8082 # IP address to listen on host = 127.0.0.1 puppetdb-6.2.0/config/000075500000000000000000000000001342042221600146015ustar00rootroot00000000000000puppetdb-6.2.0/config/image_templates/000075500000000000000000000000001342042221600177415ustar00rootroot00000000000000puppetdb-6.2.0/config/image_templates/ec2.yaml000064400000000000000000000021171342042221600212770ustar00rootroot00000000000000AMI: el-7-x86_64-west: :image: :pe: ami-c25448a3 :foss: ami-c25448a3 :region: us-west-2 el-6-x86_64-west: :image: :pe: ami-aa8b039a :foss: ami-aa8b039a :region: us-west-2 el-5-x86_64-west: :image: :pe: ami-85693fb5 :foss: ami-85693fb5 :region: us-west-2 fedora-20-x86_64-west: :image: :pe: ami-c0b9daf0 :foss: ami-c0b9daf0 :region: us-west-2 ubuntu-16.04-amd64-west: :image: :pe: ami-7a47b41a :foss: ami-7a47b41a :region: us-west-2 ubuntu-15.10-amd64-west: :image: :pe: ami-ddad4dbd :foss: ami-ddad4dbd :region: us-west-2 ubuntu-14.04-amd64-west: :image: :pe: ami-be524edf :foss: ami-be524edf :region: us-west-2 ubuntu-12.04-amd64-west: :image: :pe: ami-d6e860e6 :foss: ami-d6e860e6 :region: us-west-2 debian-8-amd64-west: :image: :pe: ami-17120727 :foss: ami-17120727 :region: us-west-2 debian-7-amd64-west: :image: :pe: ami-6a574b0b :foss: ami-6a574b0b :region: us-west-2 
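The commented-out options in `config.sample.ini` above can be hard to picture fully assembled, so here is a minimal filled-in sketch. The hostname, credentials, and paths are illustrative assumptions, not recommended values:

```ini
# Illustrative sketch only: the host, credentials, and paths are assumptions.
[global]
vardir = /var/lib/puppetdb

[database]
# Follows the //host:port/databaseName subname pattern described in the sample
subname = //db.example.com:5432/puppetdb
username = puppetdb
password = changeme
gc-interval = 60

[jetty]
host = 0.0.0.0
port = 8080
```

In practice PuppetDB reads settings in this style from its conf.d directory, as the files under `docker/puppetdb/conf.d` later in this tree illustrate.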
puppetdb-6.2.0/contrib/000075500000000000000000000000001342042221600147745ustar00rootroot00000000000000puppetdb-6.2.0/contrib/README.md000064400000000000000000000012441342042221600162540ustar00rootroot00000000000000# PuppetDB User-Contributed Content This directory contains code and other content that has been contributed by PuppetDB users. It's a place where we can include items that we know will be useful to other users, but that we can't yet officially maintain or support. We hope that you will find it valuable! Please keep in mind that the contents of this directory may not be exhaustively tested and could conceivably become out-of-sync with the main production code base. If you find something that is out of date or has any other issues, or if you have other content that you think may be useful to the community, we welcome additional submissions via github pull requests! puppetdb-6.2.0/contrib/gem/000075500000000000000000000000001342042221600155445ustar00rootroot00000000000000puppetdb-6.2.0/contrib/gem/Gemfile000064400000000000000000000001461342042221600170400ustar00rootroot00000000000000source 'https://rubygems.org' # Specify your gem's dependencies in puppetdb-terminus.gemspec gemspec puppetdb-6.2.0/contrib/gem/README.md000064400000000000000000000006551342042221600170310ustar00rootroot00000000000000# PuppetDB Terminus Gemspec The files in this directory are intended to provide a starting point for building a gem package of the puppetdb terminus code. This will be useful for cases where puppet is being installed as a gem and you would also like to include puppetdb. 
Note that support for installing the puppetdb termini via rubygems relies on changes to the puppet autoloader that were first introduced in the 3.0 series.puppetdb-6.2.0/contrib/gem/puppetdb-terminus.gemspec000064400000000000000000000016701342042221600226040ustar00rootroot00000000000000# -*- encoding: utf-8 -*- lib = File.expand_path('../lib', __FILE__) $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib) Gem::Specification.new do |gem| gem.name = "puppetdb-terminus" gem.version = "3.0.0" gem.authors = ["Puppet Labs"] gem.email = ["puppet@puppetlabs.com"] gem.description = "Puppet terminus files to connect to PuppetDB" gem.summary = "Connect Puppet to PuppetDB by setting up a terminus for PuppetDB" gem.homepage = "https://github.com/puppetlabs/puppetdb" gem.license = "Apache-2.0" gem.files = Dir['LICENSE.txt', 'NOTICE.txt', 'README.md', 'puppet/lib/**/*', 'puppet/spec/**/*'] gem.executables = gem.files.grep(%r{^bin/}).map{ |f| File.basename(f) } gem.test_files = gem.files.grep(%r{^(test|spec|features)/}) gem.require_paths = ["lib"] gem.add_runtime_dependency 'json' gem.add_runtime_dependency 'puppet', ['>= 3.8.1', '<5.0'] end puppetdb-6.2.0/dev-resources/000075500000000000000000000000001342042221600161225ustar00rootroot00000000000000puppetdb-6.2.0/dev-resources/Makefile.i18n000064400000000000000000000140731342042221600203450ustar00rootroot00000000000000# -*- Makefile -*- # This file was generated by the i18n leiningen plugin # Do not edit this file; it will be overwritten the next time you run # lein i18n init # # The name of the package into which the translations bundle will be placed BUNDLE=puppetlabs.puppetdb # The name of the POT file into which the gettext code strings (msgid) will be placed POT_NAME=puppetdb.pot # The list of names of packages covered by the translation bundle; # by default it contains a single package - the same where the translations # bundle itself is placed - but this can be overridden - preferably in # the top level Makefile 
PACKAGES?=$(BUNDLE) LOCALES=$(basename $(notdir $(wildcard locales/*.po))) BUNDLE_DIR=$(subst .,/,$(BUNDLE)) BUNDLE_FILES=$(patsubst %,resources/$(BUNDLE_DIR)/Messages_%.class,$(LOCALES)) FIND_SOURCES=find src -name \*.clj # xgettext before 0.19 does not understand --add-location=file. Even CentOS # 7 ships with an older gettext. We will therefore generate full location # info on those systems, and only file names where xgettext supports it LOC_OPT=$(shell xgettext --add-location=file -f - /dev/null 2>&1 && echo --add-location=file || echo --add-location) LOCALES_CLJ=resources/locales.clj define LOCALES_CLJ_CONTENTS { :locales #{$(patsubst %,"%",$(LOCALES))} :packages [$(patsubst %,"%",$(PACKAGES))] :bundle $(patsubst %,"%",$(BUNDLE).Messages) } endef export LOCALES_CLJ_CONTENTS i18n: msgfmt # Update locales/.pot update-pot: locales/$(POT_NAME) locales/$(POT_NAME): $(shell $(FIND_SOURCES)) | locales @tmp=$$(mktemp $@.tmp.XXXX); \ $(FIND_SOURCES) \ | xgettext --from-code=UTF-8 --language=lisp \ --copyright-holder='Puppet ' \ --package-name="$(BUNDLE)" \ --package-version="$(BUNDLE_VERSION)" \ --msgid-bugs-address="docs@puppet.com" \ -k \ -kmark:1 -ki18n/mark:1 \ -ktrs:1 -ki18n/trs:1 \ -ktru:1 -ki18n/tru:1 \ -ktrun:1,2 -ki18n/trun:1,2 \ -ktrsn:1,2 -ki18n/trsn:1,2 \ $(LOC_OPT) \ --add-comments --sort-by-file \ -o $$tmp -f -; \ sed -i.bak -e 's/charset=CHARSET/charset=UTF-8/' $$tmp; \ sed -i.bak -e 's/POT-Creation-Date: [^\\]*/POT-Creation-Date: /' $$tmp; \ rm -f $$tmp.bak; \ if ! diff -q -I POT-Creation-Date $$tmp $@ >/dev/null 2>&1; then \ mv $$tmp $@; \ else \ rm $$tmp; touch $@; \ fi # Run msgfmt over all .po files to generate Java resource bundles # and create the locales.clj file msgfmt: $(BUNDLE_FILES) $(LOCALES_CLJ) clean-orphaned-bundles # Force rebuild of locales.clj if its contents are not the desired one.
The # shell echo is used to add a trailing newline to match the one from `cat` ifneq ($(shell cat $(LOCALES_CLJ) 2> /dev/null),$(shell echo '$(LOCALES_CLJ_CONTENTS)')) .PHONY: $(LOCALES_CLJ) endif $(LOCALES_CLJ): | resources @echo "Writing $@" @echo "$$LOCALES_CLJ_CONTENTS" > $@ # Remove every resource bundle that wasn't generated from a PO file. # We do this because we used to generate the english bundle directly from the POT. .PHONY: clean-orphaned-bundles clean-orphaned-bundles: @for bundle in resources/$(BUNDLE_DIR)/Messages_*.class; do \ locale=$$(basename "$$bundle" | sed -E -e 's/\$$?1?\.class$$/_class/' | cut -d '_' -f 2;); \ if [ ! -f "locales/$$locale.po" ]; then \ rm "$$bundle"; \ fi \ done resources/$(BUNDLE_DIR)/Messages_%.class: locales/%.po | resources msgfmt --java2 -d resources -r $(BUNDLE).Messages -l $(*F) $< # Use this to initialize translations. Updating the PO files is done # automatically through a CI job that utilizes the scripts in the project's # `bin` file, which themselves come from the `clj-i18n` project. locales/%.po: | locales @if [ ! -f $@ ]; then \ touch $@ && msginit --no-translator -l $(*F) -o $@ -i locales/$(POT_NAME); \ fi resources locales: @mkdir $@ help: $(info $(HELP)) @echo .PHONY: help define HELP This Makefile assists in handling i18n related tasks during development. Files that need to be checked into source control are put into the locales/ directory. They are locales/$(POT_NAME) - the POT file generated by 'make update-pot' locales/$$LANG.po - the translations for $$LANG Only the $$LANG.po files should be edited manually; this is usually done by translators. 
You can use the following targets: i18n: refresh all the files in locales/ and recompile resources update-pot: extract strings and update locales/$(POT_NAME) locales/LANG.po: create translations for LANG msgfmt: compile the translations into Java classes; this step is needed to make translations available to the Clojure code and produces Java class files in resources/ endef # @todo lutter 2015-04-20: for projects that use libraries with their own # translation, we need to combine all their translations into one big po # file and then run msgfmt over that so that we only have to deal with one # resource bundle puppetdb-6.2.0/docker/000075500000000000000000000000001342042221600146035ustar00rootroot00000000000000puppetdb-6.2.0/docker/.gitignore000064400000000000000000000000401342042221600165650ustar00rootroot00000000000000.bundle Gemfile.lock TEST-*.xml puppetdb-6.2.0/docker/.rspec000064400000000000000000000001111342042221600157110ustar00rootroot00000000000000--format RspecJunitFormatter --out TEST-rspec.xml --format documentation puppetdb-6.2.0/docker/Gemfile000064400000000000000000000006241342042221600161000ustar00rootroot00000000000000source ENV['GEM_SOURCE'] || "https://rubygems.org" def location_for(place, fake_version = nil) if place =~ /^(git[:@][^#]*)#(.*)/ [fake_version, { :git => $1, :branch => $2, :require => false }].compact elsif place =~ /^file:\/\/(.*)/ ['>= 0', { :path => File.expand_path($1), :require => false }] else [place, { :require => false }] end end gem 'rspec' gem 'rspec_junit_formatter' puppetdb-6.2.0/docker/Makefile000064400000000000000000000026631342042221600162520ustar00rootroot00000000000000git_describe = $(shell git describe) vcs_ref := $(shell git rev-parse HEAD) build_date := $(shell date -u +%FT%T) hadolint_available := $(shell hadolint --help > /dev/null 2>&1; echo $$?) 
hadolint_command := hadolint --ignore DL3008 --ignore DL3018 --ignore DL4000 --ignore DL4001 hadolint_container := hadolint/hadolint:latest ifeq ($(IS_NIGHTLY),true) dockerfile := Dockerfile.nightly version := puppet6-nightly else version = $(shell echo $(git_describe) | sed 's/-.*//') dockerfile := Dockerfile endif prep: ifneq ($(IS_NIGHTLY),true) @git fetch --unshallow ||: @git fetch origin 'refs/tags/*:refs/tags/*' endif lint: ifeq ($(hadolint_available),0) @$(hadolint_command) puppetdb/$(dockerfile) else @docker pull $(hadolint_container) @docker run --rm -v $(PWD)/puppetdb/$(dockerfile):/Dockerfile -i $(hadolint_container) $(hadolint_command) Dockerfile endif build: prep @docker build --pull --build-arg vcs_ref=$(vcs_ref) --build-arg build_date=$(build_date) --build-arg version=$(version) --file puppetdb/$(dockerfile) --tag puppet/puppetdb:$(version) .. ifeq ($(IS_LATEST),true) @docker tag puppet/puppetdb:$(version) puppet/puppetdb:latest endif test: prep @bundle install --path .bundle/gems @PUPPET_TEST_DOCKER_IMAGE=puppet/puppetdb:$(version) bundle exec rspec puppetdb/spec publish: prep @docker push puppet/puppetdb:$(version) ifeq ($(IS_LATEST),true) @docker push puppet/puppetdb:latest endif .PHONY: lint build test publish puppetdb-6.2.0/docker/ci/000075500000000000000000000000001342042221600151765ustar00rootroot00000000000000puppetdb-6.2.0/docker/ci/build.ps1000064400000000000000000000031601342042221600167220ustar00rootroot00000000000000$ErrorActionPreference = 'Stop' function Get-CurrentDirectory { $thisName = $MyInvocation.MyCommand.Name [IO.Path]::GetDirectoryName((Get-Content function:$thisName).File) } function Get-ContainerVersion { # shallow repositories need to pull remaining code to `git describe` correctly if (Test-Path "$(git rev-parse --git-dir)/shallow") { git pull --unshallow } # tags required for versioning git fetch origin 'refs/tags/*:refs/tags/*' (git describe) -replace '-.*', '' } function Lint-Dockerfile($Path) { hadolint --ignore DL3008 
--ignore DL3018 --ignore DL4000 --ignore DL4001 $Path } function Build-Container( $Namespace = 'puppet', $Version = (Get-ContainerVersion), $Vcs_ref = $(git rev-parse HEAD)) { Push-Location (Join-Path (Get-CurrentDirectory) '..') $build_date = (Get-Date).ToUniversalTime().ToString('o') $docker_args = @( '--pull', '--build-arg', "vcs_ref=$Vcs_ref", '--build-arg', "build_date=$build_date", '--build-arg', "version=$Version", '--file', "puppetdb/Dockerfile", '--tag', "$Namespace/puppetdb:$Version" ) docker build $docker_args .. Pop-Location } function Invoke-ContainerTest( $Name, $Namespace = 'puppet', $Version = (Get-ContainerVersion)) { Push-Location (Join-Path (Get-CurrentDirectory) '..') bundle install --path .bundle/gems $ENV:PUPPET_TEST_DOCKER_IMAGE = "$Namespace/${Name}:$Version" bundle exec rspec $Name\spec Pop-Location } # removes any temporary containers / images used during builds function Clear-ContainerBuilds { docker container prune --force docker image prune --filter "dangling=true" --force } puppetdb-6.2.0/docker/distelli-manifest.yml000064400000000000000000000003571342042221600207500ustar00rootroot00000000000000pe-and-platform/puppetdb: PreBuild: - make lint Build: - make build - make test AfterBuildSuccess: - docker login -u "$DISTELLI_DOCKER_USERNAME" -p "$DISTELLI_DOCKER_PW" "$DISTELLI_DOCKER_ENDPOINT" - make publish puppetdb-6.2.0/docker/puppetdb/000075500000000000000000000000001342042221600164265ustar00rootroot00000000000000puppetdb-6.2.0/docker/puppetdb/Dockerfile000064400000000000000000000063311342042221600204230ustar00rootroot00000000000000FROM clojure:lein-alpine AS builder RUN apk add --no-cache make # Install only dependencies WORKDIR /app COPY project.clj /app/ COPY resources/puppetlabs/puppetdb/bootstrap.cfg \ /app/resources/puppetlabs/puppetdb/bootstrap.cfg RUN lein with-profile uberjar deps # Build uberjar -- see .dockerignore COPY . 
/app RUN lein with-profile uberjar uberjar FROM openjdk:8-jre-alpine RUN apk add --no-cache tini curl openssl COPY --from=builder /app/target/puppetdb.jar / ARG vcs_ref ARG build_date ARG version="6.0.0" ENV PUPPETDB_VERSION="$version" ENV PUPPETDB_DATABASE_CONNECTION="//postgres:5432/puppetdb" ENV PUPPETDB_USER=puppetdb ENV PUPPETDB_PASSWORD=puppetdb ENV PUPPETDB_NODE_TTL=7d ENV PUPPETDB_NODE_PURGE_TTL=14d ENV PUPPETDB_REPORT_TTL=14d ENV PUPPETDB_JAVA_ARGS="-Djava.net.preferIPv4Stack=true -Xms256m -Xmx256m" # used by entrypoint to determine if puppetserver should be contacted for config # set to false when container tests are run ENV USE_PUPPETSERVER=true LABEL org.label-schema.maintainer="Puppet Release Team " \ org.label-schema.vendor="Puppet" \ org.label-schema.url="https://github.com/puppetlabs/puppetdb" \ org.label-schema.name="PuppetDB" \ org.label-schema.license="Apache-2.0" \ org.label-schema.version="$PUPPETDB_VERSION" \ org.label-schema.vcs-url="https://github.com/puppetlabs/puppetdb" \ org.label-schema.vcs-ref="$vcs_ref" \ org.label-schema.build-date="$build_date" \ org.label-schema.schema-version="1.0" \ org.label-schema.dockerfile="/Dockerfile" # Values from /etc/default/puppetdb ENV JAVA_BIN="/usr/bin/java" ENV USER="puppetdb" ENV GROUP="puppetdb" ENV INSTALL_DIR="/opt/puppetlabs/server/apps/puppetdb" ENV CONFIG="/etc/puppetlabs/puppetdb/conf.d" ENV BOOTSTRAP_CONFIG="/etc/puppetlabs/puppetdb/bootstrap.cfg" ENV SERVICE_STOP_RETRIES=60 COPY docker/puppetdb/logging /etc/puppetlabs/puppetdb/logging/ COPY docker/puppetdb/conf.d /etc/puppetlabs/puppetdb/conf.d/ COPY resources/puppetlabs/puppetdb/bootstrap.cfg /etc/puppetlabs/puppetdb/ COPY resources/ext/config/logback.xml /etc/puppetlabs/puppetdb/ COPY resources/ext/config/request-logging.xml /etc/puppetlabs/puppetdb/ RUN mkdir -p /opt/puppetlabs/server/data/puppetdb RUN addgroup $GROUP && adduser -S $USER -G $GROUP ADD 
https://raw.githubusercontent.com/puppetlabs/pupperware/b651119f16a1c18d5e9174c283a4e535d35a128a/shared/ssl.sh /ssl.sh RUN chmod +x /ssl.sh COPY docker/puppetdb/ssl-setup.sh / RUN chmod +x /ssl-setup.sh VOLUME /etc/puppetlabs/puppet/ssl/ COPY docker/puppetdb/docker-entrypoint.sh / RUN chmod +x /docker-entrypoint.sh COPY docker/puppetdb/docker-entrypoint.d /docker-entrypoint.d EXPOSE 8080 8081 ENTRYPOINT ["/sbin/tini", "-g", "--", "/docker-entrypoint.sh"] CMD ["services"] # The start-period is just a wild guess how long it takes PuppetDB to come # up in the worst case. The other timing parameters are set so that it # takes at most a minute to realize that PuppetDB has failed. HEALTHCHECK --interval=10s --timeout=10s --retries=6 \ --start-period=5m CMD \ curl --fail --silent \ --resolve 'puppetdb:8080:127.0.0.1' \ http://puppetdb:8080/status/v1/services/puppetdb-status \ | grep -q '"state":"running"' \ || exit 1 COPY docker/puppetdb/Dockerfile / puppetdb-6.2.0/docker/puppetdb/Dockerfile.nightly000064400000000000000000000045521342042221600221030ustar00rootroot00000000000000FROM ubuntu:16.04 ARG vcs_ref ARG build_date ARG version="puppet6-nightly" ENV PUPPETDB_VERSION="$version" ENV DUMB_INIT_VERSION="1.2.1" ENV UBUNTU_CODENAME="xenial" ENV PUPPETDB_DATABASE_CONNECTION="//postgres:5432/puppetdb" ENV PUPPETDB_USER=puppetdb ENV PUPPETDB_PASSWORD=puppetdb ENV PUPPETDB_NODE_TTL=7d ENV PUPPETDB_NODE_PURGE_TTL=14d ENV PUPPETDB_REPORT_TTL=14d ENV PUPPETDB_JAVA_ARGS="-Djava.net.preferIPv4Stack=true -Xms256m -Xmx256m" ENV USE_PUPPETSERVER=true ENV PATH="/opt/puppetlabs/server/bin:/opt/puppetlabs/puppet/bin:/opt/puppetlabs/bin:$PATH" LABEL org.label-schema.maintainer="Puppet Release Team " \ org.label-schema.vendor="Puppet" \ org.label-schema.url="https://github.com/puppetlabs/puppetdb" \ org.label-schema.name="PuppetDB" \ org.label-schema.license="Apache-2.0" \ org.label-schema.version="$PUPPETDB_VERSION" \ org.label-schema.vcs-url="https://github.com/puppetlabs/puppetdb" \ 
org.label-schema.vcs-ref="$vcs_ref" \ org.label-schema.build-date="$build_date" \ org.label-schema.schema-version="1.0" \ org.label-schema.dockerfile="/Dockerfile.nightly" RUN apt-get update && \ apt-get install --no-install-recommends -y wget netcat lsb-release ca-certificates && \ wget https://apt.puppetlabs.com/puppet6-nightly/puppet6-nightly-release-"$UBUNTU_CODENAME".deb && \ wget https://github.com/Yelp/dumb-init/releases/download/v"$DUMB_INIT_VERSION"/dumb-init_"$DUMB_INIT_VERSION"_amd64.deb && \ dpkg -i puppet6-nightly-release-"$UBUNTU_CODENAME".deb && \ dpkg -i dumb-init_"$DUMB_INIT_VERSION"_amd64.deb && \ rm puppet6-nightly-release-"$UBUNTU_CODENAME".deb dumb-init_"$DUMB_INIT_VERSION"_amd64.deb && \ apt-get update && \ apt-get install --no-install-recommends -y puppet-agent puppetdb && \ apt-get clean && \ rm -rf /var/lib/apt/lists/* COPY puppetdb /etc/default/ COPY logging /etc/puppetlabs/puppetdb/logging RUN rm -fr /etc/puppetlabs/puppetdb/conf.d COPY conf.d /etc/puppetlabs/puppetdb/conf.d # Persist the agent SSL certificate. VOLUME /etc/puppetlabs/puppet/ssl/ # /etc/puppetlabs/puppetdb/ssl is automatically populated from here and # doesn't need a separate volume. 
COPY docker-entrypoint.sh / RUN chmod +x /docker-entrypoint.sh EXPOSE 8080 8081 ENTRYPOINT ["dumb-init", "/docker-entrypoint.sh"] CMD ["foreground"] COPY Dockerfile.nightly / puppetdb-6.2.0/docker/puppetdb/conf.d/000075500000000000000000000000001342042221600175755ustar00rootroot00000000000000puppetdb-6.2.0/docker/puppetdb/conf.d/config.conf000064400000000000000000000001721342042221600217110ustar00rootroot00000000000000global: { vardir: /opt/puppetlabs/server/data/puppetdb logging-config: /etc/puppetlabs/puppetdb/logging/logback.xml } puppetdb-6.2.0/docker/puppetdb/conf.d/database.conf000064400000000000000000000003521342042221600222100ustar00rootroot00000000000000database: { subname: ${PUPPETDB_DATABASE_CONNECTION} username: ${PUPPETDB_USER} password: ${PUPPETDB_PASSWORD} node-ttl: ${PUPPETDB_NODE_TTL} node-purge-ttl: ${PUPPETDB_NODE_PURGE_TTL} report-ttl: ${PUPPETDB_REPORT_TTL} } puppetdb-6.2.0/docker/puppetdb/conf.d/jetty.ini000064400000000000000000000020131342042221600214310ustar00rootroot00000000000000[jetty] # IP address or hostname to listen on for clear-text HTTP. To avoid resolution # issues, IP addresses are recommended over hostnames. # Default is `localhost`. host = 0.0.0.0 # Port to listen on for clear-text HTTP. port = 8080 # The following are SSL specific settings. They can be configured # automatically with the tool `puppetdb ssl-setup`, which is normally # run during package installation. # IP address to listen on for HTTPS connections. Hostnames can also be used # but are not recommended to avoid DNS resolution issues. To listen on all # interfaces, use `0.0.0.0`. # ssl-host = # The port to listen on for HTTPS connections # ssl-port = # Private key path # ssl-key = # Public certificate path # ssl-cert = # Certificate authority path # ssl-ca-cert = # Access logging configuration path. 
To turn off access logging # comment out the line with `access-log-config=...` access-log-config = /etc/puppetlabs/puppetdb/logging/request-logging.xml puppetdb-6.2.0/docker/puppetdb/docker-entrypoint.d/000075500000000000000000000000001342042221600223305ustar00rootroot00000000000000puppetdb-6.2.0/docker/puppetdb/docker-entrypoint.d/10-configure-ssl.sh000075500000000000000000000010741342042221600256670ustar00rootroot00000000000000#!/bin/sh master_running() { status=$(curl --silent --fail --insecure "https://${PUPPETSERVER_HOSTNAME}:8140/status/v1/simple") test "$status" = "running" } PUPPETSERVER_HOSTNAME="${PUPPETSERVER_HOSTNAME:-puppet}" if [ ! -f "/etc/puppetlabs/puppet/ssl/certs/${HOSTNAME}.pem" ] && [ "$USE_PUPPETSERVER" = true ]; then # if this is our first run, run puppet agent to get certs in place while ! master_running; do sleep 1 done set -e /ssl.sh fi if [ ! -d "/etc/puppetlabs/puppetdb/ssl" ] && [ "$USE_PUPPETSERVER" = true ]; then /ssl-setup.sh -f fi puppetdb-6.2.0/docker/puppetdb/docker-entrypoint.sh000075500000000000000000000003771342042221600224540ustar00rootroot00000000000000#!/bin/sh set -e for f in /docker-entrypoint.d/*.sh; do echo "Running $f" chmod +x "$f" "$f" done exec java $PUPPETDB_JAVA_ARGS -cp /puppetdb.jar \ clojure.main -m puppetlabs.puppetdb.core "$@" \ -c /etc/puppetlabs/puppetdb/conf.d/ puppetdb-6.2.0/docker/puppetdb/logging/000075500000000000000000000000001342042221600200545ustar00rootroot00000000000000puppetdb-6.2.0/docker/puppetdb/logging/logback.xml000064400000000000000000000004511342042221600222000ustar00rootroot00000000000000 %d %-5p [%c{2}] %m%n puppetdb-6.2.0/docker/puppetdb/logging/request-logging.xml000064400000000000000000000003651342042221600237160ustar00rootroot00000000000000 combined 
puppetdb-6.2.0/docker/puppetdb/postgres-custom/000075500000000000000000000000001342042221600216045ustar00rootroot00000000000000puppetdb-6.2.0/docker/puppetdb/postgres-custom/extensions.sql000064400000000000000000000001211342042221600245160ustar00rootroot00000000000000CREATE EXTENSION IF NOT EXISTS pg_trgm; CREATE EXTENSION IF NOT EXISTS pgcrypto; puppetdb-6.2.0/docker/puppetdb/spec/000075500000000000000000000000001342042221600173605ustar00rootroot00000000000000puppetdb-6.2.0/docker/puppetdb/spec/puppetdb_spec.rb000064400000000000000000000104551342042221600225470ustar00rootroot00000000000000require 'timeout' require 'json' require 'rspec' require 'net/http' describe 'puppetdb container specs' do def count_database(container, database) %x(docker exec #{container} psql -t --username=puppetdb --command="SELECT count(datname) FROM pg_database where datname = '#{database}'").strip end def wait_on_postgres_db(container, database) Timeout::timeout(120) do while count_database(container, database) != '1' sleep(1) end end rescue Timeout::Error STDOUT.puts("database #{database} never created") raise end def run_postgres_container image_name = File::ALT_SEPARATOR.nil? ? 'postgres:9.6' : 'stellirin/postgres-windows:9.6' %x(docker pull #{image_name}) postgres_custom_source = File.join(File.expand_path(__dir__), '..', 'postgres-custom') postgres_custom_target = File::ALT_SEPARATOR.nil? ? 
'/docker-entrypoint-initdb.d' : 'c:\docker-entrypoint-initdb.d' id = %x(docker run --rm --detach \ --env POSTGRES_PASSWORD=puppetdb \ --env POSTGRES_USER=puppetdb \ --env POSTGRES_DB=puppetdb \ --name postgres \ --hostname postgres \ --publish-all \ --mount type=bind,source=#{postgres_custom_source},target=#{postgres_custom_target} \ #{image_name}).chomp # this is necessary to add a wait for database creation wait_on_postgres_db(id, 'puppetdb') return id end def run_puppetdb_container # skip Postgres SSL initialization for tests with USE_PUPPETSERVER %x(docker run --rm --detach \ --env USE_PUPPETSERVER=false \ --name puppetdb \ --hostname puppetdb \ --publish-all \ --link postgres \ #{@pdb_image}).chomp end def get_container_port(container, port) @mapped_ports["#{container}:#{port}"] ||= begin service_ip_port = %x(docker port #{container} #{port}/tcp).chomp uri = URI("http://#{service_ip_port}") uri.host = 'localhost' if uri.host == '0.0.0.0' STDOUT.puts "determined #{container} endpoint for port #{port}: #{uri}" uri end @mapped_ports["#{container}:#{port}"] end def get_puppetdb_state pdb_uri = URI::join(get_container_port(@pdb_container, 8080), '/status/v1/services/puppetdb-status') status = Net::HTTP.get_response(pdb_uri).body STDOUT.puts "retrieved raw puppetdb status: #{status}" return JSON.parse(status)['state'] unless status.empty? rescue STDOUT.puts "Failure querying #{pdb_uri}: #{$!}" return '' end def get_postgres_extensions extensions = %x(docker exec #{@postgres_container} psql --username=puppetdb --command="SELECT * FROM pg_extension").chomp STDOUT.puts("retrieved extensions: #{extensions}") extensions end def start_puppetdb status = get_puppetdb_state # since pdb doesn't have a proper healthcheck yet, this could spin forever # add a timeout so it eventually returns. 
# puppetdb entrypoint waits on a response from the master Timeout::timeout(240) do while status != 'running' sleep(1) status = get_puppetdb_state end end rescue Timeout::Error STDOUT.puts('puppetdb never entered running state') return '' else return status end before(:all) do @mapped_ports = {} @postgres_container = run_postgres_container @pdb_image = ENV['PUPPET_TEST_DOCKER_IMAGE'] if @pdb_image.nil? error_message = <<-MSG * * * * * PUPPET_TEST_DOCKER_IMAGE environment variable must be set so we know which image to test against! * * * * * MSG fail error_message end @pdb_container = run_puppetdb_container end after(:all) do [ @postgres_container, @pdb_container, ].each do |id| STDOUT.puts("Killing container #{id}") %x(docker container kill #{id}) end end it 'should have started postgres' do expect(@postgres_container).to_not be_empty end it 'should have installed postgres extensions' do installed_extensions = get_postgres_extensions expect(installed_extensions).to match(/^\s+pg_trgm\s+/) expect(installed_extensions).to match(/^\s+pgcrypto\s+/) end it 'should have started puppetdb' do expect(@pdb_container).to_not be_empty end it 'should have a "running" puppetdb container' do status = start_puppetdb expect(status).to eq('running') end end puppetdb-6.2.0/docker/puppetdb/ssl-setup.sh000075500000000000000000000331521342042221600207300ustar00rootroot00000000000000#!/bin/sh ssl_command="puppetdb ssl-setup" ############# # FUNCTIONS # ############# # Display usage information and exit # # This function simply displays the usage information to the screen and exits # with exit code 0. # # Example: # # usage # function usage() { echo "Usage: ${ssl_command} [-if]" echo "Configuration helper for enabling SSL for PuppetDB." echo echo "This tool will attempt to copy the necessary Puppet SSL PEM files into "\ "place for use by the PuppetDB HTTPS service. 
It also is able to update "\ "the necessary PuppetDB configuration files if necessary to point to "\ "the location of these files and also configures the host and port for "\ "SSL to listen on." echo echo "Options:" echo " -i Interactive mode" echo " -f Force configuration file update. By default if the configuration "\ "already exists in your jetty.ini or if your configuration is otherwise "\ "in a state we believe we shouldn't touch by default, you must use this "\ "option to override it" echo " -h Help" exit 0 } # Backs up a file, if it hasn't already been backed up # # $1 - file to backup # # Example: # # backupfile "/etc/myconfig" # function backupfile() { # Create the global array if it doesn't already exist if [ -z $backupfile_list ]; then # backupfile_list=() backupfile_list="" fi # We check the array to make sure the file isn't already backed up # if ! contains ${backupfile_list[@]} $1; then if ! contains $backupfile_list $1; then local backup_path="$1.bak.`date +%s`" echo "Backing up $1 to ${backup_path} before making changes" cp -p $1 $backup_path # Append to the array, so we don't need to back it up again later # backupfile_list+=($1) backupfile_list="$backupfile_list $1" fi } # This function searches for an element in an array returning 1 if it exists. # # $1 - array # $2 - item to search for # # Example: # # myarray=('element1', 'element2') # if contains ${myarray[@]}, "element1'; then # echo "element1 exists in the array" # fi # function contains() { # local n=$# # local value=${!n} # for ((i=1;i < $#;i++)); do # if [ "${!i}" == "${value}" ]; then # return 0 # fi # done # return 1 echo "$1" | grep -q "$2" } # This function wraps sed for a line focused search and replace. 
# # * Makes sure its atomic by writing to a temp file and moving it _after_ # * Escapes any forward slashes and ampersands on the RHS for you # # $1 - regexp to match # $2 - line to replace # $3 - file to operate on # # Example: # # replaceline "^$mysetting.*" "mysetting = myvalue" /etc/myconfig # function replaceline { backupfile $3 tmp=$3.tmp.`date +%s` sed "s/$1/$(echo $2 | sed -e 's/[\/&]/\\&/g')/g" $3 > $tmp mv $tmp $3 chmod 644 $3 } # This function comments out a line in a file, based on a regexp # # $1 = regexp to match # $2 = file to operate on # # Example: # # commentline "^$mysetting.*" /etc/myconfig # function commentline { backupfile $2 tmp=$2.tmp.`date +%s` sed "/$1/ s/^/# /" $2 > $tmp mv $tmp $2 } # This function appends a line to a file # # $1 - line to append # $2 - file to operate on # # Example: # # appendline "mysetting = myvalue" /etc/myconfig # function appendline { backupfile $2 tmp=$2.tmp.`date +%s` cat $2 > ${tmp} echo $1 >> ${tmp} mv ${tmp} $2 } # This function copies the necessary PEM files from Puppet to the PuppetDB # SSL directory. # # This expects various environment variables to have already been set to work. function copy_pem_from_puppet { # orig_files=($orig_ca_file $orig_private_file $orig_public_file) orig_files="$orig_ca_file $orig_private_file $orig_public_file" # for orig_file in "${orig_files[@]}"; do for orig_file in $orig_files; do if [ ! -e $orig_file ]; then echo "Warning: Unable to find all puppet certificates to copy" echo echo " This tool requires the following certificates to exist:" echo echo " * $orig_ca_file" echo " * $orig_private_file" echo " * $orig_public_file" echo echo " These files may be missing due to the fact that your host's Puppet" echo " certificates may not have been signed yet, probably due to the" echo " lack of a complete Puppet agent run. 
Try running puppet first, for" echo " example:" echo echo " puppet agent --test" echo echo " Afterwards re-run this tool then restart PuppetDB to complete the SSL" echo " setup:" echo echo " ${ssl_command} -f" exit 1 fi done rm -rf $ssl_dir mkdir -p $ssl_dir echo "Copying files: ${orig_ca_file}, ${orig_private_file} and ${orig_public_file} to ${ssl_dir}" cp -pr $orig_ca_file $ca_file cp -pr $orig_private_file $private_file cp -pr $orig_public_file $public_file } ######## # MAIN # ######## # Gather command line options while getopts "ifh" opt; do case $opt in i) interactive=true ;; f) force=true ;; h) usage ;; *) usage ;; esac done ${interactive:=false} ${force:=false} # Deal with interactive setups differently to non-interactive if $interactive then echo "interactive mode not yet implemented" exit 1 # dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" # cd $dir # answers_file="puppetdb-ssl-setup-answers.txt" # if [ -f "$answers_file" ] # then # echo "Reading answers file '$answers_file'" # . 
$answers_file # fi # vars=( agent_confdir agent_vardir puppetdb_confdir ) # prompts=( "Puppet Agent confdir" "Puppet Agent vardir" "PuppetDB confdir" ) # for (( i=0; i<${#vars[@]}; i++ )) # do # read -p "${prompts[$i]} [${!vars[$i]}]: " input # export ${vars[$i]}=${input:-${!vars[$i]}} # done # cat /dev/null > $answers_file # for (( i=0; i<${#vars[@]}; i++ )) # do # echo "${vars[$i]}=${!vars[$i]}" >> $answers_file # done else # This should be run on the host with PuppetDB PATH=/opt/puppetlabs/bin:/opt/puppet/bin:$PATH # agent_confdir=`puppet agent --configprint confdir` agent_confdir="/etc/puppetlabs/puppet" # agent_vardir=`puppet agent --configprint vardir` agent_vardir="/etc/puppetlabs/puppet/cache" # user=<%= EZBake::Config[:user] %> # group=<%= EZBake::Config[:group] %> user=$USER group=$GROUP puppetdb_confdir="/etc/puppetlabs/puppetdb" fi set -e # mycertname=`puppet agent --confdir=$agent_confdir --vardir=$agent_vardir --configprint certname` mycertname="${HOSTNAME}" # orig_public_file=`puppet agent --confdir=$agent_confdir --vardir=$agent_vardir --configprint hostcert` orig_public_file="/etc/puppetlabs/puppet/ssl/certs/${mycertname}.pem" # orig_private_file=`puppet agent --confdir=$agent_confdir --vardir=$agent_vardir --configprint hostprivkey` orig_private_file="/etc/puppetlabs/puppet/ssl/private_keys/${mycertname}.pem" # orig_ca_file=`puppet agent --confdir=$agent_confdir --vardir=$agent_vardir --configprint localcacert` orig_ca_file="/etc/puppetlabs/puppet/ssl/certs/ca.pem" ssl_dir=${puppetdb_confdir}/ssl pw_file=${ssl_dir}/puppetdb_keystore_pw.txt keystore_file=${ssl_dir}/keystore.jks truststore_file=${ssl_dir}/truststore.jks private_file=${ssl_dir}/private.pem public_file=${ssl_dir}/public.pem ca_file=${ssl_dir}/ca.pem jettyfile="${puppetdb_confdir}/conf.d/jetty.ini" # Scan through the old settings to see if any are still set, exiting and # prompting the user for the -f switch to force the tool to run anyway. if ! 
${force}; then # old_settings=('key-password' 'trust-password' 'keystore' 'truststore') # new_settings=('ssl-key' 'ssl-cert' 'ssl-ca-cert') old_settings="key-password trust-password keystore truststore" new_settings="ssl-key ssl-cert ssl-ca-cert" # for old_setting in "${old_settings[@]}"; do for old_setting in $old_settings; do if grep -qe "^${old_setting}" $jettyfile; then # If we see both old settings and new, it may point to a problem so alert # the user. # for new_setting in "${new_settings[@]}"; do for new_setting in $new_settings; do if grep -qe "^${new_setting}" $jettyfile; then echo "Error: Your Jetty configuration file contains legacy entry '${old_setting}' and a new entry '${new_setting}'" echo echo " By default PuppetDB uses the new settings over the old ones," echo " which indicates your setup is probably okay, but removing" echo " the old settings is recommended for clarity." echo echo " Use the following to ignore this error and force this tool to repair" echo " your setup anyway:" echo echo " ${ssl_command} -f" echo exit 1 fi done # Otherwise cowardly refuse to make a change without -f echo "Error: Your Jetty configuration file contains legacy entry '${old_setting}'" echo echo " PuppetDB now provides a PEM based mechanism for retrieving SSL" echo " related files as opposed to its legacy Java Keystore mechanism." echo echo " Your configuration indicates you may have a legacy keystore based setup," echo " and if we modify this on our own we may break things. Especially if" echo " there has been specialized setup in the past, for example" echo " the keystores may have been created without 'puppetdb ssl-setup'." echo echo " Your can however force this tool to overwrite your existing" echo " configuration with the newer PEM based configuration with:" echo echo " ${ssl_command} -f" echo exit 1 fi done fi # Deal with pem files if [ -f $ca_file -a -f $private_file -a -f $public_file ]; then echo "PEM files in ${ssl_dir} already exists, checking integrity." 
# filediffs=( # "${orig_ca_file}:${ca_file}" # "${orig_private_file}:${private_file}" # "${orig_public_file}:${public_file}" # ) filediffs=" ${orig_ca_file}:${ca_file} ${orig_private_file}:${private_file} ${orig_public_file}:${public_file} " # for i in "${filediffs[@]}"; do for i in $filediffs; do orig="${i%%:*}" new="${i#*:}" if ! diff -q $orig $new > /dev/null; then echo "Warning: ${new} does not match the file used by Puppet (${orig})" fi done if $force; then echo "Overwriting existing PEM files due to -f flag" copy_pem_from_puppet fi else echo "PEM files in ${ssl_dir} are missing, we will move them into place for you" copy_pem_from_puppet fi # Fix SSL permissions chmod 600 ${ssl_dir}/* chmod 700 ${ssl_dir} chown -R ${user}:${group} ${ssl_dir} if [ -f "$jettyfile" ] ; then # Check settings are correct and fix or warn # settings=( # "ssl-host:0.0.0.0" # "ssl-port:8081" # "ssl-key:${private_file}" # "ssl-cert:${public_file}" # "ssl-ca-cert:${ca_file}" # ) settings=" ssl-host:0.0.0.0 ssl-port:8081 ssl-key:${private_file} ssl-cert:${public_file} ssl-ca-cert:${ca_file} " # for i in "${settings[@]}"; do for i in $settings; do setting="${i%%:*}" value="${i#*:}" if grep -qe "^${setting}" ${jettyfile}; then if grep -qe "^${setting}[[:space:]]*=[[:space:]]*${value}$" ${jettyfile}; then echo "Setting ${setting} in ${jettyfile} already correct." else if $force; then replaceline "^${setting}.*" "${setting} = ${value}" ${jettyfile} echo "Updated setting ${setting} in ${jettyfile}." else echo "Warning: Setting ${setting} in ${jettyfile} should be ${value}. This can be remedied with ${ssl_command} -f." fi fi else if grep -qE "^# ${setting} = <[A-Za-z_]+>$" ${jettyfile}; then replaceline "^# ${setting}.*" "${setting} = ${value}" ${jettyfile} echo "Updated default settings from package installation for ${setting} in ${jettyfile}." else if $force; then echo "Appending setting ${setting} to ${jettyfile}." 
appendline "${setting} = ${value}" ${jettyfile} else echo "Warning: Could not find active ${setting} setting in ${jettyfile}. Include that setting yourself manually. Or force with ${ssl_command} -f." fi fi fi done # Check old settings are commented out, and fix or warn # settings=('keystore' 'truststore' 'key-password' 'trust-password') settings="keystore truststore key-password trust-password" # for setting in "${settings[@]}"; do for setting in $settings; do if grep -qe "^${setting}" ${jettyfile}; then if $force; then echo "Commenting out setting '${setting}'" commentline "^${setting}" ${jettyfile} else echo "Warning: The setting '${setting}' is not commented out in ${jettyfile}. Allow us to comment it out for you with ${ssl_command} -f." fi fi done else echo "Error: Unable to find PuppetDB Jetty configuration at ${jettyfile} so unable to provide automatic configuration for that file." echo echo " Confirm the file exists in the path specified before running the" echo " tool again. The file should have been created automatically when" echo " the package was installed." fi if $interactive then echo "Certificate generation complete. 
You will need to make sure that the puppetdb.conf" echo "file on your puppet master looks like this:" echo echo " [main]" echo " server = ${mycertname}" echo " port = 8081" echo echo "And that the config.ini (or other *.ini) on your puppetdb system contains the" echo "following:" echo echo " [jetty]" echo " #host = localhost" echo " port = 8080" echo " ssl-host = 0.0.0.0" echo " ssl-port = 8081" echo " ssl-key = ${private_file}" echo " ssl-cert = ${public_file}" echo " ssl-ca-cert = ${ca_file}" fi puppetdb-6.2.0/documentation/000075500000000000000000000000001342042221600162055ustar00rootroot00000000000000puppetdb-6.2.0/documentation/CONTRIBUTING.md000064400000000000000000000244151342042221600204440ustar00rootroot00000000000000--- title: "PuppetDB: Contributing to PuppetDB" layout: default --- [configure_postgres]: ./configure.markdown#using-postgresql # How to contribute Third-party patches are essential for keeping puppet great. We simply can't access the huge number of platforms and myriad configurations for running puppet. We want to keep it as easy as possible to contribute changes that get things working in your environment. There are a few guidelines that we need contributors to follow so that we can have a chance of keeping on top of things. ## Getting Started * Make sure you have a [Jira account](http://tickets.puppetlabs.com) * Make sure you have a [GitHub account](https://github.com/signup/free) * Submit a ticket for your issue, assuming one does not already exist. * Clearly describe the issue including steps to reproduce when it is a bug. * Make sure you fill in the earliest version that you know has the issue. * Fork the repository on GitHub ## Making Changes * Create a topic branch from where you want to base your work. * This is usually the master branch. * Only target release branches if you are certain your fix must be on that branch. * To quickly create a topic branch based on master; `git checkout -b fix/master/my_contribution master`. 
Please avoid working directly on the `master` branch. * Make commits of logical units. * Check for unnecessary whitespace with `git diff --check` before committing. * Make sure your commit messages are in the proper format. ``` (PUP-1234) Make the example in CONTRIBUTING imperative and concrete Without this patch applied the example commit message in the CONTRIBUTING document is not a concrete example. This is a problem because the contributor is left to imagine what the commit message should look like based on a description rather than an example. This patch fixes the problem by making the example concrete and imperative. The first line is a real life imperative statement with a ticket number from our issue tracker. The body describes the behavior without the patch, why this is a problem, and how the patch fixes the problem when applied. ``` * Make sure you have added the necessary tests for your changes. * Run _all_ the tests to assure nothing else was accidentally broken. ### Testing Before you do anything else, you may want to consider setting `PUPPET_SUPPRESS_INTERNAL_LEIN_REPOS=1` in your environment. We'll eventually make that the default, but for now that setting may help avoid delays incurred if lein tries to reach unreachable internal repositories. The easiest way to run the tests until you need to do it often is to use the built-in sandbox harness. If you just want to check some changes against "all the normal tests", this should work (assuming you're not running a server on port 34335): $ ext/bin/test-config --set pgport 34335 $ ext/bin/test-config --reset puppet-ref $ ext/bin/test-config --reset puppetserver-ref $ ext/bin/run-normal-tests This will run the core, integration, and external tests, and in some cases may be all that you need, but in many cases, you may want to be able to run tests more selectively as detailed below. 
Copies of tools like `lein` and `pgbox` may be downloaded and installed to a temporary directory during the process, if you don't already have the expected versions. When using the sandbox, you need to either specify the PostgreSQL port it should use by providing a `--pgport PORT` argument to each relevant test invocation, or you can set a default (as above) for the source tree: $ ./ext/bin/test-config --set pgport 34335 Once you've set the default pgport, you should be able to run the core tests like this: $ ext/bin/boxed-core-tests -- lein test Similarly you should be able to configure and run the integration tests against the default Puppet and Puppetserver versions like this: $ ext/bin/test-config --reset puppet-ref $ ext/bin/test-config --reset puppetserver-ref $ ext/bin/boxed-integration-tests \ -- lein test :integration Note that you only need to configure the puppet-ref and puppetserver-ref once for each tree, but you can also change the refs when you like with `test-config`: $ ext/bin/test-config --set puppet-ref 5.5.x $ ext/bin/test-config --set puppetserver-ref 5.2.x Running `--reset` for an option resets it to the tree default, and at the moment, you'll need to do that manually whenever you're using the default and the relevant `*-default` file in ext/test-conf changes in the source. The sandboxes are destroyed when the commands finish, but you can arrange to inspect the environment after a failure like this: $ ext/bin/boxed-integration-tests \ -- bash -c 'lein test || bash' which will drop you into a shell if anything goes wrong. To run the local rspec tests (e.g. 
for the PuppetDB terminus code), you must have configured the `puppet-ref` via `ext/bin/test-config` as described above, and then from within the `puppet/` directory you can run: $ bundle exec rspec spec If you'd like to preserve the temporary test databases on failure, you can set `PDB_TEST_PRESERVE_DB_ON_FAIL` to true: $ PDB_TEST_KEEP_DB_ON_FAIL=true lein test The sandboxed tests will try to find and use the version of PostgreSQL specified by: $ ./ext/bin/test-config --get pgver Unless you override that with `test-config`: $ ext/bin/test-config --set pgver 9.6 Given just the version, the tests will try to find a suitable PostgreSQL installation, but you can specify one directly like this: $ ext/bin/test-config --set pgbin /usr/lib/postgresql/9.6/bin at which point the pgver setting will be irrelevant until/unless you reset pgbin: $ ext/bin/test-config --reset pgbin If you're running the tests all the time, you might want to set up your own persistent sandbox instead (`ext/bin/with-pdbbox` does something similar) so you can run tests directly against that: $ ext/bin/pdbbox-init \ --sandbox ./test-sandbox \ --pgbin /usr/lib/postgresql-9.6/bin \ --pgport 17961 Then you can start and stop the included database server like this: $ export PDBBOX="$(pwd)/test-sandbox" $ ext/bin/pdbbox-env pg_ctl start -w $ ext/bin/pdbbox-env pg_ctl stop and when the database server is running you can run the tests like this: $ export PDBBOX="$(pwd)/test-sandbox" $ ext/bin/pdbbox-env lein test Note that in cases where durability and realistic performance aren't important (say for routine `lein test` runs), you may see substantially better performance if you disable postgres' fsync calls with `-F` like this: $ ext/bin/pdbbox-env pg_ctl start -o -F -w Before you can run the integration tests directly, you'll need to configure the puppet and puppetserver versions you want to use. 
Assuming you have suitable versions of Ruby and Bundler available, you can do this: $ ext/bin/test-config --reset puppet-ref $ ext/bin/test-config --reset puppetserver-ref The default puppet and puppetserver versions are recorded in `ext/test-conf/`. You can request specific versions of puppet or puppetserver like this: $ ext/bin/test-config --set puppet-ref 5.3.x $ ext/bin/test-config --set puppetserver-ref 5.3.x Run the tools again to change the requested versions, and `lein distclean` will completely undo the configurations. After configuration you should be able to run the tests by specifying the `:integration` selector: $ export PDBBOX="$(pwd)/test-sandbox" $ ext/bin/pdbbox-env lein test :integration You can also run puppetdb itself with the config file included in the sandbox: $ export PDBBOX="$(pwd)/test-sandbox" $ ext/bin/pdbbox-env lein run services \ -c test-sandbox/pdb.ini And finally, you can of course set up and [configure your own PostgreSQL server][configure_postgres] for testing, but then you'll need to create the test users: $ createuser -DRSP pdb_test $ createuser -dRsP pdb_test_admin and do the other things that `pdbbox-init` normally handles, like setting environment variables if the default values aren't appropriate, etc.: * `PDB_TEST_DB_HOST` (defaults to localhost) * `PDB_TEST_DB_PORT` (defaults to 5432) * `PDB_TEST_DB_USER` (defaults to `pdb_test`) * `PDB_TEST_DB_PASSWORD` (defaults to `pdb_test`) * `PDB_TEST_DB_ADMIN` (defaults to `pdb_test_admin`) * `PDB_TEST_DB_ADMIN_PASSWORD` (defaults to `pdb_test_admin`) ### Cleaning up Running `lein clean` will clean up the relevant items related to Clojure, but won't affect some other things, including the integration test configuration. To clean up "everything", run `lein distclean`. ## Making Trivial Changes ### Documentation For changes of a trivial nature to comments and documentation, it is not always necessary to create a new ticket in Jira. 
In this case, it is appropriate to start the first line of a commit with
'(doc)' instead of a ticket number.

```
(doc) Add documentation commit example to CONTRIBUTING

There is no example for contributing a documentation commit
to the Puppet repository. This is a problem because the contributor
is left to assume how a commit of this nature may appear.

The first line is a real life imperative statement with '(doc)' in
place of what would have been the ticket number in a non-documentation
related commit. The body describes the nature of the new documentation
or comments added.
```

## Submitting Changes

* Sign the [Contributor License Agreement](http://links.puppetlabs.com/cla).
* Push your changes to a topic branch in your fork of the repository.
* Submit a pull request to the repository in the puppetlabs organization.
* Update your Jira ticket to mark that you have submitted code and are ready
  for it to be reviewed (Status: Ready for Merge).
  * Include a link to the pull request in the ticket.
* After feedback has been given, we expect responses within two weeks. After
  two weeks we may close the pull request if it isn't showing any activity.

# Additional Resources

* [Puppet community guidelines](https://docs.puppet.com/community/community_guidelines.html)
* [Bug tracker (Jira)](http://tickets.puppetlabs.com)
* [Contributor License Agreement](http://links.puppetlabs.com/cla)
* [General GitHub documentation](http://help.github.com/)
* [GitHub pull request documentation](http://help.github.com/send-pull-requests/)
* #puppet-dev IRC channel on freenode.org

puppetdb-6.2.0/documentation/_puppetdb_nav.html

PuppetDB {{ page.doc.my_versions.puppetdb }}

{% md %}
* **General information**
  * [Overview and requirements]({{puppetdb}}/index.html)
  * [Contributing to PuppetDB]({{puppetdb}}/CONTRIBUTING.html)
  * [Frequently asked questions]({{puppetdb}}/puppetdb-faq.html)
  * [Release notes]({{puppetdb}}/release_notes.html)
  * [Versioning policy]({{puppetdb}}/versioning_policy.html)
  * [Known issues]({{puppetdb}}/known_issues.html)
  * [Community add-ons]({{puppetdb}}/community_add_ons.html)
* **Installation**
  * [Installing via Puppet module]({{puppetdb}}/install_via_module.html)
  * [Installing from packages]({{puppetdb}}/install_from_packages.html)
  * [Installing from source]({{puppetdb}}/install_from_source.html)
  * [Upgrading PuppetDB]({{puppetdb}}/upgrade.html)
  * [Connecting Puppet masters]({{puppetdb}}/connect_puppet_master.html)
  * [Connecting standalone Puppet nodes]({{puppetdb}}/connect_puppet_apply.html)
* **Configuration**
  * [Configuring PuppetDB]({{puppetdb}}/configure.html)
  * [puppetdb.conf: Configuring a Puppet/PuppetDB connection]({{puppetdb}}/puppetdb_connection.html)
  * [Setting up SSL for PostgreSQL]({{puppetdb}}/postgres_ssl.html)
* **Usage/admin**
  * [Using PuppetDB]({{puppetdb}}/using.html)
  * [Maintaining and tuning]({{puppetdb}}/maintain_and_tune.html)
  * [PuppetDB CLI]({{puppetdb}}/pdb_client_tools.html)
  * [Exporting and anonymizing data]({{puppetdb}}/anonymization.html)
  * [Scaling recommendations]({{puppetdb}}/scaling_recommendations.html)
  * [Debugging with remote REPL]({{puppetdb}}/repl.html)
  * [Load testing]({{puppetdb}}/load_testing_tool.html)
* **Troubleshooting**
  * [General Support Guide]({{puppetdb}}/pdb_support_guide.html)
  * [Session logging]({{puppetdb}}/trouble_session_logging.html)
* **PQL - Puppet Query Language**
  * [Tutorial]({{puppetdb}}/api/query/tutorial-pql.html)
  * [Reference guide]({{puppetdb}}/api/query/v4/pql.html)
  * [Examples]({{puppetdb}}/api/query/examples-pql.html)
* **API**
  * [Overview]({{puppetdb}}/api/index.html)
  * [Query tutorial]({{puppetdb}}/api/query/tutorial.html)
  * [Curl tips]({{puppetdb}}/api/query/curl.html)
* **Query API version 4**
  * [Upgrading from version 3]({{puppetdb}}/api/query/v4/upgrading-from-v3.html)
  * [Query structure]({{puppetdb}}/api/query/v4/query.html)
  * [Entities]({{puppetdb}}/api/query/v4/entities.html)
  * [AST query language]({{puppetdb}}/api/query/v4/ast.html)
  * [Query paging]({{puppetdb}}/api/query/v4/paging.html)
  * [Root endpoint]({{puppetdb}}/api/query/v4/index.html)
  * [Nodes endpoint]({{puppetdb}}/api/query/v4/nodes.html)
  * [Environments endpoint]({{puppetdb}}/api/query/v4/environments.html)
  * [Producers endpoint]({{puppetdb}}/api/query/v4/producers.html)
  * [Factsets endpoint]({{puppetdb}}/api/query/v4/factsets.html)
  * [Facts endpoint]({{puppetdb}}/api/query/v4/facts.html)
  * [Fact-names endpoint]({{puppetdb}}/api/query/v4/fact-names.html)
  * [Fact-paths endpoint]({{puppetdb}}/api/query/v4/fact-paths.html)
  * [Fact-contents endpoint]({{puppetdb}}/api/query/v4/fact-contents.html)
  * [Inventory endpoint]({{puppetdb}}/api/query/v4/inventory.html)
  * [Catalogs endpoint]({{puppetdb}}/api/query/v4/catalogs.html)
  * [Resources endpoint]({{puppetdb}}/api/query/v4/resources.html)
  * [Edges endpoint]({{puppetdb}}/api/query/v4/edges.html)
  * [Reports endpoint]({{puppetdb}}/api/query/v4/reports.html)
  * [Events endpoint]({{puppetdb}}/api/query/v4/events.html)
  * [Event counts endpoint]({{puppetdb}}/api/query/v4/event-counts.html)
  * [Aggregate event counts endpoint]({{puppetdb}}/api/query/v4/aggregate-event-counts.html)
  * [Package endpoints]({{puppetdb}}/api/query/v4/packages.html)
* **Extensions API version 1 (PE-only)**
  * [State overview endpoint]({{puppetdb}}/api/ext/v1/state-overview.html)
* **Admin API version 1**
  * [Archive endpoint]({{puppetdb}}/api/admin/v1/archive.html)
  * [Command (cmd) endpoint]({{puppetdb}}/api/admin/v1/cmd.html)
  * [Summary stats endpoint]({{puppetdb}}/api/admin/v1/summary-stats.html)
* **Command API version 1**
  * [Commands endpoint]({{puppetdb}}/api/command/v1/commands.html)
* **Status API version 1**
  * [Status endpoint]({{puppetdb}}/api/status/v1/status.html)
* **Metadata API version 1**
  * [Version endpoint]({{puppetdb}}/api/meta/v1/version.html)
  * [Server time endpoint]({{puppetdb}}/api/meta/v1/server-time.html)
* **Metrics API version 1**
  * [Metrics endpoint]({{puppetdb}}/api/metrics/v1/mbeans.html)
  * [Changes from PuppetDB version 3]({{puppetdb}}/api/metrics/v1/changes-from-puppetdb-v3.html)
* **Wire formats**
  * [Catalog wire format - v9]({{puppetdb}}/api/wire_format/catalog_format_v9.html)
  * [Facts wire format - v5]({{puppetdb}}/api/wire_format/facts_format_v5.html)
  * [Report wire format - v8]({{puppetdb}}/api/wire_format/report_format_v8.html)
  * [Deactivate node wire format - v3]({{puppetdb}}/api/wire_format/deactivate_node_format_v3.html)
{% endmd %}

puppetdb-6.2.0/documentation/acceptance_tests.markdown

Acceptance tests
----------------

PuppetDB uses the [Beaker](https://github.com/puppetlabs/beaker) acceptance
testing framework. We run acceptance tests on a matrix of machine and database
configurations before we merge new code into our stable or master branches,
but it can be useful for a variety of reasons to run them yourself.

The current recommended way of running acceptance tests is via EC2. Other
methods should be possible, as Beaker supports a wide variety of hypervisors.

EC2 setup
---------

* Create ~/.fog with these contents. Note that 'aws access key id' is the
  thing otherwise called an AWS access key. It's not your user id.

        :default:
          :aws_access_key_id:
          :aws_secret_access_key:

* The included configuration files in `acceptance/config` refer to resources
  (security groups, VPCs, etc) that exist in the Puppet AWS account. If you're
  using your own AWS account, you'll have to create the appropriate resources
  and modify the appropriate configuration file to refer to them.
Running the tests
-----------------

* If you previously munged your host file as described below, first remove the
  IP address entry.

* Do a normal acceptance test run the first time, so you get a fully
  provisioned VM to work with.

        rake "beaker:first_run[acceptance/tests/some/test.rb]"

* For now, you have to modify the host file you're using to include the IP
  addresses for the VMs that were provisioned the first time. You should be
  able to find these near the top of the console output. You could also ask
  the hypervisor for help:

        rake beaker:list_vms

  Take the IP address of the VM that was created for you and put it in the
  appropriate beaker hosts file. If you didn't specify one, the default is
  `acceptance/config/ec2-west-dev.cfg`. It has a commented-out IP address
  field; you should be able to uncomment it and put in the IP of your running
  VM.

* For subsequent runs, you can reuse the already-provisioned VM for a quicker
  turnaround time.

        rake "beaker:rerun[acceptance/tests/some/test.rb]"

<!-- puppetdb-6.2.0/documentation/anonymization.markdown -->

---
title: "PuppetDB: Export, import and anonymization"
layout: default
---

This document covers using the export, import and anonymization tools for
PuppetDB. The export tool will return an archive of all of your PuppetDB data,
which can be uploaded to another PuppetDB via the import tool. The export tool
also has the ability to anonymize the archive before returning it. This is
particularly useful when sharing PuppetDB data that contains sensitive items.
Using the `export` command
-----

To create an anonymized PuppetDB archive directly, use the Puppet `db`
subcommand from any node with puppet-client-tools installed:

    $ puppet db export my-puppetdb-export.tar.gz --anonymization moderate

Using the `import` command
-----

To import an anonymized PuppetDB tarball, use the Puppet `db` subcommand from
any node with puppet-client-tools installed:

    $ puppet db import my-puppetdb-export.tar.gz

How does it work?
-----

The tool walks through your entire data set, applying different rules to each
of the leaf data based on the profile you have chosen. The data structure is
left intact, and only the data contents are modified. This maintains the
"shape" of the data without exposing the underlying data you may wish to
scrub.

We do this by always ensuring we replace data consistently. For example, if a
string is replaced with something random, we ensure that all instances of that
original string are replaced with the same random string throughout the data.
By keeping its original shape, the data can be anonymized based on your needs
but still hold some value to the consumer of your anonymized data.

Anonymization profiles
-----

You may not need to anonymize all data in every case, so we have provided a
number of profiles offering varying levels of anonymization. The profile can
be specified on the command line when the command is run. For example, to
choose the `low` profile, enter:

    $ puppet db export ./my-puppetdb-anonymized-export.tar.gz --anonymization low

### Profile: full

The `full` profile will anonymize all data (including node names, resource
types, resource titles, parameter names, values, any log messages, file names,
and file lines) while retaining the data set's shape. The result is a
completely anonymized data set. Report metrics under the `resources` and
`events` categories are left intact, as these can be inferred from the rest of
the data, but names of metrics under the `time` category are anonymized as
resource types.
This is useful if you are really concerned about limiting the data you expose,
but it provides the least utility for the consumer, depending on the activity
they are trying to test.

### Profile: moderate

The `moderate` profile attempts to be a bit smarter about what it anonymizes
and is **the recommended profile for most cases**. It sorts and anonymizes
data by data type:

* Node name: is anonymized by default.
* Resource type name: the core types that are built into Puppet are not
  anonymized, nor are some common types from the modules `stdlib`,
  `postgresql`, `rabbitmq`, `puppetdb`, `apache`, `mrep`, `f5`, `apt`,
  `registry`, `concat`, and `mysql`. Any Puppet Enterprise core type names are
  also preserved. The goal here is to anonymize any custom or unknown resource
  type names, as these may contain confidential information.
* Resource titles: all titles are anonymized except for those belonging to
  Filebucket, Package, Service, and Stage.
* Parameter names: are never anonymized.
* Parameter values: everything is anonymized except for the values for
  `provider`, `ensure`, `noop`, `loglevel`, `audit`, and `schedule`.
* Report log messages: are always anonymized.
* File names: are always anonymized.
* File numbers: are left as they are.
* Log messages: are always anonymized.
* Metrics: metric names in the `time` category are anonymized as resource
  types.

### Profile: low

This profile is aimed at hiding security information specifically, while
leaving most of the data in its original state. The following categories are
anonymized:

* Node name: is always anonymized.
* Parameter values: only values and messages for parameter names containing
  the strings `password`, `pwd`, `secret`, `key`, or `private` are anonymized.
* Log messages: are always anonymized.
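To compare the trade-offs between profiles, the same `puppet db export` invocation shown above can be repeated once per level. This is a sketch, assuming a node with puppet-client-tools and a reachable PuppetDB; the archive file names are arbitrary examples:

```shell
# Produce one archive per anonymization profile for side-by-side comparison
puppet db export ./export-low.tar.gz      --anonymization low
puppet db export ./export-moderate.tar.gz --anonymization moderate
puppet db export ./export-full.tar.gz     --anonymization full
```

Each archive can then be extracted and inspected as described under "Verifying your anonymized data" below.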
Verifying your anonymized data
-----

After anonymizing data with the `puppetdb export` tool, we **strongly
recommend** that you analyze the anonymized data before sharing it with
another party to ensure that all sensitive data has been scrubbed. Simply
untar the export file and analyze the contents:

    $ tar -xzf my-puppetdb-anonymized-export.tar.gz
    $ cd puppetdb-bak

Inside this directory there is a directory for each content type (reports,
catalogs, and facts), and each file inside represents a node (and a report
instance for reports). The data is represented as human-readable JSON. You can
open these files and use tools such as `grep` to check the status of specific
information you wish to anonymize.

<!-- puppetdb-6.2.0/documentation/api/admin/v1/archive.markdown -->

---
title: "PuppetDB: Archive endpoint"
layout: default
canonical: "/puppetdb/latest/api/admin/v1/archive.html"
---

[curl]: ../../query/curl.html#using-curl-from-localhost-non-sslhttp

The `/archive` endpoint can be used for importing and exporting PuppetDB
archives.

## `POST /pdb/admin/v1/archive`

This endpoint can be used for streaming a PuppetDB archive into PuppetDB.

### Request format

The request should be a multipart POST and have `Content-Type:
multipart/mixed`.

### URL parameters

* `archive`: required. The archive file to import to the PuppetDB. This
  archive must have a file called `puppetdb-bak/metadata.json` as the first
  entry in the tarfile with a key `command_versions` which is a JSON object
  mapping PuppetDB command names to their version.
### Response format

The response will be in `application/json`, and will return a JSON map upon
successful completion of the importation:

    {"ok": true}

### Example

[Using `curl` from localhost][curl]:

    curl -X POST http://localhost:8080/pdb/admin/v1/archive \
        -F "archive=@example_backup_archive.tgz"

    {"ok": true}

## `GET /pdb/admin/v1/archive`

This endpoint can be used to stream a tarred, gzipped backup archive of
PuppetDB to your local machine.

### URL parameters

* `anonymization_profile`: optional. The level of anonymization applied to the
  archive files.

### Response format

The response will be an `application/octet-stream`, and will return a `tar.gz`
archive.

### Example

[Using `curl` from localhost][curl]:

    curl -X GET http://localhost:8080/pdb/admin/v1/archive -o puppetdb-export.tgz

<!-- puppetdb-6.2.0/documentation/api/admin/v1/cmd.markdown -->

---
title: "PuppetDB: Admin commands (cmd) endpoint"
layout: default
canonical: "/puppetdb/latest/api/admin/v1/cmd.html"
---

[curl]: ../../query/curl.html#using-curl-from-localhost-non-sslhttp
[config-purge-limit]: ../../../configure.markdown#node-purge-gc-batch-limit

The `/cmd` endpoint can be used to trigger PuppetDB maintenance operations.
Only one maintenance operation can be running at a time. Any request received
while an operation is already in progress will return an HTTP conflict status
(409).

## `POST /pdb/admin/v1/cmd`

The maintenance operations must be triggered by a POST.

### Request format

The POST request should specify `Content-Type: application/json` and the
request body should look like this:

``` json
{"command": "clean", "version": 1, "payload": [REQUESTED_OPERATION, ...]}
```

where valid `REQUESTED_OPERATION`s are `"expire_nodes"`, `"purge_nodes"`,
`"purge_reports"`, `"gc_packages"`, and `"other"`.
In addition, a `purge_nodes` operation can be structured like this to specify
a `batch_limit`:

``` json
["purge_nodes", {"batch_limit" : 50}]
```

When specified, the `batch_limit` restricts the maximum number of nodes purged
to the value specified; if not specified, the limit will be the
[`node-purge-gc-batch-limit`][config-purge-limit].

An empty payload vector requests all maintenance operations.

### URL parameters

* The POST endpoint accepts no URL parameters.

### Response format

The response type will be `application/json`, and upon success will include
this JSON map:

``` json
{"ok": true}
```

If any other maintenance operation is already in progress, the HTTP response
status will be 409 (conflict), the response will include a map like this

``` json
{"kind": "conflict",
 "msg": "Another cleanup is already in progress",
 "details": null}
```

and no additional maintenance will be performed. The `msg` and `details` may
or may not vary, but the `kind` will always be "conflict".

### Example

[Using `curl` from localhost][curl]:

``` sh
$ curl -X POST http://localhost:8080/pdb/admin/v1/cmd \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{"command": "clean", "version": 1, "payload": ["expire_nodes", "purge_nodes"]}'
{"ok": true}
```

<!-- puppetdb-6.2.0/documentation/api/admin/v1/summary-stats.markdown -->

---
title: "PuppetDB: Summary-stats endpoint"
layout: default
---

> **Experimental Endpoint**: The summary-stats endpoint is designated
> as experimental. It may be altered or removed in a future release.

> **Warning**: This endpoint will execute a number of relatively expensive SQL
> commands against your database. It will not meaningfully impede performance
> of a running PDB instance, but the request may take several minutes to
> complete.

## `/pdb/admin/v1/summary-stats`

The `/summary-stats` endpoint is used to generate information about the way
your PDB installation is using postgres.
Its intended purpose at this time is to aid in diagnosis of support issues,
though users may find it independently useful.

### Response format

The response is a JSON map containing the following keys:

* `table_usage` (json): Postgres statistics related to table usage. Equivalent
  to the postgres query

        select * from pg_stat_user_tables;

* `index_usage` (json): Postgres statistics related to index usage. Equivalent
  to the postgres query

        select * from pg_stat_user_indexes;

* `database_usage` (json): Postgres statistics related to usage of the
  puppetdb database. Equivalent to the postgres query

        select now() as current_time, * from pg_stat_database where datname=current_database();

* `node_activity` (json): Counts of active and inactive nodes.
* `fact_path_counts_by_depth` (json): Number of fact paths for each fact path
  depth represented in the database.
* `num_shared_value_path_combos` (json): The number of fact value/fact path
  combinations that are shared across multiple nodes.
* `num_shared_name_value_combos` (json): The number of fact name/fact value
  combinations that are shared across multiple nodes.
* `num_unshared_value_path_combos` (json): The number of fact value/fact path
  combinations that are only present on one node.
* `num_unshared_name_value_combos` (json): The number of fact name/fact value
  combinations that are only present on one node.
* `num_times_paths_values_shared_given_sharing` (json): Across fact path/fact
  value combinations shared across multiple nodes, the 0th through 20th
  20-quantiles of the number of nodes sharing.
* `num_unique_fact_values_over_nodes` (json): Across all nodes, the 0th
  through 20th 20-quantiles of the number of unique fact values.
* `string_fact_value_character_lengths` (json): 0th through 20th 20-quantiles
  of the character length of string-valued facts.
* `structured_fact_value_character_lengths` (json): 0th through 20th
  20-quantiles of the character length of structured facts.
* `report_metric_size_dist` (json): 0th through 20th 20-quantiles of the
  character lengths of report metrics.
* `report_log_size_dist` (json): 0th through 20th 20-quantiles of the
  character lengths of report logs.
* `fact_values_by_type` (json): Number of fact values of each type present.
* `num_associated_factsets_over_fact_paths` (json): 0th through 20th
  20-quantiles of the number of nodes sharing a given fact.
* `num_resources_per_node` (json): 0th through 20th 20-quantiles over the
  number of resources per node.
* `num_resources_per_file` (json): 0th through 20th 20-quantiles over the
  number of resources per manifest.
* `num_distinct_edges_source_target` (json): Number of distinct source/target
  vertex pairs in the resource graph.
* `file_resource_per_catalog` (json): 0th through 20th 20-quantiles of the
  number of resources of type File over the most recent catalogs for all
  nodes.
* `file_resources_per_catalog_with_source` (json): 0th through 20th
  20-quantiles of the number of file resources with 'source' parameters across
  the most recent catalogs for all nodes.

### Parameters

This endpoint supports no parameters.
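Since the endpoint takes no parameters, a request is a plain GET. A sketch using `curl`, assuming the default unencrypted localhost port 8080 used by the other examples in these docs:

```shell
# Request summary stats; note this may take several minutes on large databases
curl -X GET http://localhost:8080/pdb/admin/v1/summary-stats
```

The response is the single JSON map described above.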
<!-- puppetdb-6.2.0/documentation/api/command/v1/commands.markdown -->

---
title: "PuppetDB: Commands endpoint"
layout: default
canonical: "/puppetdb/latest/api/command/v1/commands.html"
---

[factsv4]: ../../wire_format/facts_format_v4.html
[factsv5]: ../../wire_format/facts_format_v5.html
[catalogv6]: ../../wire_format/catalog_format_v6.html
[catalogv7]: ../../wire_format/catalog_format_v7.html
[catalogv8]: ../../wire_format/catalog_format_v8.html
[catalogv9]: ../../wire_format/catalog_format_v9.html
[reportv5]: ../../wire_format/report_format_v5.html
[reportv6]: ../../wire_format/report_format_v6.html
[reportv7]: ../../wire_format/report_format_v7.html
[reportv8]: ../../wire_format/report_format_v8.html
[deactivatev3]: ../../wire_format/deactivate_node_format_v3.html

Commands are used to change PuppetDB's model of a population. Commands are
represented by `command objects`, which have the following JSON wire format:

    {"command": "...", "version": 123, "payload": <json object>}

`command` is a string identifying the command. `version` is a JSON integer
describing what version of the given command you're attempting to invoke. The
version of the command also indicates the version of the wire format to use
for the command. `payload` must be a valid JSON object of any sort. It's up to
an individual handler function to determine how to interpret that object.

The entire command **must** be encoded as UTF-8.
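As a sketch of preparing a command for submission: the optional `checksum` parameter described under command submission below is a SHA-1 hash of the payload. It can be computed from the shell; the payload value here is a hypothetical example, not a real node's data:

```shell
# Hypothetical example payload; when the checksum parameter is supplied,
# it must be the SHA-1 hash of the payload, used for content verification
PAYLOAD='{"certname":"test1","producer_timestamp":"2015-01-01"}'
CHECKSUM=$(printf '%s' "$PAYLOAD" | sha1sum | awk '{print $1}')
echo "$CHECKSUM"
```

The resulting value could then be appended to the submission URL, e.g. `...&checksum=$CHECKSUM`.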
## Command submission

Commands must be submitted via HTTP to the `/pdb/cmd/v1` endpoint via one of
two mechanisms:

* Payload and parameters: This method entails POSTing the certname, command
  name, command version, and optionally the checksum as parameters, with the
  POST body containing the given command's wire format. This mechanism allows
  PuppetDB to provide better validation and feedback at the time of POSTing
  without inspecting the command payload itself, and should be preferred over
  the alternative due to lower memory consumption.

* Payload only (deprecated): This method entails POSTing a single JSON body
  containing the certname, command name, and command version alongside a
  `payload` key valued with the given command's wire format. The checksum is
  optionally provided as a parameter.

In either case, the checksum should contain a SHA-1 hash of the payload, which
will be used for content verification with the server.

When a command is successfully submitted, the submitter will receive the
following:

* A response code of 200.
* A content-type of `application/json`.
* A response body in the form of a JSON object, containing a single key,
  'uuid', whose value is a UUID corresponding to the submitted command. This
  can be used, for example, by clients to correlate submitted commands with
  server-side logs.

The PuppetDB termini for Puppet masters use this command API to update facts,
catalogs, and reports for nodes, and will always include the checksum.

### Blocking command submission

> **Experimental feature:** This is an experimental feature, and it may be
> changed or removed at any time. Although convenient, it should be used with
> caution. Always prefer non-blocking command submission.

When submitting a command, you may specify the "secondsToWaitForCompletion"
query parameter. If you do, PuppetDB will block the request until the command
has been processed, or until the specified timeout has passed, whichever comes
first.
The response will contain the following additional keys:

* `timed_out`: true when your timeout was hit before the command finished
  processing.
* `processed`: true when the command has been processed, successfully or not.
  Will be set to false if the timeout was hit first.
* `error`, `exception`: If the command was processed but an error occurred,
  these two fields provide the specifics of what went wrong.

## Command semantics

Commands are processed _asynchronously_. If PuppetDB returns a 200 when you
submit a command, that only indicates that the command has been _accepted_ for
processing. There are no guarantees as to when that command will be processed,
nor that when it is processed it will be successful.

Commands that fail processing will be stored in files in the "dead letter
office", located under the MQ data directory in `discarded/`. These files
contain the command and diagnostic information that may be used to determine
why the command failed to be processed.

## List of commands

### "replace catalog", version 9

* The nullable `producer` property has been added.

The payload is expected to be a Puppet catalog, as a JSON object, conforming
exactly to the [catalog wire format v9][catalogv9]. Extra or missing fields
are an error.

### "replace facts", version 5

* The nullable `producer` property has been added.

See [fact wire format v5][factsv5] for more information on the payload of
this command.

### "deactivate node", version 3

* Previous versions of deactivate node required only the certname, as a raw
  JSON string. It is now formatted as a JSON map, and the `producer_timestamp`
  property has been added.

See [deactivate node wire format v3][deactivatev3] for more information on the
payload of this command.

### "store report", version 8

* The nullable `producer`, `noop_pending`, and `corrective_change` fields have
  been added.

The payload is expected to be a report, containing events that occurred on
Puppet resources.
It is structured as a JSON object, conforming to the
[report wire format v8][reportv8].

## Deprecated commands

### "replace catalog", version 8

* The nullable `catalog_uuid` property has been added.

The payload is expected to be a Puppet catalog, as a JSON object, conforming
exactly to the [catalog wire format v8][catalogv8]. Extra or missing fields
are an error.

### "replace catalog", version 7

* The nullable `code_id` property has been added.

The payload is expected to be a Puppet catalog, as a JSON object, conforming
exactly to the [catalog wire format v7][catalogv7]. Extra or missing fields
are an error.

### "replace catalog", version 6

* All field names that were previously separated by dashes are now separated
  by underscores.
* The catalog `name` field has been renamed to `certname`.

The payload is expected to be a Puppet catalog, as a JSON object, conforming
exactly to the [catalog wire format v6][catalogv6]. Extra or missing fields
are an error.

### "replace facts", version 4

* Similar to version 6 of replace catalog, previously dashed fields are now
  underscore-separated.
* The `name` field has been renamed to `certname`, for consistency.

See [fact wire format v4][factsv4] for more information on the payload of
this command.

### "store report", version 7

* The nullable `catalog_uuid`, `code_id`, and `cached_catalog_status`
  properties have been added.

The payload is expected to be a report, containing events that occurred on
Puppet resources. It is structured as a JSON object, conforming to the
[report wire format v7][reportv7].

### "store report", version 6

The version 6 store report command differs from previous versions by changing
from a `resource_events` property to a `resources` property. The
`resource_events` property was a merged version of `resources` and their
associated `events`.
This new version moves the command to use a similar format to a raw Puppet
report, with a list of `resources`, each with an `events` property containing
a list of the resource's events.

The payload is expected to be a report, containing events that occurred on
Puppet resources. It is structured as a JSON object, conforming to the
[report wire format v6][reportv6].

### "store report", version 5

The version 5 store report command differs from version 4 in the addition of a
"noop" flag, which is a Boolean indicating whether the report was produced by
a puppet run with `--noop`, as well as in the conversion of dash-separated
fields to underscored.

The payload is expected to be a report, containing events that occurred on
Puppet resources. It is structured as a JSON object, conforming to the
[report wire format v5][reportv5].

## Examples using `curl`

To post a `replace facts` command you can use the following curl command:

    curl -X POST \
      -H 'Content-Type:application/json' \
      -H 'Accept:application/json' \
      -d '{"certname":"test1","environment":"DEV","values":{"myfact":"myvalue"},"producer_timestamp":"2015-01-01","producer":"master1"}' \
      "http://localhost:8080/pdb/cmd/v1?command=replace_facts&version=5&certname=test1"

or equivalently (with the deprecated mechanism):

    curl -X POST \
      -H "Accept: application/json" \
      -H "Content-Type: application/json" \
      -d '{"command":"replace facts","version":5,"payload":{"certname":"test1","environment":"DEV","values":{"myfact":"myvalue"},"producer_timestamp":"2015-01-01","producer":"master1"}}' \
      http://localhost:8080/pdb/cmd/v1

An example of `deactivate node`:

    curl -X POST \
      -H 'Content-Type:application/json' \
      -H 'Accept:application/json' \
      -d '{"certname":"test1","producer_timestamp":"2015-01-01"}' \
      "http://localhost:8080/pdb/cmd/v1?certname=test1&command=deactivate_node&version=3"

or equivalently:

    curl -X POST \
      -H "Accept: application/json" \
      -H "Content-Type: application/json" \
      -d '{"command":"deactivate node","version":3,"payload":{"certname":"test1","producer_timestamp":"2015-01-01"}}' \
      http://localhost:8080/pdb/cmd/v1

<!-- puppetdb-6.2.0/documentation/api/ext/v1/managed-packages.markdown -->

---
title: "PE PuppetDB» Managed Packages endpoints"
layout: default
canonical: "/puppetdb/latest/api/query/v4/managed-packages.html"
---

[curl]: ../curl.html#using-curl-from-localhost-non-sslhttp
[paging]: ./paging.html

> **PE-only**: The Managed Packages endpoints are only available for Puppet
> Enterprise.

## `managed-packages`

Returns all installed packages, along with the certname of the nodes they are
installed on. For packages managed by Puppet, it also provides the resource
hash and manifest file.

### Query fields

* `certname` (string): The certname of the node the package data was collected
  from.
* `package_name` (string): The name of the package. (e.g. `emacs24`)
* `version` (string): The version of the package, in the format used by the
  package provider. (e.g. `24.5+1-6ubuntu1`)
* `provider` (string): The name of the provider which the package data came
  from; typically the name of the packaging system. (e.g. `apt`)
* `resource` (string): a SHA-1 hash of the managed resource's type, title, and
  parameters, for identification.
* `file` (string): the manifest file in which the managed resource was
  declared.
* `line` (number): the line of the manifest on which the managed resource was
  declared.
* `managed_version` (string): The version of the package that Puppet is trying
  to maintain.

### Response format

The response is a JSON array of hashes, where each hash has the form:

    {"certname": <string>,
     "package_name": <string>,
     "version": <string>,
     "provider": <string>,
     "resource": <string>,
     "file": <string>,
     "line": <number>,
     "managed_version": <string>}

The array is unsorted by default.

### Example

[You can use `curl`][curl] to query information about managed packages:

    curl -G http://localhost:8080/pdb/query/v4 --data-urlencode 'query=["from", "managed-packages", ["~", "package_name", "ssl"]]'

## `extended-managed-packages`

Returns the same data as `managed-packages`, but adds `environment`, `os`, and
`os_release` fields.

### Query fields

Same as `managed-packages`, plus the following:

* `environment` (string): The environment of the node.
* `os` (string): The operating system of the node.
* `os_release` (string): The operating system release of the node.

### Response format

The response is a JSON array of hashes, where each hash has the form:

    {"certname": <string>,
     "package_name": <string>,
     "version": <string>,
     "provider": <string>,
     "resource": <string>,
     "file": <string>,
     "line": <number>,
     "managed_version": <string>,
     "environment": <string>,
     "os": <string>,
     "os_release": <string>}

The array is unsorted by default.

### Example

[You can use `curl`][curl] to query information about managed packages:

    curl -G http://localhost:8080/pdb/query/v4 --data-urlencode 'query=["from", "extended-managed-packages", ["~", "package_name", "ssl"]]'

## Paging

This query endpoint supports paged results via the common PuppetDB paging URL
parameters. For more information, please see the documentation on
[paging][paging].
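As a sketch of combining the example query above with the common PuppetDB paging parameters (`limit`, `offset`, and `order_by`; their exact semantics are covered in the paging documentation):

```shell
# Page through managed-packages ten rows at a time, ordered by certname,
# against a PuppetDB listening on the default unencrypted localhost port
curl -G http://localhost:8080/pdb/query/v4 \
  --data-urlencode 'query=["from", "managed-packages"]' \
  --data-urlencode 'limit=10' \
  --data-urlencode 'offset=0' \
  --data-urlencode 'order_by=[{"field": "certname", "order": "asc"}]'
```

Incrementing `offset` by the `limit` on each request walks the full result set.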
<!-- puppetdb-6.2.0/documentation/api/ext/v1/resource-graphs.markdown -->

---
title: "PuppetDB» Extensions API (PE only)"
layout: default
canonical: "/puppetdb/latest/api/ext/v1/resource-graphs.html"
---

[paging]: ../../query/v4/paging.html
[query]: ../../query/v4/query.html
[subqueries]: ../../query/v4/ast.html#subquery-operators
[ast]: ../../query/v4/ast.html
[environments]: ../../query/v4/environments.html
[nodes]: ../../query/v4/nodes.html
[statuses]: {{puppet}}/format_report.html#puppettransactionreport

You can query resource-graphs by making an HTTP request to the
`/pdb/ext/v1/resource-graphs` endpoint.

## `/pdb/ext/v1/resource-graphs`

This will return a JSON array containing all the resource-graphs for each node
in your infrastructure. A resource-graph is a unified view of data from the
report and the catalog, so that all resource information from a puppet run
(parameters from catalogs, events from reports, etc.) is present in one place.

### URL Parameters

* `query`: Optional. A JSON array containing the query in prefix notation
  (`["<OPERATOR>", "<FIELD>", "<VALUE>"]`). See the sections below for the
  supported operators and fields. For general info about queries, see
  [the page on query structure.][query] If a query parameter is not provided,
  all results will be returned.

### Query Operators

See [the AST query language page][ast].

### Query Fields

* `certname` (string): the certname associated with the resource-graph.
* `environment` (string): the environment assigned to the node that submitted
  the report.
* `transaction_uuid` (string): string used to identify a puppet run.
* `catalog_uuid` (string): a string used to tie a catalog to its associated
  reports.
* `code_id` (string): a string used to tie a catalog to the Puppet code which
  generated the catalog.
* `producer_timestamp` (timestamp): is the time of catalog submission from the
  master to PuppetDB, according to the clock on the master.
  Timestamps are always [ISO-8601][8601] compatible date/time strings.
* `status` (string): the status associated with the report's node. Possible
  values for this field come from Puppet's report status, which can be found
  [here][statuses].
* `noop` (boolean): a flag indicating whether the report was produced by a
  noop run.

### Subquery Relationships

Here is a list of related entities that can be used to constrain the result
set using implicit subqueries. For more information, consult the documentation
for [subqueries][subqueries].

* [`nodes`][nodes]: Node for a catalog.
* [`environments`][environments]: Environment for a catalog.

### Response Format

Successful responses will be in `application/json`. The result will be a JSON
array with one entry per certname. Each entry is of the form:

    { "certname": <string>,
      "environment": <string>,
      "catalog_uuid": <string>,
      "transaction_uuid": <string>,
      "code_id": <string>,
      "producer_timestamp":