==> bdii-6.0.3/.github/CODE_OF_CONDUCT.md <==

# Code of Conduct

This code of conduct applies to the maintainers and contributors alike.

## Dealing with issues and support requests

_We wish to add a specific section on dealing with issues opened against the
repository here._

This repository exists in the context of the EGI Federation. While that scope
does not restrict the usage, it does inform the priority we assign to issues
and the order in which we deal with them. We welcome issues reported by the
public, and more specifically by the community of people using this repository.

The EGI team is small and cannot support all requests equally. While we
undertake to do everything in our power to respond to issues in a timely
manner, and to prioritise issues based on reasonable requests from submitters,
the maintainers expect that the prioritisation of issues as decided by them is
respected.

## Our Pledge

In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to make participation in our project and
our community a harassment-free experience for everyone, regardless of age,
body size, disability, ethnicity, gender identity and expression, level of
experience, nationality, personal appearance, race, religion, or sexual
identity and orientation.
## Our Standards

Examples of behaviour that contributes to creating a positive environment
include:

- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members

Examples of unacceptable behaviour by participants include:

- The use of sexualized language or imagery and unwelcome sexual attention or
  advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic
  address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable
behaviour and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behaviour.

Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviours that they deem inappropriate,
threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an
appointed representative at an online or offline event. Representation of a
project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behaviour may be
reported by contacting the EGI Foundation team at contact@egi.eu.
The team will review and investigate all complaints, and will respond in a way
that it deems appropriate to the circumstances. The team is obligated to
maintain confidentiality with regard to the reporter of an incident. Further
details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 1.4, available at
[http://contributor-covenant.org/version/1/4][version]

[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

==> bdii-6.0.3/.github/ISSUE_TEMPLATE.md <==

# Short Description of the issue

## Environment

- Operating System:
- Other related components versions:

## Steps to reproduce

## Logs, stacktrace, or other symptoms

```shell
output
```

# Summary of proposed changes

==> bdii-6.0.3/.github/PULL_REQUEST_TEMPLATE.md <==

# Summary

---

**Related issue:**

==> bdii-6.0.3/.github/dependabot.yml <==

---
version: 2
updates:
  # Maintain dependencies for GitHub Actions
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "daily"

==> bdii-6.0.3/.github/linters/.flake8 <==

[flake8]
# https://black.readthedocs.io/en/stable/guides/using_black_with_other_tools.html#flake8
extend-ignore = E203,W503
max-line-length = 88
==> bdii-6.0.3/.github/linters/.markdownlint.json <==

{
  "MD013": {
    "line_length": 120,
    "code_blocks": false,
    "tables": false
  },
  "MD014": false,
  "MD024": false,
  "MD026": {
    "punctuation": ".,:;!"
  }
}

==> bdii-6.0.3/.github/linters/.yaml-lint.yml <==

---
extends: "default"
rules:
  line-length:
    max: 180

==> bdii-6.0.3/.github/linters/mlc_config.json <==

{
  "ignorePatterns": [
    {
      "pattern": "^http://localhost"
    },
    {
      "pattern": "^https://example.com"
    }
  ]
}

==> bdii-6.0.3/.github/workflows/build.yml <==

---
name: Create packages and test installation

on:
  pull_request:

jobs:
  build-centos7:
    name: Build CentOS 7 RPMs
    runs-on: ubuntu-latest
    container: quay.io/centos/centos:7
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Install build requisites
        run: |
          yum install -y rpm-build rpmlint make rsync
      - name: build rpm
        run: |
          make clean rpm
          rpmlint --file .rpmlint.ini build/RPMS/noarch/*.rpm
      - name: Upload RPMs
        uses: actions/upload-artifact@v3
        with:
          name: rpms7
          path: |
            build/RPMS/noarch/bdii-*.el7.noarch.rpm

  build-almalinux8:
    name: Build AlmaLinux 8 RPMs
    runs-on: ubuntu-latest
    container: almalinux:8
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Install build requisites
        run: |
          yum install -y rpm-build rpmlint make rsync
      - name: build rpm
        run: |
          make clean rpm
          rpmlint --file .rpmlint.ini build/RPMS/noarch/*.rpm
      - name: Upload RPMs
        uses: actions/upload-artifact@v3
        with:
          name: rpms8
          path: |
            build/RPMS/noarch/bdii-*.el8.noarch.rpm

  build-almalinux9:
    name: Build AlmaLinux 9 RPMs
    runs-on: ubuntu-latest
    container: almalinux:9
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Install build requisites
        run: |
          yum install -y rpm-build rpmlint make rsync systemd-rpm-macros
      - name: build rpm
        run: |
          make clean rpm
          rpmlint --file .rpmlint.ini build/RPMS/noarch/*.rpm
      - name: Upload RPMs
        uses: actions/upload-artifact@v3
        with:
          name: rpms9
          path: |
            build/RPMS/noarch/bdii-*.el9.noarch.rpm

  # XXX Dependency from EPEL: glue-schema
  install-centos7:
    name: Install CentOS 7 RPMs
    needs: build-centos7
    runs-on: ubuntu-latest
    container: quay.io/centos/centos:7
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: rpms7
      - name: Install generated RPMs
        run: |
          yum install -y epel-release
          yum localinstall -y bdii-*.el7.noarch.rpm

  # XXX Dependency from EPEL: glue-schema
  install-almalinux8:
    name: Install AlmaLinux 8 RPMs
    needs: build-almalinux8
    runs-on: ubuntu-latest
    container: almalinux:8
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: rpms8
      - name: Install generated RPMs
        run: |
          yum install -y epel-release
          dnf config-manager --set-enabled powertools
          yum localinstall -y bdii-*.el8.noarch.rpm

  # XXX Dependencies from EPEL: glue-schema, openldap-servers
  install-almalinux9:
    name: Install AlmaLinux 9 RPMs
    needs: build-almalinux9
    runs-on: ubuntu-latest
    container: almalinux:9
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: rpms9
      - name: Install generated RPMs
        run: |
          yum install -y epel-release
          yum localinstall -y bdii-*.el9.noarch.rpm

==> bdii-6.0.3/.github/workflows/check-links.yml <==

---
name: Check links

on:
  pull_request:

jobs:
  markdown-link-check:
    name: Check links using markdown-link-check
    runs-on: ubuntu-latest
    steps:
      # Checks out a copy of your repository on the ubuntu-latest machine
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          # Make sure the actual branch is checked out when running on PR
          ref: ${{ github.event.pull_request.head.sha }}
          # Full git history needed to get proper list of changed files
          fetch-depth: 0
      - name: Check links on new changes
        uses: gaurav-nelson/github-action-markdown-link-check@v1
        with:
          config-file: ".github/linters/mlc_config.json"
          check-modified-files-only: "yes"
          use-quiet-mode: "yes"
          use-verbose-mode: "yes"
          base-branch: "main"

==> bdii-6.0.3/.github/workflows/lint.yml <==

---
name: Lint

on:
  pull_request:

jobs:
  super-lint:
    name: Lint with Super-Linter
    runs-on: ubuntu-latest
    steps:
      # Checks out a copy of your repository on the ubuntu-latest machine
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          # Make sure the actual branch is checked out when running on PR
          ref: ${{ github.event.pull_request.head.sha }}
          # Full git history needed to get proper list of changed files
          fetch-depth: 0
      # Runs the Super-Linter action
      - name: Run Super-Linter on new changes
        uses: docker://ghcr.io/github/super-linter:slim-v4
        env:
          DEFAULT_BRANCH: main
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          MARKDOWN_CONFIG_FILE: .markdownlint.json
          # Only check new or edited files
          VALIDATE_ALL_CODEBASE: false
          # Fail on errors
          DISABLE_ERRORS: false
          # shellcheck conf
          SHELLCHECK_OPTS: -e SC1090,SC2046,SC2164,SC2166,SC1091,SC2086,SC2001,SC2219,SC2181

==> bdii-6.0.3/.github/workflows/release.yml <==

---
# When a tag is created
# - create a new release from the tag
# - build and attach packages to the release
name: Create packages and release

on:
  push:
    tags:
      - "v*"

jobs:
  build-centos7:
    name: Build CentOS 7 RPMs
    runs-on: ubuntu-latest
    container: quay.io/centos/centos:7
    steps:
      # XXX: Checkout v4 does not work on RHEL 7
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: install build requisites
        run: |
          yum install -y rpm-build make rsync
      - name: build rpm
        run: |
          make clean rpm
      - name: Upload RPMs
        uses: actions/upload-artifact@v3
        with:
          name: rpms7
          path: |
            build/RPMS/noarch/bdii-*.el7.noarch.rpm
            build/SRPMS/bdii-*.el7.src.rpm

  build-almalinux8:
    name: Build AlmaLinux 8 RPMs
    runs-on: ubuntu-latest
    container: almalinux:8
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Install build requisites
        run: |
          yum install -y rpm-build make rsync
      - name: build rpm
        run: |
          make clean rpm
      - name: Upload RPMs
        uses: actions/upload-artifact@v3
        with:
          name: rpms8
          path: |
            build/RPMS/noarch/bdii-*.el8.noarch.rpm
            build/SRPMS/bdii-*.el8.src.rpm

  build-almalinux9:
    name: Build AlmaLinux 9 RPMs
    runs-on: ubuntu-latest
    container: almalinux:9
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Install build requisites
        run: |
          yum install -y rpm-build make rsync systemd-rpm-macros
      - name: build rpm
        run: |
          make clean rpm
      - name: Upload RPMs
        uses: actions/upload-artifact@v3
        with:
          name: rpms9
          path: |
            build/RPMS/noarch/bdii-*.el9.noarch.rpm
            build/SRPMS/bdii-*.el9.src.rpm

  release7:
    name: Upload CentOS 7 release artefacts
    permissions:
      contents: write  # to upload release asset (softprops/action-gh-release)
    needs: build-centos7
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: rpms7
      - name: Find package name
        id: package_name_centos7
        run: |
          rpm_path=$(find . -name 'bdii-*.el7.noarch.rpm')
          src_path=$(find . -name 'bdii-*.el7.src.rpm')
          echo "rpm_path=${rpm_path}" >> "$GITHUB_OUTPUT"
          echo "src_path=${src_path}" >> "$GITHUB_OUTPUT"
      - name: Attach CentOS 7 RPMs to the release
        uses: softprops/action-gh-release@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          fail_on_unmatched_files: true
          files: |
            ${{ steps.package_name_centos7.outputs.rpm_path }}
            ${{ steps.package_name_centos7.outputs.src_path }}

  release8:
    name: Upload AlmaLinux 8 release artefacts
    permissions:
      contents: write  # to upload release asset (softprops/action-gh-release)
    needs: build-almalinux8
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: rpms8
      - name: Find package name
        id: package_name_almalinux8
        run: |
          rpm_path=$(find . -name 'bdii-*.el8.noarch.rpm')
          src_path=$(find . -name 'bdii-*.el8.src.rpm')
          echo "rpm_path=${rpm_path}" >> "$GITHUB_OUTPUT"
          echo "src_path=${src_path}" >> "$GITHUB_OUTPUT"
      - name: Attach AlmaLinux 8 RPMs to the release
        uses: softprops/action-gh-release@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          fail_on_unmatched_files: true
          files: |
            ${{ steps.package_name_almalinux8.outputs.rpm_path }}
            ${{ steps.package_name_almalinux8.outputs.src_path }}

  release9:
    name: Upload AlmaLinux 9 release artefacts
    permissions:
      contents: write  # to upload release asset (softprops/action-gh-release)
    needs: build-almalinux9
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: rpms9
      - name: Find package name
        id: package_name_almalinux9
        run: |
          rpm_path=$(find . -name 'bdii-*.el9.noarch.rpm')
          src_path=$(find . -name 'bdii-*.el9.src.rpm')
          echo "rpm_path=${rpm_path}" >> "$GITHUB_OUTPUT"
          echo "src_path=${src_path}" >> "$GITHUB_OUTPUT"
      - name: Attach AlmaLinux 9 RPMs to the release
        uses: softprops/action-gh-release@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          fail_on_unmatched_files: true
          files: |
            ${{ steps.package_name_almalinux9.outputs.rpm_path }}
            ${{ steps.package_name_almalinux9.outputs.src_path }}

==> bdii-6.0.3/.gitignore <==

build
docs/_build

==> bdii-6.0.3/.rpmlint.ini <==

from Config import *

# BDII config files are protected as they include a password
addFilter(".*non-readable.*")
# Check broken for this specific init script
addFilter(".*subsys-not-used.*")

==> bdii-6.0.3/0001-Use-mdb-slapd-backend.patch <==

From a3312f93c372f9a8dd420fb991d04383531faae6 Mon Sep 17 00:00:00 2001
From: Mattias Ellert
Date: Sun, 4 Dec 2022 08:52:00 +0100
Subject: [PATCH] Use mdb slapd backend

The bdb and hdb backends were removed from slapd in openldap 2.5
---
 etc/bdii-slapd.conf     | 9 +++------
 etc/bdii-top-slapd.conf | 9 +++------
 2 files changed, 6 insertions(+), 12 deletions(-)

diff --git a/etc/bdii-slapd.conf b/etc/bdii-slapd.conf
index 841dbf3..984a111 100644
--- a/etc/bdii-slapd.conf
+++ b/etc/bdii-slapd.conf
@@ -25,9 +25,8 @@ moduleload back_relay
 # GLUE 1.3 database definitions
 #######################################################################
-database hdb
+database mdb
 suffix "o=grid"
-cachesize 30000
 checkpoint 1024 0
 dbnosync
 rootdn "o=grid"
@@ -78,9 +77,8 @@ suffixmassage "GLUE2GroupID=resource,GLUE2DomainID=*,GLUE2GroupID=grid,o=glue"
 # GLUE 2.0 database definitions
 #######################################################################
-database hdb
+database mdb
 suffix "o=glue"
-cachesize 30000
 checkpoint 1024 0
 dbnosync
 rootdn "o=glue"
@@ -114,9 +112,8 @@ index objectClass eq,pres
 #######################################################################
 # Stats database definitions
 #######################################################################
-database hdb
+database mdb
 suffix "o=infosys"
-cachesize 10
 checkpoint 1024 0
 dbnosync
 rootdn "o=infosys"
diff --git a/etc/bdii-top-slapd.conf b/etc/bdii-top-slapd.conf
index c4113bb..df295bd 100644
--- a/etc/bdii-top-slapd.conf
+++ b/etc/bdii-top-slapd.conf
@@ -26,8 +26,7 @@ moduleload back_relay
 # GLUE 1.3 database definitions
 #######################################################################
-database hdb
-cachesize 300000
+database mdb
 dbnosync
 suffix "o=shadow"
 checkpoint 1024 0
@@ -87,8 +86,7 @@ suffixmassage "GLUE2GroupID=resource,GLUE2DomainID=*,GLUE2GroupID=grid,o=glue"
 # GLUE 2.0 database definitions
 #######################################################################
-database hdb
-cachesize 300000
+database mdb
 dbnosync
 suffix "o=glue"
 checkpoint 1024 0
@@ -123,8 +121,7 @@ index objectClass eq,pres
 #######################################################################
 # Stats database definitions
 #######################################################################
-database hdb
-cachesize 10
+database mdb
 dbnosync
 suffix "o=infosys"
 checkpoint 1024 0
--
2.38.1

==> bdii-6.0.3/AUTHORS.md <==

# Authors

## Maintainers

- Andrea Manzi
- Baptiste Grenier
- Enol Fernandez
- Mattias Ellert

## Original Authors

David Groep

## Contributors

- Maria Alandes Pradillo
- Maarten Litmaath
- Felix Ehm
- Andrew Elwell
- Daniel Johansson

[GitHub contributors](https://github.com/EGI-Federation/bdii/graphs/contributors).

==> bdii-6.0.3/CHANGELOG <==

# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to
[Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

## [6.0.3] - 2024-06-28

### Fixed

- Fix tmpfs permissions. (#62) (samuraiii)

## [6.0.2] - 2024-06-14

### Changed

- Fix for IPv6 support. (#51) (Mattias Ellert)
- Replace obsolete -h and -p parameters in ldap CLI tools. (#49) (Baptiste Grenier)
- Fix deprecation warning due to log.warn. (#58) (Daniela Bauer)

## [6.0.1] - 2023-03-28

### Changed

- Build and release using AlmaLinux 8 and 9. (#45) (Baptiste Grenier)
- Align Makefile with other repositories.
  (#45) (Baptiste Grenier)
- Allow long yaml files for GitHub Actions (#45) (Baptiste Grenier)

## [6.0.0]

- Drop debian-specific files and reference official packages (#44) (Baptiste Grenier)
- Migrate to MDB backend for OpenLDAP 2.5 on recent OS (#42) (Mattias Ellert)
- Fix runtime errors while iterating dictionary in python 3 (#39) (Andrea Manzi)
- Migrate to Python 3 (#25) (Laurence Field, Mattias Ellert)
- Quality control using GitHub actions, update community files (#26) (Baptiste Grenier)

## [5.2.26]

- Truncate LDIF password file before updating (#14) (Petr Vokac)
- Preserve base64 entries (#21) (Enol Fernández, Andrea Manzi)
- Allow BDII_HOSTNAME configuration and default to localhost (#22) (Andrea Manzi)

==> bdii-6.0.3/CODEOWNERS <==

# https://help.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners
# https://github.blog/2017-07-06-introducing-code-owners/

# Assign code owners that will automatically get asked to review Pull Requests
# The last matching pattern takes the most precedence.

# These owners will be the default owners for everything in the repo.
# Unless a later match takes precedence, they will be requested for
# review when someone opens a pull request.
* @EGI-Federation/bdii

==> bdii-6.0.3/CONTRIBUTING.md <==

# Contributing

Thank you for taking the time to contribute to this project. The maintainers
greatly appreciate the interest of contributors and rely on continued
engagement with the community to ensure that this project remains useful. We
would like to take steps to put contributors in the best possible position to
have their contributions accepted. Please take a few moments to read this short
guide on how to contribute; bear in mind that contributions regarding how best
to contribute are also welcome.
## Feedback and Questions

If you wish to discuss anything related to the project, please open an issue or
start a topic on the [EGI Community Forum](https://community.egi.eu). The
maintainers will sometimes move issues off of GitHub to the community forum if
it is thought that a longer, more open-ended discussion would be beneficial,
including a wider community scope.

## Contribution Process

Before proposing a contribution via pull request, ideally there is an open
issue describing the need for your contribution (refer to this issue number
when you submit the pull request). We have a three-step process for
contributions.

1. Fork the project if you have not, and commit changes to a git branch
1. Create a GitHub Pull Request for your change, following the instructions in
   the pull request template.
1. Perform a [Code Review](#code-review-process) with the maintainers on the
   pull request.

### Pull Request Requirements

1. **Explain your contribution in plain language.** To assist the maintainers
   in understanding and appreciating your pull request, please use the template
   to explain _why_ you are making this contribution, rather than just _what_
   the contribution entails.

### Code Review Process

Code review takes place in GitHub pull requests. See
[this article](https://help.github.com/articles/about-pull-requests/) if you're
not familiar with GitHub Pull Requests.

Once you open a pull request, maintainers will review your code using the
built-in code review process in GitHub PRs. The process at this point is as
follows:

1. A maintainer will review your code and merge it if no changes are necessary.
   Your change will be merged into the repository's `main` branch.
1. If a maintainer has feedback or questions on your changes then they will set
   `request changes` in the review and provide an explanation.

## Using git

For collaboration purposes, it is best if you create a GitHub account and fork
the repository to your own account.
Once you do this you will be able to push your changes to your GitHub
repository for others to see and use, and it will be easier to send pull
requests.

### Branches and Commits

You should submit your patch as a git branch named after the GitHub issue, such
as `#3`. This is called a _topic branch_ and allows users to associate a branch
of code with the issue.

It is a best practice to have your commit message have a _summary line_ that
includes the issue number, followed by an empty line and then a brief
description of the commit. This also helps other contributors understand the
purpose of changes to the code.

```text
#3 - platform_family and style

* use platform_family for platform checking
* update notifies syntax to "resource_type[resource_name]" instead of
  resources() lookup
* GH-692 - delete config files dropped off by packages in conf.d
* dropped debian 4 support because all other platforms have the same
  values, and it is older than "old stable" debian release
```

## Release cycle

Main branch is always available. Tagged versions may be created as needed
following [Semantic Versioning](https://semver.org/) as far as applicable.

## Community

EGI benefits from a strong community of developers and system administrators,
and vice-versa. If you have any questions or if you would like to get involved
in the wider EGI community you can check out:

- [EGI Community Forum](https://community.egi.eu/)
- [EGI website](https://www.egi.eu)

**This file has been modified from the Chef Cookbook Contributing Guide**.

==> bdii-6.0.3/COPYRIGHT <==

This project is licensed under Apache 2.0.

Copyrights in this project are retained by their contributors. No copyright
assignment is required to contribute to this project.
==> bdii-6.0.3/LICENSE.txt <==

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship.
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

   Copyright 2018 The authors

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

----- bdii-6.0.3/Makefile -----

NAME=$(shell grep Name: *.spec | sed 's/^[^:]*:[^a-zA-Z]*//')
VERSION=$(shell grep Version: *.spec | sed 's/^[^:]*:[^0-9]*//')
RELEASE=$(shell grep Release: *.spec | cut -d"%" -f1 | sed 's/^[^:]*:[^0-9]*//')
build=$(shell pwd)/build
dist=$(shell rpm --eval '%dist')
init_dir=$(shell rpm --eval '%{_initrddir}' || echo '/etc/init.d/')

default:
	@echo "Nothing to do"

install:
	@echo installing ...
	@mkdir -p $(prefix)/usr/sbin/
	@mkdir -p $(prefix)/run/bdii/
	@mkdir -p $(prefix)/var/lib/bdii/gip/ldif/
	@mkdir -p $(prefix)/var/lib/bdii/gip/provider/
	@mkdir -p $(prefix)/var/lib/bdii/gip/plugin/
	@mkdir -p $(prefix)/etc/bdii/
	@mkdir -p $(prefix)/etc/sysconfig/
	@mkdir -p $(prefix)$(init_dir)/
	@mkdir -p $(prefix)/etc/logrotate.d/
	@mkdir -p $(prefix)/var/log/bdii/
	@mkdir -p $(prefix)/usr/share/doc/bdii/
	@mkdir -p $(prefix)/usr/share/man/man1
	@install -m 0755 etc/init.d/bdii $(prefix)/$(init_dir)/
	@install -m 0644 etc/sysconfig/bdii $(prefix)/etc/sysconfig/
	@install -m 0755 bin/bdii-update $(prefix)/usr/sbin/
	@install -m 0644 etc/bdii.conf $(prefix)/etc/bdii/
	@install -m 0644 etc/BDII.schema $(prefix)/etc/bdii/
	@install -m 0640 etc/bdii-slapd.conf $(prefix)/etc/bdii/
	@install -m 0640 etc/bdii-top-slapd.conf $(prefix)/etc/bdii/
	@install -m 0644 etc/DB_CONFIG $(prefix)/etc/bdii/
	@install -m 0644 etc/DB_CONFIG_top $(prefix)/etc/bdii/
	@install -m 0644 etc/default.ldif $(prefix)/var/lib/bdii/gip/ldif/
	@install -m 0644 etc/logrotate.d/bdii $(prefix)/etc/logrotate.d
	@install -m 0644 man/bdii-update.1 $(prefix)/usr/share/man/man1/
	@install -m 0644 README.md $(prefix)/usr/share/doc/bdii/
	@install -m 0644 AUTHORS.md $(prefix)/usr/share/doc/bdii/
	@install -m 0644 COPYRIGHT $(prefix)/usr/share/doc/bdii/
	@install -m 0644 LICENSE.txt $(prefix)/usr/share/doc/bdii/

dist:
	@mkdir -p $(build)/$(NAME)-$(VERSION)/
	rsync -HaS --exclude ".git" --exclude "$(build)" * $(build)/$(NAME)-$(VERSION)/
	cd $(build); tar --gzip -cf $(NAME)-$(VERSION).tar.gz $(NAME)-$(VERSION)/; cd -

sources: dist
	cp $(build)/$(NAME)-$(VERSION).tar.gz .
prepare: dist
	@mkdir -p $(build)/RPMS/noarch
	@mkdir -p $(build)/SRPMS/
	@mkdir -p $(build)/SPECS/
	@mkdir -p $(build)/SOURCES/
	@mkdir -p $(build)/BUILD/
	cp $(build)/$(NAME)-$(VERSION).tar.gz $(build)/SOURCES
	cp $(NAME).spec $(build)/SPECS

srpm: prepare
	rpmbuild -bs --define="dist $(dist)" --define="_topdir $(build)" $(build)/SPECS/$(NAME).spec

rpm: srpm
	rpmbuild --rebuild --define="dist $(dist)" --define="_topdir $(build)" $(build)/SRPMS/$(NAME)-$(VERSION)-$(RELEASE)$(dist).src.rpm

clean:
	rm -f *~ $(NAME)-$(VERSION).tar.gz
	rm -rf $(build)

.PHONY: dist srpm rpm sources clean

----- bdii-6.0.3/README.md -----

# BDII

Documentation: [bdii.readthedocs.io](http://bdii.readthedocs.io).

## Function

The Berkeley Database Information Index (BDII) consists of two or more standard LDAP databases that are populated by an update process. Port forwarding is used to enable one or more databases to serve data while one database is being refreshed. The databases are refreshed cyclically. Any incoming connection is forwarded to the most recently updated database, while old connections are allowed to linger until it is the turn of their database to be refreshed and restarted.

The update process obtains LDIF either by running an `ldapsearch` against LDAP URLs or by running a local script (given by a URL with the "file" protocol) that generates LDIF. The LDIF is then inserted into the LDAP database. Options exist to update the list of LDAP URLs from a web page, and to use an LDIF file from a web page to modify the data before it is inserted into the database.

## Cache use

Whenever a remote server is contacted and the `ldapsearch` command times out, the update process tries to find an (old) cached entry in the `/var/cache/` directory. If no entry is found, a message is printed to the logfile.

_Attention!_ If the remote host cannot be contacted due to a connection problem, no cached entry is used and no message is printed to the logfile.
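A provider is simply an executable placed in the provider directory (`/var/lib/bdii/gip/provider/` in the default layout above) that writes LDIF to stdout; `bdii-update` merges its output into the next database cycle. A minimal sketch of such a provider — the DN and attribute names below are invented for illustration, not taken from any real schema:

```python
#!/usr/bin/env python3
# Hypothetical BDII information provider: any executable that prints LDIF
# on stdout can act as a provider. The DN and attributes here are
# illustrative only.


def make_ldif():
    entry = {
        "dn": "GLUE2ServiceID=example-service,o=glue",
        "objectClass": "GLUE2Service",
        "GLUE2ServiceID": "example-service",
    }
    # LDIF: the "dn" line first, then one "attribute: value" line per
    # attribute, and a blank line terminating the entry.
    lines = ["dn: %s" % entry.pop("dn")]
    lines += ["%s: %s" % (k, v) for k, v in entry.items()]
    return "\n".join(lines) + "\n\n"


if __name__ == "__main__":
    print(make_ldif(), end="")
```

Marking the script executable is all that is needed for `bdii-update` to pick it up on the next cycle.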
## Compressed Content Exchange Mechanism (CCEM)

The Compressed Content Exchange Mechanism is intended to speed up the gathering of information when running an `ldapsearch` against another BDII instance. The update process first tries to find the entry containing the compressed content of the queried instance, and then adds that information to its upcoming database. If the CCEM fails, the normal procedure described in the previous paragraph is executed.

The CCEM function is enabled by default in version `>= 3.9.1`. To disable it, add the following to your `bdii.conf`:

```shell
BDII_CCEM=no
```

## BDII Status Information Mechanism (BSIM)

The BDII Status Information Mechanism is intended to allow better monitoring, spotting emerging problems, and preventing the resulting errors. It adds status information about the BDII instance under the `o=infosys` root, containing metrics such as the number of entries added in the last cycle, the time taken to do so, etc. The description of these metrics can be found in the `etc/BDII.schema` file.

## Installing from source

```shell
$ make install
```

- Build dependencies: None
- Runtime dependencies: openldap, python3

## Installing from packages

### On RHEL-based systems

For RHEL-based systems, it is possible to install packages from two sources:

- [EGI UMD packages](https://go.egi.eu/umd), built from this repository and tested to work with the other components of the Unified Middleware Distribution.
- [Fedora and EPEL packages](https://packages.fedoraproject.org/search?query=bdii), maintained by Mattias Ellert.

### On Debian

[Official Debian packages](https://packages.debian.org/search?keywords=bdii) are maintained by Mattias Ellert.
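Settings such as `BDII_CCEM=no` are read by `bdii-update` as plain `KEY=VALUE` lines, with `#` starting a comment. A minimal sketch of that parsing rule (the helper name is mine, but the logic mirrors `get_config()` in `bin/bdii-update`):

```python
# Sketch of how bdii-update reads bdii.conf: "#" starts a comment and the
# first "=" splits key from value; whitespace around both is stripped.
def parse_bdii_conf(lines):
    config = {}
    for line in lines:
        comment = line.find("#")
        if comment > -1:
            line = line[:comment]      # drop trailing comments
        eq = line.find("=")
        if eq > -1:
            config[line[:eq].strip()] = line[eq + 1:].strip()
    return config


example = [
    "# update behaviour",
    "BDII_CCEM=no",
    "BDII_LOG_LEVEL=ERROR  # default level",
]
print(parse_bdii_conf(example))
```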
## Building packages

### Building a RPM

The required build dependencies are:

- rpm-build
- make
- rsync
- systemd-rpm-macros, for RHEL >= 8

```shell
# Checkout tag to be packaged
$ git clone https://github.com/EGI-Federation/bdii.git
$ cd bdii
$ git checkout X.X.X
# Building in a container
$ docker run --rm -v $(pwd):/source -it quay.io/centos/centos:7
[root@2fd110169c55 /]# yum install -y rpm-build make rsync
[root@2fd110169c55 /]# cd /source && make rpm
```

The RPM will be available in the `build/RPMS` directory.

### Building a deb

Debian build files maintained by Mattias Ellert are available in the [Debian Salsa GitLab](https://salsa.debian.org/ellert/bdii/).

## Preparing a release

- Prepare a changelog from the last version, including contributors' names
- Prepare a PR with:
  - Updated version and changelog in `CHANGELOG` and `bdii.spec`
  - Updated `codemeta.json`, if needed
- Once the PR has been merged, tag and release a new version in GitHub
- Packages will be built using GitHub Actions and attached to the release page

## History

This work started under the EGEE project, and was hosted and maintained for a long time by CERN. It is now hosted here on GitHub, maintained by the BDII community with the support of members of the EGI Federation.
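The `bdii-update` script shipped in `bin/` parses the LDIF produced by providers into per-entry dictionaries, decoding RFC 2849 base64 values marked by `::`. A standalone sketch of that entry parsing (the function name is mine; the logic mirrors `convert_entry()` in `bin/bdii-update`):

```python
import base64


# Standalone sketch of bdii-update's LDIF entry parsing: attribute names
# are lower-cased, values are collected into de-duplicated lists, and
# "attr:: <base64>" values (RFC 2849) are decoded.
def parse_ldif_entry(entry_string):
    entry = {}
    for line in entry_string.split("\n"):
        index = line.find(":")
        if index > -1:
            attribute = line[:index].lower()
            value = line[index + 1:].strip()
            if value and line[index + 1] == ":":
                # "::" marks a base64-encoded value
                value = base64.b64decode(line[index + 2:].strip()).decode()
            if attribute in entry:
                if value not in entry[attribute]:
                    entry[attribute].append(value)
            else:
                entry[attribute] = [value]
    return entry


encoded = base64.b64encode("café".encode()).decode()
entry = parse_ldif_entry("dn: o=grid\nDescription:: %s" % encoded)
```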
----- bdii-6.0.3/bdii.spec -----

%if %{?fedora}%{!?fedora:0} >= 25 || %{?rhel}%{!?rhel:0} >= 8
%global use_systemd 1
%else
%global use_systemd 0
%endif
%if %{?fedora}%{!?fedora:0} >= 36 || %{?rhel}%{!?rhel:0} >= 9
%global use_mdb 1
%else
%global use_mdb 0
%endif

Name: bdii
Version: 6.0.3
Release: 1%{?dist}
Summary: The Berkeley Database Information Index (BDII)
License: ASL 2.0
URL: https://github.com/EGI-Federation/bdii
Source: %{name}-%{version}.tar.gz
BuildArch: noarch

BuildRequires: make
%if %{use_systemd}
BuildRequires: systemd-rpm-macros
%endif

Requires: openldap-clients
Requires: openldap-servers
Requires: glue-schema >= 2.0.0
Requires: python3
Requires: logrotate
Requires(post): /usr/bin/mkpasswd
%if %{use_systemd}
%{?systemd_requires}
%else
Requires(post): chkconfig
Requires(preun): chkconfig
Requires(preun): initscripts
Requires(postun): initscripts
%endif
%if %{?fedora}%{!?fedora:0} >= 23 || %{?rhel}%{!?rhel:0} >= 8
Requires(post): policycoreutils-python-utils
Requires(postun): policycoreutils-python-utils
%else
Requires(post): policycoreutils-python
Requires(postun): policycoreutils-python
%endif

%description
The Berkeley Database Information Index (BDII) consists of a standard
LDAP database which is updated by an external process. The update
process obtains LDIF from a number of sources and merges them. It then
compares this to the contents of the database and creates an LDIF file
of the differences. This is then used to update the database.
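The "%description" above mentions creating an LDIF file of the differences. The diff rule (implemented by `ldif_diff()` in `bin/bdii-update`) reduces to: attributes present in both entries but with different values become `replace`, new attributes become `add`, and attributes only in the old entry become `delete` in an LDIF `changetype: modify` record. A simplified sketch, taking entries as attribute-to-value-list dicts rather than raw LDIF (the function name is mine):

```python
# Simplified sketch of bdii-update's diff rule: compare two entries, given
# here as {attribute: [values]} dicts, and emit an LDIF
# "changetype: modify" record describing how to turn old into new.
def sketch_ldif_modify(dn, old_entry, new_entry):
    ldif = ["dn: %s" % dn, "changetype: modify"]
    for attribute, values in new_entry.items():
        if attribute == "dn":
            continue
        if attribute not in old_entry:
            ldif.append("add: %s" % attribute)        # new attribute
        elif values != old_entry[attribute]:
            ldif.append("replace: %s" % attribute)    # changed values
        else:
            continue                                  # unchanged
        ldif += ["%s: %s" % (attribute, v) for v in values]
        ldif.append("-")
    for attribute in old_entry:
        if attribute != "dn" and attribute not in new_entry:
            ldif.append("delete: %s" % attribute)     # removed attribute
            ldif.append("-")
    # No changes at all: return an empty string, as ldif_diff does.
    return "\n".join(ldif) + "\n\n" if len(ldif) > 2 else ""


demo = sketch_ldif_modify("o=grid", {"a": ["1"]}, {"a": ["1"], "b": ["2"]})
```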
%prep
%setup -q
%if %{use_mdb}
# Use mdb on recent systems
patch -p1 -f < 0001-Use-mdb-slapd-backend.patch
%endif

%build

%install
make install prefix=%{buildroot}
%if %{use_systemd}
rm %{buildroot}%{_initrddir}/%{name}
mkdir -p %{buildroot}%{_unitdir}
install -m 644 -p etc/systemd/bdii.service etc/systemd/bdii-slapd.service %{buildroot}%{_unitdir}
mkdir -p %{buildroot}%{_datadir}/%{name}
install -p etc/systemd/bdii-slapd-start %{buildroot}%{_datadir}/%{name}
%endif
rm -rf %{buildroot}%{_docdir}/%{name}

%if %{use_systemd}
%pre
# Remove old init config when systemd is used
/sbin/chkconfig --del %{name} >/dev/null 2>&1 || :
%endif

%post
sed "s/\(rootpw *\)secret/\1$(mkpasswd -s 0 | tr '/' 'x')/" \
    -i %{_sysconfdir}/%{name}/bdii-slapd.conf \
       %{_sysconfdir}/%{name}/bdii-top-slapd.conf
%if %{use_systemd}
%systemd_post %{name}.service
%else
/sbin/chkconfig --add %{name}
%endif
semanage port -a -t ldap_port_t -p tcp 2170 2>/dev/null || :
semanage fcontext -a -t slapd_db_t "%{_localstatedir}/lib/%{name}/db(/.*)?" 2>/dev/null || :
semanage fcontext -a -t slapd_var_run_t "%{_localstatedir}/run/%{name}/db(/.*)?" 2>/dev/null || :
# Remove selinux labels for old bdii var dir
semanage fcontext -d -t slapd_db_t "%{_localstatedir}/run/%{name}(/.*)?" 2>/dev/null || :

%preun
%if %{use_systemd}
%systemd_preun %{name}.service
%else
if [ $1 -eq 0 ]; then
    service %{name} stop > /dev/null 2>&1
    /sbin/chkconfig --del %{name}
fi
%endif

%postun
%if %{use_systemd}
%systemd_postun_with_restart %{name}.service
%else
if [ $1 -ge 1 ]; then
    service %{name} condrestart > /dev/null 2>&1
fi
%endif
if [ $1 -eq 0 ]; then
    semanage port -d -t ldap_port_t -p tcp 2170 2>/dev/null || :
    semanage fcontext -d -t slapd_db_t "%{_localstatedir}/lib/%{name}/db(/.*)?" 2>/dev/null || :
    semanage fcontext -d -t slapd_var_run_t "%{_localstatedir}/run/%{name}/db(/.*)?"
2>/dev/null || :
fi

%files
%attr(-,ldap,ldap) %{_localstatedir}/lib/%{name}
%attr(-,ldap,ldap) %{_localstatedir}/log/%{name}
%dir %{_sysconfdir}/%{name}
%config(noreplace) %{_sysconfdir}/%{name}/DB_CONFIG
%config(noreplace) %{_sysconfdir}/%{name}/DB_CONFIG_top
%config(noreplace) %{_sysconfdir}/%{name}/bdii.conf
%config(noreplace) %{_sysconfdir}/%{name}/BDII.schema
%attr(-,ldap,ldap) %config %{_sysconfdir}/%{name}/bdii-slapd.conf
%attr(-,ldap,ldap) %config %{_sysconfdir}/%{name}/bdii-top-slapd.conf
%config(noreplace) %{_sysconfdir}/sysconfig/%{name}
%config(noreplace) %{_sysconfdir}/logrotate.d/%{name}
%if %{use_systemd}
%{_unitdir}/bdii.service
%{_unitdir}/bdii-slapd.service
%dir %{_datadir}/%{name}
%{_datadir}/%{name}/bdii-slapd-start
%else
%{_initrddir}/%{name}
%endif
%{_sbindir}/bdii-update
%{_mandir}/man1/bdii-update.1*
%doc AUTHORS.md README.md
%license COPYRIGHT LICENSE.txt

%changelog
* Fri Jun 28 2024 Baptiste Grenier - 6.0.3-1
- Fix tmpfs permissions. (#62) (samuraiii)

* Fri Jun 14 2024 Baptiste Grenier - 6.0.2-1
- Fix for IPv6 support. (#51) (Mattias Ellert)
- Replace obsolete -h and -p parameters in ldap CLI tools. (#49) (Baptiste Grenier)
- Fix deprecation warning due to log.warn. (#58) (Daniela Bauer)

* Tue Mar 28 2023 Baptiste Grenier - 6.0.1-1
- Build and release using AlmaLinux 8 and 9. (#45) (Baptiste Grenier)
- Align Makefile with other repositories.
(#45) (Baptiste Grenier)
- Allow long yaml files for GitHub Actions (#45) (Baptiste Grenier)

* Thu Dec 15 2022 Baptiste Grenier - 6.0.0-1
- Migrate to MDB backend for OpenLDAP 2.5 on recent OS (#42) (Mattias Ellert)
- Fix runtime errors while iterating dictionary in python 3 (#39) (Andrea Manzi)
- Migrate to Python 3 (#25) (Laurence Field, Mattias Ellert)
- Quality control using GitHub actions, update community files (#26) (Baptiste Grenier)

* Wed Sep 23 2020 Baptiste Grenier - 5.2.26-1
- Truncate LDIF password file before updating (Petr Vokac)
- Preserve base64 entries (Enol Fernández, Andrea Manzi)
- Allow BDII_HOSTNAME configuration and default to localhost (Andrea Manzi)

* Tue Oct 2 2018 Baptiste Grenier - 5.2.25-1
- Import product card JSON in codemeta.json format (Bruce Becker)
- Lint, build, test install and attach packages to GitHub tags using Travis. (Baptiste Grenier)

* Mon Aug 27 2018 Baptiste Grenier - 5.2.24-1
- Fix #3: init script failing on stale PID (Paolo Andreetto)
- Update build, documentation and link to new GitHub repository (Baptiste Grenier)

* Wed Aug 27 2014 Maria Alandes - 5.2.23-1
- #GRIDINFO-55: Increase the number of simultaneous threads

* Mon Sep 9 2013 Maria Alandes - 5.2.22-1
- BUG #102503: Make /var/run/bdii configurable

* Fri Aug 2 2013 Maria Alandes - 5.2.21-1
- Add plugin modifications to LDIF modify instead of LDIF new for cached objects
- Do not clean glite-update-endpoints cache files
- Fixed wrong 'if' check in init.d script
- BUG #99298: Set status attributes of delayed delete entries to 'Unknown'
- BUG #102014: Clean caches after a BDII restart
- BUG #101709: Start bdii-update daemon with -l option
- BUG #102140: Start daemons from "/"
- BUG #101389: RAM size can be now configured
- BUG #101398: Defined the max log file size for the LDAP DB backend in top level BDIIs

* Fri May 31 2013 Maria Alandes - 5.2.20-1
- Changed URL in spec file to point to new Information System web pages
- Added missing dist in the rpm target of the
Makefile

* Fri May 31 2013 Maria Alandes - 5.2.19-1
- BUG #101090: added missing symlink to DB_CONFIG_top for GLUE2 DB backend

* Fri May 03 2013 Maria Alandes - 5.2.18-1
- BUG #101237: bdii-update: GLUE2 entries marked for deletion keep the correct case and can be deleted

* Tue Jan 15 2013 Maria Alandes - 5.2.17-1
- BUG #99622: Add dependency on openldap2.4-clients in SL5

* Thu Jan 10 2013 Maria Alandes - 5.2.16-1
- BUG #99622: Add dependency on openldap2.4-servers in SL5

* Wed Nov 28 2012 Maria Alandes - 5.2.15-1
- Fixes after testing: Load rwm and back_relay modules in the slapd configuration for site and resource BDII

* Tue Nov 20 2012 Maria Alandes - 5.2.14-1
- BUG #98931: /sbin/runuser instead of runuser
- BUG #98711: Optimise LDAP queries in GLUE 2.0
- BUG #98682: Delete delayed_delete.pkl when BDII is restarted
- BUG #97717: Relay database created to be able to define the GLUE2GroupName and services alias

* Wed Aug 15 2012 Laurence Field - 5.2.13-1
- Included Fedora patches upstream:
- BUG #97223: Changes needed for EPEL
- BUG #97217: Issues with lsb dependencies

* Fri Jul 20 2012 Maria Alandes - 5.2.12-1
- Fixed BDII_IPV6_SUPPORT after testing

* Wed Jul 18 2012 Maria Alandes - 5.2.11-1
- BUG 95122: Created SLAPD_DB_DIR directory with correct ownership if it doesn't exist
- BUG 95839: Added BDII_IPV6_SUPPORT

* Thu Mar 8 2012 Laurence Field - 5.2.10-1
- New upstream version that includes a new DB_CONFIG

* Wed Feb 8 2012 Laurence Field - 5.2.9-1
- Fixed /var/run packaging issue

* Wed Feb 8 2012 Laurence Field - 5.2.8-1
- Fixed a base64 encoding issue and added /var/run/bdii to the package

* Tue Feb 7 2012 Laurence Field - 5.2.7-1
- Performance improvements to reduce memory and disk usage

* Wed Jan 25 2012 Laurence Field - 5.2.6-1
- New upstream version that includes fedora patches and fix for EGI RT 3235

* Thu Jan 12 2012 Fedora Release Engineering - 5.2.5-2
- Rebuilt for https://fedoraproject.org/wiki/Fedora_17_Mass_Rebuild

* Sun Sep 4 2011 Mattias Ellert -
5.2.5-1
- New upstream version 5.2.5

* Tue Jul 26 2011 Mattias Ellert - 5.2.4-1
- New upstream version 5.2.4
- Drop patch accepted upstream: bdii-mdsvo.patch
- Move large files away from /var/run in order not to fill up /run partition

* Mon Jun 27 2011 Mattias Ellert - 5.2.3-2
- Revert upstream hack that breaks ARC infosys

* Mon Jun 13 2011 Mattias Ellert - 5.2.3-1
- New upstream version 5.2.3
- Drop patches accepted upstream: bdii-runuser.patch, bdii-context.patch, bdii-default.patch, bdii-shadowerr.patch, bdii-sysconfig.patch

* Mon Feb 07 2011 Fedora Release Engineering - 5.1.13-2
- Rebuilt for https://fedoraproject.org/wiki/Fedora_15_Mass_Rebuild

* Sat Jan 01 2011 Mattias Ellert - 5.1.13-1
- New upstream version 5.1.13
- Move restorecon from post scriptlet to startup script in order to support /var/run on tmpfs

* Thu Sep 23 2010 Mattias Ellert - 5.1.9-1
- New upstream version 5.1.9

* Thu Sep 02 2010 Mattias Ellert - 5.1.8-1
- New upstream version 5.1.8

* Fri Jun 18 2010 Mattias Ellert - 5.1.7-1
- New upstream version 5.1.7

* Sun May 23 2010 Mattias Ellert - 5.1.5-1
- New upstream release 5.1.5
- Get rid of lsb initscript dependency

* Mon Apr 05 2010 Mattias Ellert - 5.1.0-1
- New upstream version 5.1.0
- Add SELinux context management to scriptlets

* Thu Mar 25 2010 Mattias Ellert - 5.0.8-4.460
- Update (svn revision 460)
- Use proper anonymous svn checkout instead of svnweb generated tarball

* Fri Feb 26 2010 Mattias Ellert - 5.0.8-3.443
- Update (svn revision 443)

* Wed Feb 24 2010 Mattias Ellert - 5.0.8-2.436
- Update (svn revision 436)

* Mon Feb 08 2010 Mattias Ellert - 5.0.8-1.375
- Initial package (svn revision 375)

----- bdii-6.0.3/bin/bdii-update -----

#!/usr/bin/env python3
##############################################################################
# Copyright (c) Members of the EGEE
# Collaboration. 2004.
# See http://www.eu-egee.org/partners/ for details on the copyright
# holders.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##############################################################################

import base64
import getopt
import logging
import os
import pickle
import re
import signal
import sys
import tempfile
import time


def parse_options():
    config = {}
    try:
        opts, args = getopt.getopt(sys.argv[1:], "dc:", ["config"])
    except getopt.GetoptError:
        sys.stderr.write("Error: Invalid option specified.\n")
        print_usage()
        sys.exit(2)
    for o, a in opts:
        if o in ("-c", "--config"):
            config['BDII_CONFIG_FILE'] = a
        if o in ("-d", "--daemon"):
            config['BDII_DAEMON'] = True
    if 'BDII_CONFIG_FILE' not in config:
        sys.stderr.write("Error: Configuration file not specified.\n")
        print_usage()
        sys.exit(1)
    if not os.path.exists(config['BDII_CONFIG_FILE']):
        sys.stderr.write("Error: Configuration file %s does not exist.\n"
                         % config['BDII_CONFIG_FILE'])
        sys.exit(1)
    return config


def get_config(config):
    for line in open(config['BDII_CONFIG_FILE']).readlines():
        index = line.find("#")
        if index > -1:
            line = line[:index]
        index = line.find("=")
        if index > -1:
            key = line[:index].strip()
            value = line[index+1:].strip()
            config[key] = value
    if 'SLAPD_CONF' in os.environ:
        config['SLAPD_CONF'] = os.environ['SLAPD_CONF']
    if 'BDII_DAEMON' not in config:
        config['BDII_DAEMON'] = False
    if 'BDII_RUN_DIR' not in config:
        config['BDII_RUN_DIR'] = '/run/bdii'
    if 'BDII_PID_FILE' not in config:
config['BDII_PID_FILE'] = "%s/bdii-update.pid" % config['BDII_RUN_DIR'] if 'BDII_HOSTNAME' not in config: config['BDII_HOSTNAME'] = 'localhost' for parameter in ['BDII_LOG_FILE', 'BDII_LOG_LEVEL', 'BDII_LDIF_DIR', 'BDII_PROVIDER_DIR', 'BDII_PLUGIN_DIR', 'BDII_READ_TIMEOUT']: if parameter not in config: sys.stderr.write(("Error: Configuration parameter %s is not" " specified in the configuration file %s.\n") % ( parameter, config['BDII_CONFIG_FILE'])) sys.exit(1) for parameter in ['BDII_LDIF_DIR', 'BDII_PROVIDER_DIR', 'BDII_PLUGIN_DIR']: if not os.path.exists(config[parameter]): sys.stderr.write("Error: %s %s does not exist.\n" % ( parameter, config[parameter])) sys.exit(1) if 'BDII_LOG_LEVEL' not in config: config['BDII_LOG_LEVEL'] = 'ERROR' else: log_levels = ['CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG'] try: log_levels.index(config['BDII_LOG_LEVEL']) except ValueError: sys.stderr.write(("Error: Log level %s is not an" " allowed level. %s\n") % ( config['BDII_LOG_LEVEL'], log_levels)) sys.exit(1) config['BDII_READ_TIMEOUT'] = int(config['BDII_READ_TIMEOUT']) if config['BDII_DAEMON'] is True: for parameter in ['BDII_PORT', 'BDII_BREATHE_TIME', 'BDII_VAR_DIR', 'BDII_ARCHIVE_SIZE', 'BDII_DELETE_DELAY', 'SLAPD_CONF']: if parameter not in config: sys.stderr.write(("Error: Configuration parameter %s is not" " specified in the configuration file %s.\n") % (parameter, config['BDII_CONFIG_FILE'])) sys.exit(1) if os.path.exists(config['SLAPD_CONF']): config['BDII_PASSWD'] = {} config['BDII_PASSWD_FILE'] = {} if not os.path.exists(config['BDII_RUN_DIR']): os.makedirs(config['BDII_RUN_DIR']) rootdn = False rootpw = False filename = "" for line in open(config['SLAPD_CONF']): if line.find("rootdn") > -1: rootdn = line.replace("rootdn", "").strip() rootdn = rootdn.replace('"', '').replace(" ", "") filename = rootdn.replace('o=', '') if rootpw: config['BDII_PASSWD'][rootdn] = rootpw config['BDII_PASSWD_FILE'][rootdn] = "%s/%s" % ( config['BDII_RUN_DIR'], filename) pf = 
os.open(config['BDII_PASSWD_FILE'][rootdn], os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600) os.write(pf, rootpw) os.close(pf) rootdn = False rootpw = False if line.find("rootpw") > -1: rootpw = line.replace("rootpw", "").strip() if rootdn: config['BDII_PASSWD'][rootdn] = rootpw config['BDII_PASSWD_FILE'][rootdn] = "%s/%s" % ( config['BDII_RUN_DIR'], filename) pf = os.open(config['BDII_PASSWD_FILE'][rootdn], os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600) os.write(pf, rootpw.encode()) os.close(pf) rootdn = False rootpw = False config['BDII_BREATHE_TIME'] = float(config['BDII_BREATHE_TIME']) config['BDII_ARCHIVE_SIZE'] = int(config['BDII_ARCHIVE_SIZE']) config['BDII_DELETE_DELAY'] = int(config['BDII_DELETE_DELAY']) return config def print_usage(): sys.stderr.write('''Usage: %s [ OPTIONS ] -c --config BDII configuration file -d --daemon Run BDII in daemon mode ''' % str(sys.argv[0])) def create_daemon(log_file): try: pid = os.fork() except OSError as e: return((e.errno, e.strerror)) if pid == 0: os.setsid() signal.signal(signal.SIGHUP, signal.SIG_IGN) try: pid = os.fork() except OSError as e: return((e.errno, e.strerror)) if pid == 0: os.umask(0o022) else: os._exit(0) else: os._exit(0) try: maxfd = os.sysconf("SC_OPEN_MAX") except (AttributeError, ValueError): maxfd = 256 for fd in range(3, maxfd): try: os.close(fd) except OSError: pass os.close(0) os.open("/dev/null", os.O_RDONLY) os.close(1) os.open("/dev/null", os.O_WRONLY) # connect stderr to log file e = os.open(log_file, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644) os.dup2(e, 2) os.close(e) sys.stderr = os.fdopen(2, 'a') # Write PID pid_file = open(config['BDII_PID_FILE'], 'w') pid_file.write("%s\n" % str(os.getpid())) pid_file.close() def get_logger(log_file, log_level): log = logging.getLogger('bdii-update') hdlr = logging.StreamHandler(sys.stderr) formatter = logging.Formatter('%(asctime)s: [%(levelname)s] %(message)s') hdlr.setFormatter(formatter) log.addHandler(hdlr) 
log.setLevel(logging.__dict__.get(log_level)) return log def handler(signum, frame): if signum == 14: # Commit suicide process_group = os.getpgrp() os.killpg(process_group, signal.SIGTERM) sys.exit(1) def read_ldif(source): # Get pipe file descriptors read_fd, write_fd = os.pipe() # Fork pid = os.fork() if pid: # Close write file descriptor as we don't need it. os.close(write_fd) read_fh = os.fdopen(read_fd) raw_ldif = read_fh.read() result = os.waitpid(pid, 0) if result[1] > 0: log.error("Timed out while reading %s", source) return "" raw_ldif = raw_ldif.replace("\n ", "") return raw_ldif else: # Close read file d os.close(read_fd) # Set process group os.setpgrp() # Setup signal handler signal.signal(signal.SIGALRM, handler) signal.alarm(config['BDII_READ_TIMEOUT']) # Open pipe to LDIF if source[:7] == 'ldap://': url = source.split('/') command = "ldapsearch -LLL -x -H ldap://%s -b %s 2>/dev/null" % ( url[2], url[3]) pipe = os.popen(command) elif source[:7] == 'file://': pipe = open(source[7:]) else: pipe = os.popen(source) raw_ldif = pipe.read() # Close LDIF pipe pipe.close() try: write_fh = os.fdopen(write_fd, 'w') write_fh.write(raw_ldif) write_fh.close() except IOError: log.error("Information provider %s terminated unexpectedly." 
% source) # Disable the alarm signal.alarm(0) sys.exit(0) def get_dns(ldif): dns = {} last_dn_index = len(ldif) while True: dn_index = ldif.rfind("dn:", 0, last_dn_index) if dn_index == -1: break end_dn_index = ldif.find("\n", dn_index, last_dn_index) dn = ldif[dn_index + 4:end_dn_index].lower() dn = re.sub("\\s*,\\s*", ",", dn) dn = re.sub("\\s*=\\s*", "=", dn) # Replace encoded slash dn = dn.replace("\\5c", "\\\\") # Replace encoded comma dn = dn.replace("\\2c", "\\,") # Replace encoded equals dn = dn.replace("\\3d", "\\=") # Replace encoded plus dn = dn.replace("\\2b", "\\+") # Replace encoded semi colon dn = dn.replace("\\3b", "\\;") # Replace encoded quote dn = dn.replace("\\22", "\\\"") # Replace encoded greater than dn = dn.replace("\\3e", "\\>") # Replace encoded less than dn = dn.replace("\\3c", "\\<") dns[dn] = (dn_index, last_dn_index, end_dn_index) last_dn_index = dn_index return dns def group_dns(dns): grouped = {} for dn in dns: index = dn.rfind(",") root = dn[index + 1:].strip() if root in grouped: grouped[root].append(dn) else: if root in config['BDII_PASSWD']: grouped[root] = [dn] else: if "o=shadow" in config['BDII_PASSWD'] and root == "o=grid": grouped[root] = [dn] elif root != "o=shadow": log.error(("dn suffix %s in not specified in the slapd" " configuration file.") % root) return grouped def convert_entry(entry_string): entry = {} for line in entry_string.split("\n"): index = line.find(":") if index > -1: attribute = line[:index].lower() value = line[index + 1:].strip() if value and line[index + 1] == ":": value = base64.b64decode(line[index + 2:].strip()).decode() if attribute in entry: if value not in entry[attribute]: entry[attribute].append(value) else: entry[attribute] = [value] return entry # From RFC2849 # SAFE-CHAR = %x01-09 / %x0B-0C / %x0E-7F # ; any value <= 127 decimal except NUL, LF, # ; and CR # SAFE-INIT-CHAR = %x01-09 / %x0B-0C / %x0E-1F / # %x21-39 / %x3B / %x3D-7F # ; any value <= 127 except NUL, LF, CR, # ; SPACE, colon 
(":", ASCII 58 decimal) # ; and less-than ("<" , ASCII 60 decimal) # SAFE-STRING = [SAFE-INIT-CHAR *SAFE-CHAR] safe_string = re.compile("^[\x01-\x09\x0B-\x0C\x0E-\x1F\x21-\x39\x3B\x3D-\x7F]" "[\x01-\x09\x0B-\x0C\x0E-\x7F]*$") def needs_encoding(value): if not value: return False return safe_string.search(value) is None def convert_back(entry): entry_string = "dn: %s\n" % entry["dn"][0] entry.pop("dn") for attribute in entry.keys(): attribute = attribute.lower() for value in entry[attribute]: if needs_encoding(value): entry_string += "%s:: %s\n" % (attribute, base64.b64encode(value.encode()).decode()) else: entry_string += "%s: %s\n" % (attribute, value) return entry_string def ldif_diff(dn, old_entry, new_entry): add_attribute = {} delete_attribute = {} replace_attribute = {} old_entry = convert_entry(old_entry) new_entry = convert_entry(new_entry) dn_perserved_case = None for attribute in new_entry.keys(): attribute = attribute.lower() if attribute == "dn": dn_perserved_case = new_entry['dn'][0] continue # If the old entry has the attribue we need to compare values if attribute in old_entry: # If the old entries are different find the modify. if not new_entry[attribute] == old_entry[attribute]: replace_attribute[attribute] = new_entry[attribute] # The old entry does not have the attribute so add it. 
        else:
            add_attribute[attribute] = new_entry[attribute]

    # Checking for removed attributes
    for attribute in old_entry.keys():
        if (attribute.lower() == "dn"):
            continue
        if attribute not in new_entry:
            delete_attribute[attribute] = old_entry[attribute]

    # Create LDIF modify statement
    ldif = ['dn: %s' % dn_perserved_case]
    ldif.append('changetype: modify')
    for attribute in add_attribute.keys():
        attribute = attribute.lower()
        ldif.append('add: %s' % attribute)
        for value in add_attribute[attribute]:
            ldif.append('%s: %s' % (attribute, value))
        ldif.append('-')
    for attribute in replace_attribute.keys():
        attribute = attribute.lower()
        ldif.append('replace: %s' % attribute)
        for value in replace_attribute[attribute]:
            ldif.append('%s: %s' % (attribute, value))
        ldif.append('-')
    for attribute in delete_attribute.keys():
        attribute = attribute.lower()
        ldif.append('delete: %s' % attribute)
        ldif.append('-')
    if len(ldif) > 3:
        ldif = "\n".join(ldif) + "\n\n"
    else:
        ldif = ""
    return ldif


def modify_entry(entry, mods):
    mods = convert_entry(mods)
    entry = convert_entry(entry)
    if 'changetype' in mods:
        # Handle LDIF delete attribute
        if 'delete' in mods:
            for attribute in mods['delete']:
                attribute = attribute.lower()
                if attribute in entry:
                    if attribute in mods:
                        for value in mods[attribute]:
                            try:
                                entry[attribute].remove(value)
                                if len(entry[attribute]) == 0:
                                    entry.pop(attribute)
                            except ValueError:
                                pass
                            except KeyError:
                                pass
                    else:
                        entry.pop(attribute)
        # Handle LDIF replace attribute
        if 'replace' in mods:
            for attribute in mods['replace']:
                attribute = attribute.lower()
                if attribute in entry:
                    if attribute in mods:
                        entry[attribute] = mods[attribute]
        # Handle LDIF add attribute
        if 'add' in mods:
            for attribute in mods['add']:
                attribute = attribute.lower()
                if attribute not in entry:
                    log.debug("attribute: %s" % attribute)
                    entry[attribute] = mods[attribute]
                else:
                    entry[attribute].extend(mods[attribute])
    # Just old style just change
    else:
        for attribute in mods.keys():
            if attribute in entry:
                entry[attribute] = mods[attribute]
    entry_string = convert_back(entry)
    return entry_string


def fix(dns, ldif):
    response = []
    append = response.append
    for dn in dns.keys():
        entry = convert_entry(ldif[dns[dn][0]:dns[dn][1]])
        if dn[:11].lower() == "mds-vo-name":
            if 'objectclass' in entry:
                if 'mds' in [x.lower() for x in entry['objectclass']]:
                    if 'gluetop' in [x.lower() for x in entry['objectclass']]:
                        value = dn[12:dn.index(",")]
                        entry = {'dn': [dn],
                                 'objectclass': ['MDS'],
                                 'mds-vo-name': [value]}
        entry = convert_back(entry)
        append(entry)
    response = "".join(response)
    return response


def log_errors(error_file, dns):
    log.debug("Logging Errors")
    request = 0
    dn = None
    error_counter = 0
    for line in open(error_file).readlines():
        if line[:7] == 'request':
            request += 1
        else:
            if request > 1:
                try:
                    if not dn == dns[request - 2]:
                        error_counter += 1
                        dn = dns[request - 2]
                        log.warning("dn: %s" % dn)
                except IndexError:
                    log.error("Problem with error reporting ...")
                    log.error("Request Num: %i, Line: %s, dns: %i"
                              % (request, line, len(dns)))
                if len(line) > 5:
                    log.warning(line.strip())
    return error_counter


def main(config, log):
    log.info("Starting Update Process")
    while True:
        log.info("Starting Update")
        stats = {}
        stats['update_start'] = time.time()
        new_ldif = ""

        log.info("Reading static LDIF files ...")
        stats['read_start'] = time.time()
        ldif_files = os.listdir(config['BDII_LDIF_DIR'])
        for file_name in ldif_files:
            if file_name[-5:] == '.ldif':
                if file_name[0] not in ('#', '.'):
                    file_url = "file://%s/%s" % (config['BDII_LDIF_DIR'],
                                                 file_name)
                    log.debug("Reading %s" % file_url[7:])
                    response = read_ldif(file_url)
                    new_ldif = new_ldif + response
        stats['read_stop'] = time.time()

        log.info("Running Providers")
        stats['providers_start'] = time.time()
        providers = os.listdir(config['BDII_PROVIDER_DIR'])
        for provider in providers:
            if provider[-1:] != '~' or (provider[0] in ('#', '.')):
                log.debug("Running %s/%s" % (config['BDII_PROVIDER_DIR'],
                                             provider))
                response = read_ldif("%s/%s" % (config['BDII_PROVIDER_DIR'],
                                                provider))
                new_ldif = new_ldif + response
        stats['providers_stop'] = time.time()
        new_dns = get_dns(new_ldif)
        ldif_modify = ""

        log.info("Running Plugins")
        stats['plugins_start'] = time.time()
        plugins = os.listdir(config['BDII_PLUGIN_DIR'])
        for plugin in plugins:
            if plugin[-1:] != '~' or (plugin[0] in ('#', '.')):
                log.debug("Running %s/%s" % (config['BDII_PLUGIN_DIR'],
                                             plugin))
                response = read_ldif("%s/%s" % (config['BDII_PLUGIN_DIR'],
                                                plugin))
                modify_dns = get_dns(response)
                for dn in modify_dns.keys():
                    if dn in new_dns:
                        mod_entry = modify_entry(
                            new_ldif[new_dns[dn][0]:new_dns[dn][1]],
                            response[modify_dns[dn][0]:modify_dns[dn][1]])
                        start = len(new_ldif)
                        end = start + len(mod_entry)
                        new_dns[dn] = (start, end)
                        new_ldif = new_ldif + mod_entry
                    else:
                        ldif_modify += response[
                            modify_dns[dn][0]:modify_dns[dn][1]
                        ]
        stats['plugins_stop'] = time.time()

        log.debug("Doing Fix")
        new_ldif = fix(new_dns, new_ldif)

        log.debug("Writing new_ldif to disk")
        if config['BDII_LOG_LEVEL'] == 'DEBUG':
            dump_fh = open("%s/new.ldif" % (config['BDII_VAR_DIR']), 'w')
            dump_fh.write(new_ldif)
            dump_fh.close()

        if not config['BDII_DAEMON']:
            print(new_ldif)
            sys.exit(0)

        log.info("Reading old LDIF file ...")
        stats['read_old_start'] = time.time()
        old_ldif_file = "%s/old.ldif" % (config['BDII_VAR_DIR'])
        if os.path.exists(old_ldif_file):
            old_ldif = read_ldif("file://%s" % (old_ldif_file))
        else:
            old_ldif = ""
        stats['read_old_stop'] = time.time()

        log.debug("Starting Diff")
        ldif_add = []
        ldif_delete = []
        new_dns = get_dns(new_ldif)
        old_dns = get_dns(old_ldif)
        for dn in new_dns.keys():
            if dn in old_dns:
                old = old_ldif[old_dns[dn][0]:old_dns[dn][1]].strip()
                new = new_ldif[new_dns[dn][0]:new_dns[dn][1]].strip()
                # If the entries are different we need to compare them
                if not new == old:
                    entry = ldif_diff(dn, old, new)
                    ldif_modify += entry
            else:
                ldif_add.append(dn)

        # Checking for removed entries
        for dn in old_dns.keys():
            if dn not in new_dns:
                ldif_delete.append(
                    old_ldif[old_dns[dn][0] + 4:old_dns[dn][2]].strip())
        log.debug("Finished Diff")

        log.debug("Sorting Add Keys")
        ldif_add.sort(key=lambda x: len(x))

        log.debug("Writing ldif_add to disk")
        if config['BDII_LOG_LEVEL'] == 'DEBUG':
            dump_fh = open("%s/add.ldif" % (config['BDII_VAR_DIR']), 'w')
            for dn in ldif_add:
                dump_fh.write(new_ldif[new_dns[dn][0]:new_dns[dn][1]])
                dump_fh.write("\n")
            dump_fh.close()

        log.debug("Adding New Entries")
        stats['db_update_start'] = time.time()
        if config['BDII_LOG_LEVEL'] == 'DEBUG':
            error_file = "%s/add.err" % config['BDII_VAR_DIR']
        else:
            error_file = tempfile.mktemp()
        roots = group_dns(ldif_add)
        suffixes = list(roots.keys())
        if "o=shadow" in suffixes:
            index = suffixes.index("o=shadow")
            if index > 0:
                suffixes[index] = suffixes[0]
                suffixes[0] = "o=shadow"
        add_error_counter = 0
        for root in suffixes:
            try:
                bind = root
                if "o=shadow" in config['BDII_PASSWD']:
                    if root == "o=grid":
                        bind = "o=shadow"
                input_fh = os.popen(("ldapadd -d 256 -x -c -H ldap://%s:%s"
                                     " -D %s -y %s >/dev/null 2>%s") % (
                    config['BDII_HOSTNAME'], config['BDII_PORT'], bind,
                    config['BDII_PASSWD_FILE'][bind], error_file), 'w')
                for dn in roots[root]:
                    input_fh.write(new_ldif[new_dns[dn][0]:new_dns[dn][1]])
                    input_fh.write("\n")
                input_fh.close()
            except (IOError, KeyError):
                log.error("Could not add new entries to the database.")
        add_error_counter += log_errors(error_file, ldif_add)
        if not config['BDII_LOG_LEVEL'] == 'DEBUG':
            os.remove(error_file)

        log.debug("Writing ldif_modify to disk")
        if config['BDII_LOG_LEVEL'] == 'DEBUG':
            dump_fh = open("%s/modify.ldif" % (config['BDII_VAR_DIR']), 'w')
            dump_fh.write(ldif_modify)
            dump_fh.close()

        log.debug("Modify New Entries")
        if config['BDII_LOG_LEVEL'] == 'DEBUG':
            error_file = "%s/modify.err" % config['BDII_VAR_DIR']
        else:
            error_file = tempfile.mktemp()
        ldif_modify_dns = get_dns(ldif_modify)
        roots = group_dns(ldif_modify_dns)
        modify_error_counter = 0
        for root in roots.keys():
            try:
                bind = root
                if "o=shadow" in config['BDII_PASSWD']:
                    if root == "o=grid":
                        bind = "o=shadow"
                input_fh = os.popen(("ldapmodify -d 256 -x -c -H ldap://%s:%s -D"
                                     " %s -y %s >/dev/null 2>%s") % (
                    config['BDII_HOSTNAME'], config['BDII_PORT'], bind,
                    config['BDII_PASSWD_FILE'][bind], error_file), 'w')
                for dn in roots[root]:
                    input_fh.write(ldif_modify[
                        ldif_modify_dns[dn][0]:ldif_modify_dns[dn][1]
                    ])
                    input_fh.write("\n")
                input_fh.close()
            except (IOError, KeyError):
                log.error("Could not modify entries in the database.")
        modify_error_counter += log_errors(error_file,
                                           list(ldif_modify_dns.keys()))
        if config['BDII_LOG_LEVEL'] != 'DEBUG':
            os.remove(error_file)

        log.debug("Sorting Delete Keys")
        ldif_delete.sort(key=lambda x: len(x))

        log.debug("Writing ldif_delete to disk")
        if config['BDII_LOG_LEVEL'] == 'DEBUG':
            dump_fh = open("%s/delete.ldif" % config['BDII_VAR_DIR'], 'w')
            for dn in ldif_delete:
                dump_fh.write("%s\n" % (dn))
            dump_fh.close()

        # Delayed delete Function
        if config['BDII_DELETE_DELAY'] > 0:
            log.debug("Doing Delayed Delete")
            delete_timestamp = time.time()
            # Get DNs of entries to be deleted not yet in delayed delete so
            # their status can be updated
            new_delayed_delete_file = '%s/new_delayed_delete.pkl' % config[
                'BDII_VAR_DIR'
            ]
            try:
                nfh = open(new_delayed_delete_file, 'w')
                nfh.write("")
            except IOError:
                log.error("Unable to open new_delayed_delete file %s"
                          % new_delayed_delete_file)
            delayed_delete_file = '%s/delayed_delete.pkl' % config[
                'BDII_VAR_DIR'
            ]
            if os.path.exists(delayed_delete_file):
                file_handle = open(delayed_delete_file, 'rb')
                delay_delete = pickle.load(file_handle)
                file_handle.close()
            else:
                delay_delete = {}
            # Add remove cache timestamps that have been readded
            for dn in list(delay_delete.keys()):
                if dn not in ldif_delete:
                    log.debug("Removing %s from cache (readded)" % dn)
                    delay_delete.pop(dn)
            # Add current timestamp for new deletes
            for dn in ldif_delete:
                if dn not in delay_delete:
                    delay_delete[dn] = delete_timestamp
                    nfh.write("%s\n" % (dn))
            nfh.close()
            # Remove delayed deletes from LDIF or remove from cache
            for dn in list(delay_delete.keys()):
                if delay_delete[dn] + config[
                        'BDII_DELETE_DELAY'] >= delete_timestamp:
                    ldif_delete.remove(dn)
                else:
                    delay_delete.pop(dn)
            # Store Delayed Deletes
            log.debug("Storing delayed deletes")
            file_handle = open(delayed_delete_file, 'wb')
            pickle.dump(delay_delete, file_handle)
            file_handle.close()

        log.debug("Deleting Old Entries")
        if config['BDII_LOG_LEVEL'] == 'DEBUG':
            error_file = "%s/delete.err" % config['BDII_VAR_DIR']
        else:
            error_file = tempfile.mktemp()
        roots = group_dns(ldif_delete)
        delete_error_counter = 0
        for root in roots.keys():
            try:
                bind = root
                if "o=shadow" in config['BDII_PASSWD']:
                    if root == "o=grid":
                        bind = "o=shadow"
                input_fh = os.popen(("ldapdelete -d 256 -x -c -H ldap://%s:%s"
                                     " -D %s -y %s >/dev/null 2>%s") % (
                    config['BDII_HOSTNAME'], config['BDII_PORT'], bind,
                    config['BDII_PASSWD_FILE'][bind], error_file), 'w')
                for dn in roots[root]:
                    input_fh.write("%s\n" % dn)
                    log.debug("Deleting %s" % dn)
                input_fh.close()
            except (IOError, KeyError):
                log.error("Could not delete old entries in the database.")
        delete_error_counter += log_errors(error_file, ldif_delete)
        if config['BDII_LOG_LEVEL'] != 'DEBUG':
            os.remove(error_file)

        roots = group_dns(new_dns)
        stats['query_start'] = time.time()
        if os.path.exists("%s/old.ldif" % config['BDII_VAR_DIR']):
            os.remove("%s/old.ldif" % config['BDII_VAR_DIR'])
        if os.path.exists("%s/old.err" % config['BDII_VAR_DIR']):
            os.remove("%s/old.err" % config['BDII_VAR_DIR'])
        for root in roots.keys():
            # Stop flapping due to o=shadow
            if root == "o=shadow":
                command = ("ldapsearch -LLL -x -H ldap://%s:%s -b %s -s base"
                           " >> %s/old.ldif 2>> %s/old.err") % (
                    config['BDII_HOSTNAME'], config['BDII_PORT'], root,
                    config['BDII_VAR_DIR'], config['BDII_VAR_DIR'])
            else:
                command = ("ldapsearch -LLL -x -H ldap://%s:%s -b %s"
                           " >> %s/old.ldif 2>> %s/old.err") % (
                    config['BDII_HOSTNAME'], config['BDII_PORT'], root,
                    config['BDII_VAR_DIR'], config['BDII_VAR_DIR'])
            result = os.system(command)
            if result > 0:
                log.error("Query to self failed.")
        stats['query_stop'] = time.time()

        out_file = "%s/archive/%s-snapshot.gz" % (
            config['BDII_VAR_DIR'], time.strftime('%y-%m-%d-%H-%M-%S'))
        log.debug("Creating GZIP file")
        os.system("gzip -c %s/old.ldif > %s" % (config['BDII_VAR_DIR'],
                                                out_file))
        infosys_output = ""
        if len(old_ldif) == 0:
            log.debug("ldapadd o=infosys compression")
            command = "ldapadd"
            infosys_output += "dn: o=infosys\n"
            infosys_output += "objectClass: organization\n"
            infosys_output += "o: infosys\n\n"
            infosys_output += "dn: CompressionType=zip,o=infosys\n"
            infosys_output += "objectClass: CompressedContent\n"
            infosys_output += "Hostname: %s\n" % config['BDII_HOSTNAME']
            infosys_output += "CompressionType: zip\n"
            infosys_output += "Data: file://%s\n\n" % out_file
        else:
            log.debug("ldapmodify o=infosys compression")
            command = "ldapmodify"
            infosys_output += "dn: CompressionType=zip,o=infosys\n"
            infosys_output += "changetype: Modify\n"
            infosys_output += "replace: Data\n"
            infosys_output += "Data: file://%s\n\n" % out_file
        try:
            output_fh = os.popen(("%s -x -c -H ldap://%s:%s -D o=infosys -y %s"
                                  " >/dev/null") % (
                command, config['BDII_HOSTNAME'], config['BDII_PORT'],
                config['BDII_PASSWD_FILE']['o=infosys']), 'w')
            output_fh.write(infosys_output)
            output_fh.close()
        except (IOError, KeyError):
            log.error("Could not add compressed data to the database.")
        old_files = os.popen("ls -t %s/archive" % config[
            'BDII_VAR_DIR']).readlines()
        log.debug("Deleting old GZIP files")
        for file in old_files[config['BDII_ARCHIVE_SIZE']:]:
            os.remove("%s/archive/%s" % (config['BDII_VAR_DIR'],
                                         file.strip()))
        stats['db_update_stop'] = time.time()
        stats['update_stop'] = time.time()
        stats['UpdateTime'] = int(stats['update_stop']
                                  - stats['update_start'])
        stats['ReadTime'] = int(stats['read_old_stop']
                                - stats['read_old_start'])
        stats['ProvidersTime'] = int(stats['providers_stop']
                                     - stats['providers_start'])
        stats['PluginsTime'] = int(stats['plugins_stop']
                                   - stats['plugins_start'])
        stats['QueryTime'] = int(stats['query_stop'] - stats['query_start'])
        stats['DBUpdateTime'] = int(stats['db_update_stop']
                                    - stats['db_update_start'])
        stats['TotalEntries'] = len(old_dns)
        stats['NewEntries'] = len(ldif_add)
        stats['ModifiedEntries'] = len(ldif_modify_dns.keys())
        stats['DeletedEntries'] = len(ldif_delete)
        stats['FailedAdds'] = add_error_counter
        stats['FailedModifies'] = modify_error_counter
        stats['FailedDeletes'] = delete_error_counter
        for key in stats.keys():
            if key.find("_") == -1:
                log.info("%s: %i" % (key, stats[key]))

        infosys_output = ""
        if len(old_ldif) == 0:
            log.debug("ldapadd o=infosys updatestats")
            command = "ldapadd"
            infosys_output += "dn: Hostname=%s,o=infosys\n" % config[
                'BDII_HOSTNAME']
            infosys_output += "objectClass: UpdateStats\n"
            infosys_output += "Hostname: %s\n" % config['BDII_HOSTNAME']
            for key in stats.keys():
                if key.find("_") == -1:
                    infosys_output += "%s: %i\n" % (key, stats[key])
            infosys_output += "\n"
        else:
            log.debug("ldapmodify o=infosys updatestats")
            command = "ldapmodify"
            infosys_output += "dn: Hostname=%s,o=infosys\n" % config[
                'BDII_HOSTNAME']
            infosys_output += "changetype: Modify\n"
            for key in stats.keys():
                if key.find("_") == -1:
                    infosys_output += "replace: %s\n" % key
                    infosys_output += "%s: %i\n" % (key, stats[key])
                    infosys_output += "-\n"
            infosys_output += "\n"
        try:
            output_fh = os.popen(("%s -x -c -H ldap://%s:%s -D o=infosys -y %s"
                                  " >/dev/null") % (
                command, config['BDII_HOSTNAME'], config['BDII_PORT'],
                config['BDII_PASSWD_FILE']['o=infosys']), 'w')
            output_fh.write(infosys_output)
            output_fh.close()
        except (IOError, KeyError):
            log.error("Could not add stats entries to the database.")

        old_ldif = None
        new_ldif = None
        new_dns = None
        ldif_delete = None
        ldif_add = None
        ldif_modify = None
        log.info("Sleeping for %i seconds" % int(config['BDII_BREATHE_TIME']))
        time.sleep(config['BDII_BREATHE_TIME'])


if __name__ == '__main__':
    config = parse_options()
    config = get_config(config)
    if config['BDII_DAEMON']:
        create_daemon(config['BDII_LOG_FILE'])
        # Giving some time for the init.d script to finish
        time.sleep(3)
    else:
        # connect stderr to log file
        e = os.open(config['BDII_LOG_FILE'],
                    os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
        os.dup2(e, 2)
        os.close(e)
        sys.stderr = os.fdopen(2, 'a')
    log = get_logger(config['BDII_LOG_FILE'], config['BDII_LOG_LEVEL'])
    main(config, log)
bdii-6.0.3/codemeta.json000066400000000000000000000043471463754072600151270ustar00rootroot00000000000000{
  "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
  "@type": "Code",
  "name": "Berkeley Database Information Index (BDII)",
  "description": "The BDII implementation of the information",
  "provider": {
    "@type": "Organization",
    "name": "",
    "url": ""
  },
  "maintainer": [
    {
      "@type": "Person",
      "@id": "https://orcid.org/0000-0002-5686-3193",
      "name": "Baptiste",
      "familyName": "Grenier",
      "affiliation": {
        "@type": "Organization",
        "name": "EGI Foundation",
        "url": "https://www.egi.eu"
      }
    },
    {
      "@type": "Person",
      "@id": "https://orcid.org/0000-0002-5152-5902",
      "name": "Enol",
      "familyName": "Fernández",
      "affiliation": {
        "@type": "Organization",
        "name": "EGI Foundation",
        "url": "https://www.egi.eu"
      }
    },
    {
      "@type": "Person",
      "@id": "https://orcid.org/0000-0001-7949-2199",
      "name": "Andrea",
      "familyName": "Manzi",
      "affiliation": {
        "@type": "Organization",
        "name": "EGI Foundation",
        "url": "https://www.egi.eu"
      }
    },
    {
      "@type": "Person",
      "@id": "https://orcid.org/0000-0001-5265-3175",
      "name": "Mattias",
      "familyName": "Ellert",
      "affiliation": {
        "@type": "Organization",
        "name": "Uppsala Universitet",
        "url": "https://www.physics.uu.se"
      }
    },
    {
      "@type": "Role",
      "roleName": "Security and Vulnerability",
      "url": ""
    }
  ],
  "operatingSystem": [
    "centos 7",
    "centos 8",
    "centos 9"
  ],
  "installUrl": "https://github.com/EGI-Federation/glite-info-update-endpoints/releases",
  "buildInstructions": "http://gridinfo-documentation.readthedocs.io/",
  "releaseNotes": "",
  "codeRepository": "https://github.com/EGI-Federation/bdii",
  "contIntegration": "https://travis-ci.org/EGI-Federation/bdii",
  "networkStack": "",
  "supportUnit": {
    "@type": "contactPoint",
"contactType": "GGUS", "name": "Information System Development" }, "developmentStatus": "supported", "author": [], "citation": "", "dateCreated": "", "dateModified": "", "keywords": [ "grid", "information", "index" ], "license": "Apache-2.0" } bdii-6.0.3/docs/000077500000000000000000000000001463754072600133755ustar00rootroot00000000000000bdii-6.0.3/docs/Makefile000066400000000000000000000011311463754072600150310ustar00rootroot00000000000000# Minimal makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build SPHINXPROJ = BDII SOURCEDIR = . BUILDDIR = _build # Put it first so that "make" without argument is like "make help". help: @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) .PHONY: help Makefile # Catch-all target: route all unknown targets to Sphinx using the new # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). %: Makefile @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)bdii-6.0.3/docs/conf.py000066400000000000000000000112461463754072600147000ustar00rootroot00000000000000#!/usr/bin/env python3 # -*- coding: utf-8 -*- # # BDII documentation build configuration file, created by # sphinx-quickstart on Wed Dec 21 17:13:25 2016. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. 
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))


# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = 'BDII'
copyright = '2016, Laurence Field, Maria Alandes Pradillo'
author = 'Laurence Field, Maria Alandes Pradillo'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '5.2.23'
# The full version, including alpha/beta/rc tags.
release = '5.2.23-1'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages.  See the documentation for
# a list of builtin themes.
#
#html_theme = 'alabaster'

# Theme options are theme-specific and customize the look and feel of a theme
# further.  For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']


# -- Options for HTMLHelp output ------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = 'BDIIdoc'


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',

    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'BDII.tex', 'BDII Documentation',
     'Laurence Field, Maria Alandes Pradillo', 'manual'),
]


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'bdii', 'BDII Documentation',
     [author], 1)
]


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'BDII', 'BDII Documentation',
     author, 'BDII', 'One line description of project.',
     'Miscellaneous'),
]
bdii-6.0.3/docs/index.rst000066400000000000000000000007171463754072600152430ustar00rootroot00000000000000.. BDII documentation master file, created by
   sphinx-quickstart on Wed Dec 21 17:13:25 2016.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to BDII's documentation!
================================

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   source/intro
   source/metadata

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
bdii-6.0.3/docs/source/000077500000000000000000000000001463754072600146755ustar00rootroot00000000000000bdii-6.0.3/docs/source/images/000077500000000000000000000000001463754072600161425ustar00rootroot00000000000000bdii-6.0.3/docs/source/images/InfoSysStructure.jpg000066400000000000000000001443571463754072600221710ustar00rootroot00000000000000
)Ui0,"My>:Tw4QExQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQUwMJi9;"I6OH^i9~֤r$Ѭ0e9Q*pf6n$| %w&gT(1kjޘ=cCM_k ɽc x2ir];Ąlj|kwLOIg-ٴmyxXDCvfɝ~] =k{x9s$@l_R2µuiXx7t\\+f^Ss|׎wS+&e-6VWMSmҶ( &v9Ē-nd]:$Sa%ԫ*>3]%?Wv0~SS Ԭ ǻثnȫW/ukss,EfD 댁K o?nn%%3J̼へ zwuWK;'\.rVX}V%6I5(ɟ!;Oۧ a)&h9ݰ1ۚ(w{ZIGwMeStf?ev>º*()SjUj{G{QZQ@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@o{]ۤr ˺ǸKemY<Ȼdg+̾"i^Bz5r 1م+'zMy?-ZB-7_ r9aӥtai&cZ^QEzGQEQV"Y`6* m|v}1JvdtU4BIl.e`8-Ģ8#y$=E 2:*EWH)!s'G@(Ri2DdT䰁'Yp EOΨf $Wq.P!沺'=GEu77sB\({m9+=2I%-fqqͿֈgPx沅-v.73z|F?:EPԤ3BmC򍿭$7ZK:$ScTb4(yg 6h^\^jA$:g3E_/<ТE4+oyy~uz–N3q(@h1]_>ɧv;n<;>^ד{dټc;Eg{2Ǥ,Yc Ԗ Xt@ ~,3f/TME M6i 8Ȓ0QĚ6yK]3C/ U ]Bwdom`i^%]}b J[9}#;^_JW"4(+c_|f=sl >O]Eg ɣ_$zxiDBPXہ 't9IIOtl(O-~6JERPMENxH`[}>l]JK[MD (LIc/vaj/pM>lۑ.~%ݦAo5r8X*[Os5>L:4h27 3]˱ޡH0ˑ!჏4?YY.x]rĺ͂,kmjpn2O@Ey}~K5 NRMumF)ALj4_!Ojq8<Fq@ֵdhڏݷv'OJvY_~WڠInݷrCƟ$k7kֵ7ӬF5脗>ZĤ?{ Nzlb [MvJH:ddcI"ESylnN$O'l@Hj3^eNM|2hRκI_6fhʌ URlgGKm +2p˿z0z$վonpzgej_5*]'5rCv4υM5Idx|n<>YxL<-6lҵLHYJǾ+ >*?Y8Rog*ы<}&O?wǫ[-ėeH08¨x'GI26eaG2 .?Iksb2f [z\4Rc>%12| wחL> kq[E2[`ybqֻKo,PgEPEPEPEPEUԭ4s=pF;R~kVWZL yp8?T`.IC=ݽ|O__\߉_UMXoCSAGO\KɌB+v.i=/tHNT'*@RAԢXJ"N鱟ٛx[DN=c~="Ƨni78u+F'YBxL*??>O|{+8~a:ȡee= uq|:̚N{e! 
~X?0= huhSG$^9id5"+wvfg\'qr(Co^ԬrFY1ւդF#[<탠,Ϲ@Q]ZIi v̬9c6^ou-Y'b!GںW@NWԋM[[zkX[I&&͆kx-_茤AEU&#._qkQԚ?}EOu^#nⓣ!#Vc ( ( ( ( ( ( ( ( ( ( ( ( (1fZ~iOo\`[ S~)Ҭ.]kVI3c zλz(qMdeM勈KJU%wbOZЏ|OyirZI#Syd|O5Q@u;cDӼ+p<>fw)Fz6ൽc5-o\2ܕ'8sOEsXZ4)+l񌑒s0ON}jׁ"^!k,JRI,7Lv|IY5]KMҵ[U1sbEd=UR&̺ƿ><sj3''[P?F<7i?gݛ7et_ZĽ^; JOcq],}G;mf&:6A۴p4M]r+sɉ(O4z>hh(k^/~[!/奿Vs?:r7}g{k(꬀RR{E(r/xz\f=' UƧ*y%|mXM]IR#XJ?M/q,B@sKӷF/n`?ΐ=pI -\'?:"Ӵ(^(|eb^]+LeUz:][Us>]𶈘ƕiǬ@:UtRoY Җ) +?5ț}CZ|H,@YkKHZ4(C ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ((BU@֟U|LQ!4T٫*qd$*M9VPe᧔Ei uQI&Ɖl0.+˳e_eG 4oڋ,˛[~JWYY[֩mi *IԔާL QEaEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEP\w`~ :kGZRe&4&Uԯۢ`cj@iOq0m5Ԓ<4)GQjrodtG4y$m2'8Vޫj^2дm{Ɠ/EFrh8>Ʃ-^k]:PjF0{GO(-|-g4j Q;rI=8z#^Q-9n>Y-"^1Qd#DGOEHΞ.j>Q(8ힵ(-|/g,@nM/VےIzJMl^6Y̳A ʺ_ڲZMwUxܞS229 GC[l42[j($=Ms&zEM 3V-Ɲs]ʞTcYW>=Զ;%:08#!qULdh"з pCֱf oP̭r0PGOj"!w)4,8 !5kKI\C7Sc%8>:7P'1VME+EŵZ٘c'_OFȵפ_RG?Qje*QG?Qje*s0E+kn'hY@hР = ?ή? '*#QE# (+j6lܠO&j+>+7ϿO3u?R)O&UuFI_*}3޲s_'Mh?k/ Mtz~mXC{e'o2GW#piuj[EI|G.O?m]ԟ4}~SEwO&]ԟ4(cu?R)O&E}w.O?>u?R) GWҍ7S!Y98S]EI|G.O?ʹm564Ob#23=hQyw{ G}~S_'M6.]ԟ4}~SEw,Nע9&i͆ 0A_@=kNl?"ֵuPmSHlQElQGE5rzBUKb(Q@Q@Q@e{MEWi(S nŠ(QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEW_Qe>&ݫ0GY/4kMMOI|vG9r6\]iFNv:dqX3i>"afMbGֺ))48Ѵ5)umb.QmJy<ڇԖ?uIWI8!&fm>ڇj/ZTT=gۇ}/u%,#TNt j٪"KpOс'֚q#v_N20q*9!WޮUm?Kxb Xf;$&%WO3[Q\r%i ?uBnujzFcmeInZE8mg|_f:xۓw81z̛AxjNkk>U񅷇GWdm=mƏ9ۈU m"ZKs*Q:ZkUݦ. 
&{xΰʃog>VgkغҒ,*W"}&_W\ah~3@I|zӚlqű@;𝏋']:fl7 bM+w sSCzɆSE$r.c{97zm {jS50cx$v^f_ 6?'Idɐ\ #ɸRБ> m ^5{;\`W`_;g<uVѬnH2Ʋp1SBݧ3=^BP<7V+kXEƑ1i’Opm€c֏Is}EÒ2yکjo)u XQ,W0^lmu^s1I4_./4xY綂wRETbB@3\E6R_ڭ 5yqߦ{]SWpݍR$KV@9$&Y3#bTWSE';$j@uj[,;Ry]8|pIkӠ+뿶\{z ͹ngf "0[33c czfڲ,(Q#K7j3w_@jŧ'U?"ֵva\obŧ'Gŧ'WRՓGVR" 0GWhwwQE!Q@Q@Q@e{MEWi(S nŠ(QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEW_b)AYdQEhPQEQEQEQEQEF_Z״Uz!OBU=(Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@eCLr6ájQM6К8riS q}OkGY^6 2NAWRl&&A=Wz叄u 5 95#֞|֊Nj3ִ H^[[̠,縒"q|旲ahh]J2y:xF k}njyeخeҢĚ2 VO5VݩF5gBI>̻W!7}8akwy)0*#x[8??1{)uОuSԵ[-"}pmǖ5VDP?z>׮]I]~rvuh*FT`NëP;REdhQEQEQKJTlRUKqG`*FQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQI@{MEWi(9TB[hQ@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@U,n76vHTQ{/N[Lr^$O**ܞX9/8ӗY\6Px[DǗڜ~05E}C=6K~-TQEQEQEQEQKJTlRUKqG`*FQEQEQEQEQEQEQEQEQEQEQEQEQEQTP K-+({G٠1"~?}}(wh{G٠1">ߏ}}(wi `<k4?Qh?ȣ9b)$o! Bֺx8b".z5X3",fx|bKcAߏcA~?:+4?Qh?ȣ9cA~?:+4?U!;lC1ݪ]J6ݍJ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ((]E%^6?uIWQE# ( ( ( ( ( ( ( ( ( ( ( ( ( ǃOT}?1QW3袊12tsgqyo]1>-zEKXC[ PiIPNۀZ]Jh"3X;oیǵmvնiűْ)%.0|@&R VPմjV]^SV\ʙWqcW"ac(=m_þ 3\W -=jKYxCĚ΄&.s&B jyB}Er3tXX$𵗅*Kbg׼;DΤD_ʎ^Wo{yaMO2nM)(i_ 9I3St\iFI톪9S,zmoluO6hmn%d@vנIXL*ΗW'jVtq?S4袊5 ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( JZwy kڃr}@[CʌUjnAVa\uez!<Ň @=(sVֆQ~l cvDj6rL6S{Q =71;~ hW=VC0V\=qqPqx$Ubpϱ[rGnDAXW7FI>ɰ$\JY?ݺTOY%qhWp8~Cq'WSe RV6l5xw;Gluk%Df hV@ih(((j_RUcڗw_z[;QR0(((((((((((((zx7TRI?ί/F)l*/=i{G} ]nv֥u[;q=sԷ {KVYo#=r=:V`B=W1x6F?ak,.H;Teq'5D5]^WQ?У0hGcWMޑՔ.$F\ci^ 54%Ե -}peA뷁~}(Zs|#=ŘRF~moRշZ\]:SI0hG`BiedØo>/&`e$l #>G[{/lov}o*FO52Om⏴=.{kq\м7e3O4nd%krdfZF2 3k}(L?Q* x;I.܍wn1ޱ5Σ˗䥼ܣ{G?У0hBns"iqd2FؒŸ `*}:u]b۝GyvdDlcpP8$~&}(aTk}6Pi//> 0hG`B2}BgKQL?U!KlAsݪ4/FQ]EPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPE",AԚΜxͣr66 29[y%2u8#~5[HΘK40@ ҟG7>Y OZy >5xTǗ9MMetyqz~Ujqsq}ypɧ핉kmtAco%'/qsp1S{Y/c{8!xG@H=HmD'aT!s^*zuqb2 BOQ@Nws8{uqbC]gghzչco+.2YWҪH% E%Rq"#^n #a.2˰ߚuoHnDF9'RɪKsdy:?ҋ+DOm6{0e S\c ]pZ 
84-d4Ff\'AO=ey+5txp?f+ܳERQEQEF?/)*QKJU-(QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE?_?hV|0Y׼uU-+vQEIAEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPMsN4GcRV+(8&o#<J;Y4 ҲvDĈd;XZ;)Qb]Uik_OoU2xK:}ɵ]A$wP+ׂv pqrZZt5OlF]?UB3߅<@:yǖplr=j=s2XnFiL#$|p3ОG]m]'R+vc2V*3cd qqz9t9u{[;BtrN>I/t'Tfp>P3q޴^)KR[1`d\VY^qh*S FDAcsT|m7?|8l߲ }$1@Eyό|_uɴ _Ko6VI$JsORz.D|٘FB;( ( ( ( ( ( ( ( ( ( ( ( (3У ϟF /B*%nŠ((((((((((((((++Ind0H@?rgRKK#v4kGyi61sx@ʅOFR0EyZ Q.W}[tl. ӿ_oz&qӜ}v[{I=`.'{⹹`.f\3GOÿpjv8uY:OݠP@Exm}^"UH)r{׭UuH1`a$Ag#2 #EPEPEPEPEPEPEPEPEPEPEPEPL'9$<*ľ&g˶z)9=? ڼ%}qee?ʶ :nvH$:dqV_9Ѥ`8sL4Qj^on3 RAS^ėJQW^˧ пߪG23'&:_ixKCKTEOuƧY B"M3(;c/N ר|F5Īr7~UB yiao[,"70 /_ u*1YPGq ᱎz{WN45}L XUͺ5>l=۩5&пߡU{>TD?hdxa4wTڍfn/Уϋz'*Keϕ_ IB>/Ш??/Qa>Vt5'} <B˰+oG][ߥ =ԟϋz'(} .|vm~(CR>/Уϋz'*Keϕ_ Jݔ0n3¬miSemnෆ&#D(miG֝?UzWaQӿ;!^=R֝?TmiWkN諾E`ԣ?ZwC}U({;!ӿQE5(miG֝?Uz/GkN?Qx J?ZwC}Q?^ǰjQӿc<WQ3¨nM[:߲MMr;QPPQEQE[PT,нf92`+kqo%j۰xL/mvP`mλoOٳ~} oKO˭jp<f\d'^Q@Ë o.l5]oI.d >ʉm< }/O [۠5,Np9Xtm-xLI#`rA¿C_$~ImgA+CE%moxkFmdYc-=C8;YH>> - {MK),%M}|Q,uh=B;FܡCZ4&IGm̨0 :R-7Q~5j\Ӗ-[P*J ( ((]E%^6?uIWQE# ( ( (n[9㷐E3FG45'@ee4/OWH&rA?ήQKgrSothQXZ} W:zev ̇SU`Ja~6QEqx7Wy|W|HP 3ҊOwa(-A'5A]B˻ se]SO6X7rΆpƅ?MO{5L˟SIl$s[֣j_LaÏP,.'-qaǡ˟O6.LX$ErT0k'X9J)=݁GgTl?S1}H /5IvZ\F: :w|z{%,g,<8aTWoGcBu'B9vU Xc2Il4mKz=%Fܰl qmZ ~0y2'.>XpGEUԭe緂p&r`\!: QI ;:+?DӮ4<[_K} bt$:7ڼ𽞵s)VXaz0%{\wvݢ|v.ll*v*y\Ԛhig#Y݀{T!-a)5jI-|2##9O@|;@'Jc.E C_bN :Ю~?~)DC)tK삿S=!j*pu;eea|ɓG|9'Wti$I[y/ fI-Ĺv/6xoŒMTGږikI Wq^GNA?٪vXkq7Vg >wGr[[t*He# JmoKRD8dy}5r8(4DTzX 乺4s'n m_NʵB39>ՙ&y8e5WG.͍0CyEh^Yi%oQ$d}G4>[ ԫ Ak OvVy;?jZM]aET(((((@4S֨.BN&:nsa=-MTq=z+WвͺPƊI?J^KZN~Fun~ r ~&}Joܛ.wӯAxxR~x@uM5uu{ \_ 3_Q\=YeI=~"ִˠ )?j\$CROf^X0H %Ui;\,u$ Yh6yea0qY񞫫eO{?ZNO.qG]wwocnLĝ]yOuAm'#9QmVnQ=ˎEg>$Q%Dj֟,dՙ֊(s`((j_RUcڗw_z[;QR0((M wI 9A"T{,kp$I##!Iʚ-mi gj/A<+Or쩚]+^&LIhQE!Q@Q@UMWPM+KeaBGs~&zM!nj?Fdw-g|J\kFC#1PzRS¤#xK^">v$J?fnӝJr͟SQ]*VOvW04b課&la=~5,Ѓg}s6HG>N)J"#).kҾ$L-ZE!&]pX t4y#a+RX$Y#qt9}jJkkSy|XsC~VuŠ(((3VgYΉiZ%sH$ MhU'Kl((((٥ozFG1]*Q'p*FQEQEQEEsu s*E j3EZyq0J.)Kid5@x,8$#+)T*; 
MsH]h5oS$Rg}2IrORMX#'"f?_<(١?*a>Qddp$mr?nXx]ӈ>]#qPkREF=7@񎝯*60]wSt~տ^42RU9VS/2kx#x"+gWFoEnQEQE6bH򥔀|+Jɮj0*FQEQEQEQEQEQEQEQEQEQEiUO_]qmSiVDzQYS4C?k'X>&[GԚhSى=ƟZZ`ui?]"~nΊk>88I ޼bXGꨣϸ{823cu̻950- I%W;EƸ,_K]/SgcEVeQ@Q@lRUԿ褫RQ(Q@Q@QA!K%:T*#2b@$4 ¶vq4S&F] nSc%@ ӨlIYX(C ( ( |{"]ы]s>.ūgcnp3ޭUqn[ǸaG^ݗ0}%_~2T_XMH/!?Ǐt+2j*1qD 7h5+DAfOGJL,}D}ե~n7 &9PrO{x=k 6(:B(( /sKRfy[\0UJݮ?\2u7ڜ"q밫irJoQPPQEQEQEiKo+h̆;DB0A9χ[\IBk 񏗿VT%nET((( bV%>(lic%\RX?6dm^;͝b,֪XEV*wAfOEWt>?W'EYhpcݨ<JtB@_ :]~ bC&# ߭>Kn}9b;yv{3Hƹb&cZ;ݍhڍ{jӋYa Bu f5D1 WS8TԗRǪW4gP?KuO.m%yw SG\bZ/ΏboboT=ԣi7i7({OYO&OYO&QE53\_̪9&K!\4(w`(0((ȵפj}ͣ1U&3Z1=_ȗam?cWET4M)4M" if',[swh" (aEPEP\K1kw%?t4Ly#t*}2?>y: 7X@MYQd;icE QLE]CB {xNڸ1 (((sKm67j2AǮ©1'vQE# ( ( (83JƯ^ΪǦEI~"ʻQՇ~URLU(((((}CF]cڣ5uCj XQ|ƛ(IFKX#Xc@=i+z iV1éMB$ %`cZ먦!(5>צY'ȳ}/;EDjбEQEQEQEQEQEQEU{nK,4GRҬVwZ?_Z=hkzDѤ۰rFDi*sVH" (aEPEP\K1kw%?t4Ly#t*}2?>y+&iYQKoI+Z=r]펽sKΰ 88Eݎ#;V5W5ԡce`|tmH| =pEm_\-ؤe~[9ZqM>0rH!+4דafZV&|;D`#ʼ`ME'~Yủ/$18<`CU I}on۳8ϥhY.kHI,\ dM_!4EWAWPPC^^_v flŠ(#((n/!!dK#XcuM.oAع9 kBeJ (aEPEPEP9eCIreH/9}F:4۸$jwʉ)ʺKv&7ETQEQEQEx#._rmQԚ}G^?d&xe%DՖ 3c !ҹ'l>$*jMEu.6=g$G S5Zf%𡱈B+żdwץOm$B~"𗇡%AҒD`g*G ^lH!d\i7Gѯm=KTCko)pKP>\ǀ-$w ]vzKimHsoFX{5[+h r9v#z TeסEODm$ (Š((4ݒTIr?P+FiǾFFxFz&nc)8,pڇ59h0vO&xZZ3%EG;Od/'l:h>0qh\D~t|_?NaTf!-4ƽ,KikD ?O`z>ӷG9!yBC1zֳhGv\iJ[#4|CCel iE^iZ^gڋ{$GV>5v}J" (4 ( (9wz ͉A#,gz`+[wh0Cc"ܫNmaWSVʉWaETQEQEQEd[Z9!skZmϊ-[]S)ݴQEIAEPEPEPXU1,(0\(_ŸU)s`*J ( ( ( ( ( sJXf;1\b0?WvLEVi3L4 l;URw (Q@Q@Q@Q@dZZG LA r#j<#/g)8KviQPPQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEU]N4P/(SЕRqU՟ú,P[3Úzo6쪜++Nx>$FGfUWANji :QEIEKA-R;m vsG}?B-\C\>gwYsYsTTY3.k/OTk/OUJ=3.k/OTk/OUJ=3.k/OTk/OUJ=3,ͨCq -Vo ;F`egvf 'N}zU*w.k/OTk/OUJk>eei*ei*QGpeei*ei*QGpeei*ei*QGpeei*ei*QGpf2Ф} >IX3E=/M3©(rN@#kVO9w4/2-@9 Z;vkQR0(((Ԯ<ֱ[+GG:k.8On#B>]\p* +N${wn֨߳US<g-i5;'Mܛރ*z]goۜn54v\}gރ*:, M2yZ9.z>y?(p&}gރ*iw oz jRIqn}ӧSMTwM7??UoU wdoT}߳T5o-)q鞣nCk 'q,3JKWYkmmk)mfMS$j61ֿеݜwHPӭPF&ep3=+|q~G5h N6^GNo4])]GE)x@.N~PXmθ:qcpmskmե{ |'?98?d|CU?>MsVp\IE\c,N_ŽUX/[kRhRH{lcJAG~k^*nZ_2,7xW>-'o5Kخ#;o2HOuEPE`8(I 
h̓SOurO*Wm\ ]Q>(ȵo_k=vtau 4i":3OaETQEQEQEURP'FwC9GV?j*XQRQOT}?1Yǃ{r%i ?uuu0k9i@x#Qõϧn5|O\)gmqۤ%+myuᨵkei9b]1ס^-uZ 7YJUnKq4l#x4m=c.iPX4[E܄nb0' 뚵d$pK>j]%#qNƳ7!~s#BpϙA9v"1#Yg+zUln+ >GN܊v׺i-;NJe Y p 7l]Kx~FT31\J-OIIK`q}V猿L?OԴ`\xŶJ돥LTYdX#sH{[>XVGRyQ%Bt!@$xV𭾊44#: Q,ͻo4;|[\[M4HF3қw]{"O^8cqx֗Muuk)uq Miqh\z GZw4ZomA dc 4kW$ηeO%rύ8V sZhW!DB$df.S>PkiehYP#$G8=sY׾e*t zJA]Iⶵ}.^-ւNٹ^ @Ч> OY宛m>A޻W%P/Lt|-d>&u+cŨ$U_pq!ַ+KTgJXZ# =Gҡyگ ֡v$I A$;ck=LJ'EFsc<*/??MqJ2z4CEM G/??MO, ̆݃&^v9e,k+L?V?$Gٳ^ی G/??Md5R^~|Evtnݻ?+o݃&^v|]̆݃&Y KJh䕯`EM G/??M.Yv 2*o^v>y?h`!`݃&Yv 2*o^v>y?h`!`݃&Yv 2 oU#11I8-vZ)[w0Mc;#\NmcWS\K{F!.C0֮ܨJ(((( tKH5dF{EhV-njtS 9$?ҷjQ+t (e=S<gֽ sW+?3MsU)J$)QWW~g??W~g?OTU&&e>TU&&e>TU&&e>,?"ֵTo4K#ҿ60Ǡn)E:dx'F_%vuh]Ͱq%d ǧ5$4(((((((((((((((((((((((((C"֩^iCC2E(zFvF5tUmi gj/AadQE# ( ( ( ( ( CЫ^<=P*_čz((((((i_NFͰ3'5jϊ-[]S)QE%Q@Q@Q@~ OV6P8]UJ\JET(((n.@JNX'ZwC}SQod&/QTkN|˹z?ZwC}Q.̻3RƑ]MRmiYs$rr6?e}DJۛU;!ӿ꧖]]U;!ӿYveܽEQӿ;!`]Uk}FCr8d֬iƝŠ(EPXZ.jBRK 2ݮ?NFw,WY-* ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( g_\Á$6H HUrpq¥DEI*@{+Or쩚]+^g=lhMܧ;s; ;+QRPQEQEAyt,ΤƤo#Ws[H299,ǩxW%y(Ea[B?YZX/mQ$2:c+UUT`"L贏GZ,O,~2@&m&wJ9+=wג;ĚNBBz!;X+di^+gEvJs#Xdr??ӧҹa&'=f+h-uYK q[ r -yCsjiW24wd[1bxǽz}pԦ3SWAEVeQ@d'ODO!cxėL06˫IuEǹ7>TǤS ~k*)mr>cH$=Ɗxc_c09_?,WW+3RWAXZ.jrL-j pݬ;[.m,ܸ=~餛ݮ (aEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPUVOf*@`FZUOe,DҕHPN?Jk}hRhD$;0NXCDS[ 6&݅c0_WQExqVHf ~I7q?qjyLEU\_8*e*oʚov?J9Y"ƥU:OpJu×>v ܹ"wנן|.O ^^]3ѥ +@y?jN GcօgxES% 8fQEAAEPEPEPXH /[Wc:~Uj-nK,Dy#5I5O`*Fy\6̸+%(ѿޗqG7,0}GZhnrZ*٧Tz5ת#V0j܎Q.-S^Er.:K$t7adMLg>@!@g+Ŵ4`^\(:(.̾)d"N?o\7>S]O?O\ܶs+ԡf>5Wȸ\ۇQ]XѾV-QU|GvzD\,ZI2D2nu'ޔ Gg͗o #SR|q``&>[WWxKG;?[;+O)J꭮ie=$r\Yz9?+&vj((((((((((((((((((((((((((+;?-j/kF]J `8"vwK1D#EDFQ'v%e`) }CF]cڣ5uCjJ;ZtrL8' 2JRuYwEEr vmFZ}0}ARL~s b/J袴ҵyO$.L(y<ڑEoދzy7@.QEVwZ?_֍gxES% 8+UEgϿ8]mk sŗ6>,BiPLQZFq#ޗCgP4tWOO2kI$ J\p}k&}K~4}0 eDR6#J~zۍ> >imp@ ̣Z'ώ!jKjuUnPItwӚM"_vUrZKbʅY nz vQj˝f5HOLTI'Ҁ=扷G"VoԮV壄<0ppH'n9>2]cF!C`Ă&:NѬ$w7jy3)ءzxkՌu/B}J[#rF~Ҁ(jĚ< LW2q[+%?}S<56x7Lka4dۜ=3ZKgs 
q>7N"8QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEUni-BhXHè!IU=^K&4Rmvp+r<+Ϥxr(-]k^(QEIG2'oC|'*#._rmMGExّYN>ױxXd;w:׈%ytƕՄ',mX#̾lb%@X8X}u$m^f n}hw}-Rii&OZ}SP6U.֑{fd˛Vb||#'v/qۻqWۻ'%dlx:Yo#ɱB'j{Tup*2ww:υo^^~_+˯Fz~QEbhO@5Y!kTIN;=Tt^p v=Suj嶔K#J<бi\t}E!t38 sYpwࠖK3Jƅ6uy+ۨˉ9$?Bh/-(NJ!POk5MmY͢z4-u N0ICȹo V4]I\K$cb4<qv)bBj{K ZV$t*i<$՝'5$t*XLs|"]э]s.Ʈ=(|(j,7$o MZsރCsbPH ޹* 4WQE# ( ( (83JƯ^βmOu s@o9~W6ؘ\(?Fz_YC~+S 4pb>3-CŸduxd2qiZz`x23w" \~[܎}6:oJX 6j Df9+KK7:#u_;OY`F\7EQ).W$e? k+tCZH|!EW!QEyO?OX5FO zέ4UӦUa#KXuclpLPMq!?1ֹ )-,{MnhV7)s2An2;{ӬX5 nwHŤ 0W9E?g}ع6Ly}wn۳ǥcQEZV!>NvO<%#ޝ׭ם?Ě)Y9+TO)JFƭ٤vtQEIAEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPLhi$qwc2M>C"֩^Wb{ KIVX_;]zTDi*rVm]aETQ/j n],H|[\@C^{y /Kt GCQE.ZrNQOEx[RUh^-V_^ek[SiF8OJqeF[%MBRp9w=]Zԗ4G * Fhh2k*%M_v`<2ȃj9z|G+ndЀdsOwFIS 2[ʽUԂ+էV3Zl,b}S֦aE֖5ΣTw y3LZZW<AGxPզIUkK0s$lv*Ԍ'Y<8sb+C v0hUT`: }ywmVV9o:mީ >YIN."D2?=x#29(O$_8O֫ORGWb{3Ţ i # wB'oceqpBo5 :Xz;Vݗrȃ=ߌ%&+Xp2GC^Y6gY k}I kTNVNEjs</,p?֬#_W5+Hx_)!Xd5 v-QR0((((((((((((((((((((((((nlFppF󩪮t]ڨf)J8)ii|VV1ghc$5n<9[_nʩbҴ%khQE!2HhJUA)PׂKѡ%er^F=EkTK_C_O&ABog^=:i i<֕T6$ERQEQETv6wvNݑ֧9˟h،DFqTᆌ͑=A"5QVMu'/Ə0 yb?礭1]5:}EK-.N\YZAz~[XQE|Pj@z'z^#㴚gRQ.F?Z֫mrJ^aETQEQEQE0C҈HŒ)XjWrHwkX#N#?u5RŠ(QEUi;\o*@OǢip$>WjE `1NC ( ( ( ( 6as_VbEw#?l??¤Fma#> *qYP Z(0(3?2y+$1DƈrTީJɮv(((((((((((((((((((((((((}Y;me; hQM;;fF+:L$=Fy[6G ^d)o q!%=O֙c~זv e\TrmQ2cׇ#MC.*&NIAYiJV t?hR/S^G :<8#WtܒG\e@;m*Řm$U5c`vv bPM[Ks_Q5-NH77PܞMd{ ?ܹʺ)';$@?CU?aGڃZ;W r6k?RV]k*`=9ԆX#@{Ut:_6y!V8IrQC>ZvַEdJⶤbs@YNKeMD9WH}u]J `8"r~= #$kOJ5NH;p)h d kai`v[0|S|\H5]nD95S'?=Mm]XZ_ *%85[6 B䶷п,i"}A[hvr[]_l#s UTPSh]ĭ,m9;H^wV|Ub͘|^^H//!Cp>HӬml-` oSfhb47du 0?=Mk麝h.lea$myx 6 Vm`(S-R|GhGs伃r-#G"F]ULk2,F"]HF U9[A\\:Rz(J>ojO"//U z5^yEiwaz )FNޣq[ǜ{smcvtQE-P4̫K{q=I"|ʱ:^mt ƒG!uI4m;nro5KB y5yma]4A#eRBqҟEi&g Bnt bWgYs2BIٓzD>2ŜS5 Bn:(RZi]7d(̠=Ts:5K-B{x<5ysm aǨ]M g9xQ5ou]T:MvJ[->{-H+ g.}UjM'm3mctz&qi[ ɜ; )QkD$ݘZ}O Y:)C5yg—Ȭ%=~vP "RXgܮŠQin6٘Z}2^h:z.1Aw[n4 /dev/null 2>&1 if [ $? 
-eq 0 ]; then return 0 fi if [ -n "${1}" -a "${1}" == "reset" ]; then rm -f ${SLAPD_PID_FILE} rm -f ${SLAPD_LOCK_FILE} fi return 2 } function check_updater(){ if [ ! -f "${UPDATE_PID_FILE}" ]; then [ ! -f "${UPDATE_LOCK_FILE}" ] || return 2 return 1 fi ps $(cat ${UPDATE_PID_FILE}) >/dev/null 2>&1 if [ $? -eq 0 ]; then return 0 fi if [ -n "${1}" -a "${1}" == "reset" ]; then rm -f ${UPDATE_PID_FILE} rm -f ${UPDATE_LOCK_FILE} fi return 2 } function check_updater_hanging(){ # Check for hanging process response=$(ldapsearch -LLL -x -h ${SLAPD_HOST} -p ${SLAPD_PORT} -b o=infosys objectClass=UpdateStats modifyTimestamp 2>/dev/null | grep modifyTimestamp ) if [ $? -eq 0 ]; then time_stamp=$(echo ${response} | cut -d" " -f2) time_string=$(echo ${time_stamp} | sed 's/^\([0-9][0-9][0-9][0-9]\)\([0-9][0-9]\)\([0-9][0-9]\)\([0-9][0-9]\)\([0-9][0-9]\).*/\1-\2-\3 \4:\5/') time_int=$(date --utc --date "${time_string}" +%s) let time_threshold=${time_int}+1200 time_now=$(date --utc +%s) if [ ${time_now} -gt ${time_threshold} ]; then return 1 fi fi return 0 # TODO failsafe? } function start(){ # Check status check_slapd reset RETVAL1=$? check_updater reset RETVAL2=$? if [ $RETVAL1 -eq 0 -a $RETVAL2 -eq 0 ] ; then echo "BDII already started" exit 0 fi # Create RAM Disk if [ "${BDII_RAM_DISK}" = "yes" ]; then mkdir -p ${SLAPD_DB_DIR} mount -t tmpfs -o size=${BDII_RAM_SIZE},mode=0744 tmpfs ${SLAPD_DB_DIR} fi # Remove delayed_delete.pkl if it exists if [ -f "${DELAYED_DELETE}" ] ; then rm -f ${DELAYED_DELETE} fi #Initialize the database directory. 
mkdir -p ${SLAPD_DB_DIR}/stats mkdir -p ${SLAPD_DB_DIR}/glue mkdir -p ${SLAPD_DB_DIR}/grid mkdir -p ${BDII_VAR_DIR}/archive chown -R ${BDII_USER}:${BDII_USER} ${BDII_VAR_DIR} chown -R ${BDII_USER}:${BDII_USER} ${SLAPD_DB_DIR} [ -x /sbin/restorecon ] && /sbin/restorecon -R ${BDII_VAR_DIR} mkdir -p /run/bdii/db chown -R ${BDII_USER}:${BDII_USER} /run/bdii [ -x /sbin/restorecon ] && /sbin/restorecon -R /run/bdii/db $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${SLAPD_DB_DIR}/stats/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${SLAPD_DB_DIR}/glue/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${SLAPD_DB_DIR}/grid/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/old.ldif 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG} ${SLAPD_DB_DIR}/grid/" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG} ${SLAPD_DB_DIR}/stats/" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG} ${SLAPD_DB_DIR}/glue/" if [ ${SLAPD_CONF} = "/etc/bdii/bdii-top-slapd.conf" ] ; then $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG}_top ${SLAPD_DB_DIR}/grid/DB_CONFIG" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG}_top ${SLAPD_DB_DIR}/stats/DB_CONFIG" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG}_top ${SLAPD_DB_DIR}/glue/DB_CONFIG" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/gip/cache/gip/top-urls.conf/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/gip/cache/gip/top-urls.conf-glue2/* 2>/dev/null" else if [ -r "${BDII_VAR_DIR}/gip/cache" ]; then $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/gip/cache/gip/site-urls.conf/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/gip/cache/gip/site-urls.conf-glue2/* 2>/dev/null" fi fi if [ $RETVAL1 -ne 0 ] ; then cd /tmp echo -n "Starting BDII slapd: " COMMAND="${SLAPD} -f ${SLAPD_CONF} -h ${SLAPD_HOST_STRING} -u ${BDII_USER}" eval ${COMMAND} touch ${SLAPD_LOCK_FILE} [ -f 
"${SLAPD_PID_FILE}" ] || sleep 2 check_slapd reset RETVAL=$? if [ ${RETVAL} -gt 0 ]; then echo -n "BDII slapd failed to start" 1>&2 eval log_failure_msg # TODO check a sysconfig option DEBUG_MODE echo "${COMMAND} -d 256" ${COMMAND} -d 256 return 1 else eval log_success_msg fi fi if [ $RETVAL2 -ne 0 ] ; then cd /tmp echo -n "Starting BDII update process: " export SLAPD_CONF=${SLAPD_CONF} $RUNUSER -s /bin/sh ${BDII_USER} -c "sh -l -c '${BDII_UPDATE} -c ${BDII_CONF} -d'" touch ${UPDATE_LOCK_FILE} [ -f ${UPDATE_PID_FILE} ] || sleep 2 check_updater reset RETVAL=$? if [ ${RETVAL} -gt 0 ]; then echo -n "BDII update process failed to start" 1>&2 eval log_failure_msg return 1 else eval log_success_msg fi fi touch $lockfile return 0 } function stop(){ check_slapd RETVAL1=$? check_updater RETVAL2=$? RETVAL=0 echo -n "Stopping BDII update process: " if [ $RETVAL2 -gt 0 ] ; then echo -n "already stopped" 1>&2 eval log_success_msg else UPDATE_PID=$(cat ${UPDATE_PID_FILE}) $RUNUSER -s /bin/sh ${BDII_USER} -c "kill -15 ${UPDATE_PID} 2>/dev/null" ps ${UPDATE_PID} >/dev/null 2>&1 if [ $? = 0 ]; then sleep 2 ps ${UPDATE_PID} >/dev/null 2>&1 if [ $? = 0 ]; then $RUNUSER -s /bin/sh ${BDII_USER} -c "kill -9 ${UPDATE_PID} 2>/dev/null" sleep 2 ps ${UPDATE_PID} >/dev/null 2>&1 if [ $? = 0 ]; then echo -n "Could not kill BDII update process ${UPDATE_PID}" 1>&2 eval log_failure_msg RETVAL=1 fi fi fi if [ ${RETVAL} = 0 ]; then rm -f ${UPDATE_PID_FILE} rm -f ${UPDATE_LOCK_FILE} eval log_success_msg fi fi echo -n "Stopping BDII slapd: " if [ $RETVAL1 -gt 0 ] ; then echo -n "already stopped" 1>&2 eval log_success_msg else SLAPD_PID=$(cat ${SLAPD_PID_FILE}) $RUNUSER -s /bin/sh ${BDII_USER} -c "kill -15 ${SLAPD_PID} 2>/dev/null" ps ${SLAPD_PID} >/dev/null 2>&1 if [ $? = 0 ]; then sleep 2 ps ${SLAPD_PID} >/dev/null 2>&1 if [ $? = 0 ]; then $RUNUSER -s /bin/sh ${BDII_USER} -c "kill -9 ${SLAPD_PID} 2>/dev/null" sleep 2 ps ${SLAPD_PID} >/dev/null 2>&1 if [ $?
= 0 ]; then echo -n "Could not kill BDII slapd process ${SLAPD_PID}" 1>&2 eval log_failure_msg RETVAL=2 fi fi fi if [ ${RETVAL} -ne 2 ]; then rm -f ${SLAPD_PID_FILE} rm -f ${SLAPD_LOCK_FILE} eval log_success_msg fi fi if [ ${RETVAL} -ne 0 ]; then return 1 else mountpoint -q ${SLAPD_DB_DIR} && umount ${SLAPD_DB_DIR} rm -f $lockfile return 0 fi } function status(){ check_slapd RETVAL1=$? check_updater RETVAL2=$? if [ $RETVAL1 -eq 1 ] ; then echo -n "BDII slapd stopped" 1>&2 eval log_success_msg elif [ $RETVAL1 -eq 2 ] ; then echo -n "BDII slapd aborted" 1>&2 eval log_failure_msg else echo -n "BDII slapd running " eval log_success_msg fi if [ $RETVAL2 -eq 1 ] ; then echo -n "BDII updater stopped" 1>&2 eval log_success_msg elif [ $RETVAL2 -eq 2 ] ; then echo -n "BDII updater aborted" 1>&2 eval log_failure_msg else check_updater_hanging RETVAL=$? if [ $RETVAL -eq 1 ] ; then echo -n "BDII update process hanging" 1>&2 eval log_failure_msg else echo -n "BDII updater running " eval log_success_msg fi fi if [ $RETVAL1 -eq 1 -a $RETVAL2 -eq 1 ] ; then return 3 fi if [ $RETVAL1 -eq 2 -o $RETVAL2 -eq 2 ] ; then return 1 fi return 0 } case "$1" in start) start RETVAL=$? ;; stop) stop RETVAL=$? ;; status) status RETVAL=$? ;; reload) ;; restart | force-reload) stop start RETVAL=$? ;; condrestart | try-restart) if [ -f ${SLAPD_LOCK_FILE} ] || [ -f ${UPDATE_LOCK_FILE} ]; then stop start RETVAL=$? 
fi ;; *) echo $"Usage: $0 {start|stop|restart|status|condrestart}" RETVAL=1 esac exit ${RETVAL} bdii-6.0.3/etc/logrotate.d/000077500000000000000000000000001463754072600154425ustar00rootroot00000000000000bdii-6.0.3/etc/logrotate.d/bdii000066400000000000000000000001521463754072600162720ustar00rootroot00000000000000/var/log/bdii/bdii-update.log { daily rotate 30 missingok compress copytruncate } bdii-6.0.3/etc/sysconfig/000077500000000000000000000000001463754072600152245ustar00rootroot00000000000000bdii-6.0.3/etc/sysconfig/bdii000066400000000000000000000001171463754072600160550ustar00rootroot00000000000000#SLAPD_CONF=/etc/bdii/bdii-slapd.conf #SLAPD=/usr/sbin/slapd #BDII_RAM_DISK=no bdii-6.0.3/etc/systemd/000077500000000000000000000000001463754072600147105ustar00rootroot00000000000000bdii-6.0.3/etc/systemd/bdii-slapd-start000066400000000000000000000061151463754072600200010ustar00rootroot00000000000000#! /bin/bash BDII_CONF=${BDII_CONF:-/etc/bdii/bdii.conf} [ -r "${BDII_CONF}" ] && . "${BDII_CONF}" BDII_USER=${BDII_USER:-ldap} BDII_VAR_DIR=${BDII_VAR_DIR:-/var/lib/bdii} SLAPD=${SLAPD:-/usr/sbin/slapd} SLAPD_CONF=${SLAPD_CONF:-/etc/bdii/bdii-slapd.conf} SLAPD_HOST=${SLAPD_HOST:-0.0.0.0} SLAPD_PORT=${SLAPD_PORT:-2170} BDII_IPV6_SUPPORT=${BDII_IPV6_SUPPORT:-no} SLAPD_HOST6=${SLAPD_HOST6:-::} SLAPD_DB_DIR=${SLAPD_DB_DIR:-$BDII_VAR_DIR/db} DB_CONFIG=${DB_CONFIG:-/etc/bdii/DB_CONFIG} DELAYED_DELETE=${DELAYED_DELETE:-${BDII_VAR_DIR}/delayed_delete.pkl} BDII_RAM_SIZE=${BDII_RAM_SIZE:-1500M} if [ "${BDII_IPV6_SUPPORT}" == "yes" ]; then SLAPD_HOST_STRING="ldap://${SLAPD_HOST}:${SLAPD_PORT} ldap://[${SLAPD_HOST6}]:${SLAPD_PORT}" else SLAPD_HOST_STRING="ldap://${SLAPD_HOST}:${SLAPD_PORT}" fi if [ -x /sbin/runuser ] ; then RUNUSER=/sbin/runuser else RUNUSER=su fi # Create RAM Disk if [ "${BDII_RAM_DISK}" = "yes" ]; then mkdir -p ${SLAPD_DB_DIR} mount -t tmpfs -o size=${BDII_RAM_SIZE},mode=0755 tmpfs ${SLAPD_DB_DIR} fi # Remove delayed_delete.pkl if it exists if [ -f 
"${DELAYED_DELETE}" ] ; then rm -f ${DELAYED_DELETE} fi #Initialize the database directory. mkdir -p ${SLAPD_DB_DIR}/stats mkdir -p ${SLAPD_DB_DIR}/glue mkdir -p ${SLAPD_DB_DIR}/grid mkdir -p ${BDII_VAR_DIR}/archive chown -R ${BDII_USER}:${BDII_USER} ${BDII_VAR_DIR} chown -R ${BDII_USER}:${BDII_USER} ${SLAPD_DB_DIR} [ -x /sbin/restorecon ] && /sbin/restorecon -R ${BDII_VAR_DIR} mkdir -p /run/bdii/db chown -R ${BDII_USER}:${BDII_USER} /run/bdii [ -x /sbin/restorecon ] && /sbin/restorecon -R /run/bdii/db $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${SLAPD_DB_DIR}/stats/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${SLAPD_DB_DIR}/glue/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${SLAPD_DB_DIR}/grid/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/old.ldif 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG} ${SLAPD_DB_DIR}/grid/" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG} ${SLAPD_DB_DIR}/stats/" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG} ${SLAPD_DB_DIR}/glue/" if [ ${SLAPD_CONF} = "/etc/bdii/bdii-top-slapd.conf" ] ; then $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG}_top ${SLAPD_DB_DIR}/grid/DB_CONFIG" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG}_top ${SLAPD_DB_DIR}/stats/DB_CONFIG" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG}_top ${SLAPD_DB_DIR}/glue/DB_CONFIG" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/gip/cache/gip/top-urls.conf/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/gip/cache/gip/top-urls.conf-glue2/* 2>/dev/null" else if [ -r "${BDII_VAR_DIR}/gip/cache" ]; then $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/gip/cache/gip/site-urls.conf/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/gip/cache/gip/site-urls.conf-glue2/* 2>/dev/null" fi fi COMMAND="${SLAPD} -f ${SLAPD_CONF} -h \"${SLAPD_HOST_STRING}\" -u ${BDII_USER}" exec sh -c "${COMMAND}" 
bdii-6.0.3/etc/systemd/bdii-slapd.service000066400000000000000000000005641463754072600203070ustar00rootroot00000000000000[Unit] Description=Berkeley Database Information Index - slapd After=network.target network-online.target PartOf=bdii.service StopWhenUnneeded=true [Service] Type=forking EnvironmentFile=-/etc/sysconfig/bdii PIDFile=/run/bdii/db/slapd.pid ExecStart=/usr/share/bdii/bdii-slapd-start ExecStopPost=/bin/sh -c "mountpoint -q /var/lib/bdii/db && umount /var/lib/bdii/db || :" bdii-6.0.3/etc/systemd/bdii.service000066400000000000000000000012531463754072600172020ustar00rootroot00000000000000[Unit] Description=Berkeley Database Information Index Documentation=man:bdii-update(1) After=bdii-slapd.service Requires=bdii-slapd.service BindsTo=bdii-slapd.service [Service] Type=forking PIDFile=/run/bdii/bdii-update.pid EnvironmentFile=-/etc/sysconfig/bdii ExecStart=/bin/sh -c ' \ BDII_CONF=$${BDII_CONF:-/etc/bdii/bdii.conf} ; \ [ -r "$${BDII_CONF}" ] && . "$${BDII_CONF}" ; \ BDII_USER=$${BDII_USER:-ldap} ; \ BDII_UPDATE=$${BDII_UPDATE:-/usr/sbin/bdii-update} ; \ export SLAPD_CONF=$${SLAPD_CONF:-/etc/bdii/bdii-slapd.conf} ; \ /sbin/runuser -s /bin/sh $${BDII_USER} -c "$${BDII_UPDATE} -c $${BDII_CONF} -d ; sleep 2" \ ' [Install] WantedBy=multi-user.target bdii-6.0.3/man/000077500000000000000000000000001463754072600132205ustar00rootroot00000000000000bdii-6.0.3/man/bdii-update.1000066400000000000000000000015021463754072600154700ustar00rootroot00000000000000.TH BDII_UPDATE 1 .SH NAME bdii-update \- the bdii update process .SH SYNOPSIS .B bdii-update [-d] -c .I config-file .SH DESCRIPTION The .B bdii-update process obtains the LDIF by reading files found in the .B ldif directory, running providers found in the .B provider directory and running plugins found in the .B plugin directory. The difference between providers and plugins is that providers return complete entries and plugins provide modifications to existing entries.
The process can be run either as a daemon that periodically synchronizes an LDAP database or as a command that will print the result to stdout. .SH OPTIONS .IP -d Run as a daemon process. .IP "-c config" The configuration to use. .SH FILES .I /etc/bdii.conf .RS The default configuration file. .SH AUTHOR Laurence Field bdii-6.0.3/tests/000077500000000000000000000000001463754072600136075ustar00rootroot00000000000000bdii-6.0.3/tests/ldif/000077500000000000000000000000001463754072600145255ustar00rootroot00000000000000bdii-6.0.3/tests/ldif/bad-data.ldif000066400000000000000000000022011463754072600170150ustar00rootroot00000000000000ofjsdafpojwfpjepjftg409uj4tg0p9ujq-p09jue43oimnvlfk lde'asjg 89u05u09u50950u8t0wre9ifvujaeifoweoifjerfw20povj -pfjrf u0m utofewfwe f]pewfg089uf 4e03thg430uvj 09w5sreu ghqa034jgh 0e9r ugv04u tg gegergepojewrgpjegtfg34]- au f9u09t0u t0u g4r utc4 30r9ti43q09t 8 q4309iotuj43ep0ijgt43aoi WEFJ E4ORFJVOEDJ gc jarvoj4a3 \gjo4rigfgt4hq4 ju6w5j[ kjapo4tgjverw-[p0iugje0rwp9iugt-43it fewnat fewgerj4nwg regrqegrjheqgrqegqregrqe grehytr5yh yg gregregregregreg grhtr5ge3a[p'\orgm ,i-035u4i w-y bhreAeqgq5 email mat regu77uquij2n w0ms8 qu9j4u[smjvy5[m 209q y0ub62[VLREOI;KARIJTG09 y9wubv2q0wuj g[ ywv]0 erfy6 htrREFE-0RO9U3E0O T3209U7FGTE439UT5 1QITUGE09W[RUT[0U TG4WEUGT340UTG0[WE9U[05N TGW0EUG43Q09UTGERW0[9UG[09TUGFVSDF[0 G4[3UGV BD[FZ09UT1[03J GV OISJRT[JV OIJf09uj4[G0UBH[09UFvb09ujRD{f)q(ujV[09URf[0q(ufv[091URDF0[qb098OIHOoijoj jhyyu7t jj5yt kw45v[ w094u0w945u m[0j v[0w c0jw40 ujyv540uj vq1]u yh-09 w] eytjytj jwrjtrjhtrghlkermjglkerwmjflkrwejflkrwejg \r ;ljgrel;pkjrelknjregt ikjw-9 ]pf2pojne[poj][pkf ]polprnpomjfgpomjfpoj EPOLZM4[PKJFpinhg]PKIGpi3REPNJGpnj G[OJGponhgPONJP[G[pjG]PK] [klgEPONJ[POkj[fg;x-]f[4reolf;pem3 t';epofjEOLMJFGEPIAJGP4RIJNG bdii-6.0.3/tests/ldif/default.ldif000066400000000000000000000006631463754072600170160ustar00rootroot00000000000000dn: o=shadow objectClass: organization o: shadow dn: o=grid objectClass: organization o: grid dn: 
mds-vo-name=local,o=grid objectClass: MDS mds-vo-name: local dn: mds-vo-name=resource,o=grid objectClass: MDS mds-vo-name: resource dn: o=glue objectClass: organization o: glue dn: GLUE2GroupID=resource, o=glue objectClass: GLUE2Group GLUE2GroupID: resource dn: GLUE2GroupID=grid, o=glue objectClass: GLUE2Group GLUE2GroupID: grid bdii-6.0.3/tests/ldif/nordugrid.ldif000066400000000000000000000001411463754072600173560ustar00rootroot00000000000000dn: mds-vo-name=nordugrid_1,o=Grid objectclass: Mds objectclass: GlueTop mds-vo-name: nordugrid_1bdii-6.0.3/tests/ldif/service-bad-suffix.ldif000066400000000000000000000007261463754072600210600ustar00rootroot00000000000000dn: GlueServiceUniqueID=service_bad_suffix,mds-vo-name=resource,o=bad objectClass: GlueTop objectClass: GlueService objectClass: GlueKey objectClass: GlueSchemaVersion GlueServiceUniqueID: service_bad_suffix GlueServiceName: Test Service Bad Suffix GlueServiceType: bdii GlueServiceVersion: 3.0.0 GlueServiceEndpoint: ldap://host-invalid:2170/mds-vo-name=resource,o=grid GlueForeignKey: GlueSiteUniqueID=my-site-name GlueSchemaVersionMajor: 1 GlueSchemaVersionMinor: 3 bdii-6.0.3/tests/ldif/service-encoding.ldif000066400000000000000000000022721463754072600206140ustar00rootroot00000000000000dn: GlueServiceUniqueID=service_\\slash,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService GlueServiceUniqueID: service_\slash dn: GlueServiceUniqueID=service_\,comma,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService GlueServiceUniqueID: service_,comma dn: GlueServiceUniqueID=service_\=equals,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService GlueServiceUniqueID: service_=equals dn: GlueServiceUniqueID=service_\+plus,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService GlueServiceUniqueID: service_+plus dn: GlueServiceUniqueID=service_\;semi,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService GlueServiceUniqueID: 
service_;semi dn: GlueServiceUniqueID=service_\"quote,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService GlueServiceUniqueID: service_"quote dn: GlueServiceUniqueID=service_\>greater,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService GlueServiceUniqueID: service_>greater dn: GlueServiceUniqueID=service_\ ${working_dir}/bdii.conf sed -i "s#BDII_READ_TIMEOUT=.*#BDII_READ_TIMEOUT=3#" ${working_dir}/bdii.conf sed -i "s#BDII_BREATHE_TIME=.*#BDII_BREATHE_TIME=10#" ${working_dir}/bdii.conf sed -i "s#BDII_DELETE_DELAY=.*#BDII_DELETE_DELAY=2#" ${working_dir}/bdii.conf sed -i "s#ERROR#DEBUG#" ${working_dir}/bdii.conf sed -i "s#/var/log/bdii#${working_dir}#" ${working_dir}/bdii.conf export BDII_CONF=${working_dir}/bdii.conf /etc/init.d/bdii restart command="ldapsearch -LLL -x -h $(hostname -f) -p 2170 -b o=grid" command_glue2="ldapsearch -LLL -x -h $(hostname -f) -p 2170 -b o=glue" RETVAL=0 echo "Waiting 10 seconds for the BDII to start." sleep 10 echo -n "Testing the timeout for hanging providers: " ${command} >/dev/null 2>/dev/null if [ $? -eq 32 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing static LDIF file: " filter=GlueServiceUniqueID ${command} ${filter} | grep "service_1" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing GLUE2 service: " filter=objectClass=GLUE2Service ${command_glue2} ${filter} | grep "glue2-service" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing GlueTop modification: " filter="objectClass=MDS" ${command} ${filter} | grep "nordugrid_1" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing provider: " filter="GlueServiceUniqueID=service_2" ${command} ${filter} | grep "service_2" >/dev/null 2>/dev/null if [ $? 
-gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing GLUE2 provider: " filter="objectClass=GLUE2Service" ${command_glue2} ${filter} | grep "cream-06" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing the handling of long DNs: " filter="GlueServiceUniqueID" ${command} ${filter} | grep "really_long" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing ignoring of junk files: " filter="GlueServiceUniqueID=service_4" ${command} ${filter} | grep "service_4" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "OK" else echo "FAIL" RETVAL=1 fi echo -n "Testing basic plugin: " filter=GlueServiceStatus=Failed ${command} ${filter} | grep "Failed" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing plugin multivalued delete: " filter=GlueServiceAccessControlBaseRule ${command} ${filter} | grep "atlas" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "OK" else echo "FAIL" RETVAL=1 fi echo -n "Testing plugin multivalued add: " filter=GlueServiceAccessControlBaseRule=cms ${command} ${filter} | grep "cms" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing plugin modify: " filter=GlueServiceStatusInfo ${command} ${filter} | grep "Broken" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing two plugins extending the attribute: " filter="GlueServiceUniqueID=service_7" ${command} ${filter} | grep "vo_1" >/dev/null 2>/dev/null if [ $?
-gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi rm -f ${working_dir}/ldif/service-long-dn.ldif rm -f ${working_dir}/ldif/service-unstable.ldif rm -f ${working_dir}/ldif/service-spaces-2.ldif sed -i "s#Failed#Unknown#" ${working_dir}/plugin/service-status sed -i "s#=# = #" ${working_dir}/ldif/service-spaces-1.ldif sed -i "s#2011-02-07T10:57:48Z#2011-02-07T10:58:57Z#" ${working_dir}/provider/glue2-provider echo "Waiting for update ..." sleep 14 echo -n "Testing modify on update: " filter=GlueServiceStatus=Unknown ${command} ${filter} | grep "Unknown" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing modify GLUE2 Service: " filter=objectClass=GLUE2Service ${command_glue2} ${filter} | grep "GLUE2_Serivce_OK" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing GLUE2 provider updated: " filter="objectClass=GLUE2Service" ${command_glue2} ${filter} | grep "2011-02-07T10:58:57Z" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing delayed delete: " filter=GlueServiceUniqueID ${command} ${filter} | grep "_long_" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing ignoring spaces in dn: " filter=GlueServiceUniqueID ${command} ${filter} | grep "service_5" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character '\': " filter=GlueServiceUniqueID ${command} ${filter} | grep "slash" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character ',': " filter=GlueServiceUniqueID ${command} ${filter} | grep "comma" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character '=': " filter=GlueServiceUniqueID ${command} ${filter} | grep "equal" >/dev/null 2>/dev/null if [ $?
-gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character '+': " filter=GlueServiceUniqueID ${command} ${filter} | grep "plus" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character '\"': " filter=GlueServiceUniqueID ${command} ${filter} | grep "quote" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character ';': " filter=GlueServiceUniqueID ${command} ${filter} | grep "semi" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character '<': " filter=GlueServiceUniqueID ${command} ${filter} | grep "less" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character '>': " filter=GlueServiceUniqueID ${command} ${filter} | grep "greater" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi cp ldif/service-unstable.ldif -f ${working_dir}/ldif/ echo "Waiting for update ..." sleep 13 echo -n "Testing deleting obsolete entry: " filter=GlueServiceUniqueID ${command} ${filter} | grep "_long_" >/dev/null 2>/dev/null if [ ! $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing delete with space in uniqueID: " filter=GlueServiceUniqueID ${command} ${filter} | grep "service 6" >/dev/null 2>/dev/null if [ ! $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing unstable service is not deleted: " filter=GlueServiceUniqueID ${command} ${filter} | grep "service_7" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi rm -f ${working_dir}/ldif/service-unstable.ldif echo "Waiting for update ..." sleep 13 echo -n "Testing unstable service is not deleted: " filter=GlueServiceUniqueID ${command} ${filter} | grep "service_7" >/dev/null 2>/dev/null if [ $?
-gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo "Waiting for update ..." sleep 13 echo -n "Testing unstable service is deleted: " filter=GlueServiceUniqueID ${command} ${filter} | grep "service_7" >/dev/null 2>/dev/null if [ ! $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi /etc/init.d/bdii stop mv ${working_dir}/bdii-update.log /tmp rm -rf ${working_dir} if [ ${RETVAL} -eq 1 ]; then echo "Test Failed" exit 1 else echo "Test Passed" exit 0 fi
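The hanging-updater detection in the init script's check_updater_hanging boils down to converting an LDAP GeneralizedTime `modifyTimestamp` to epoch seconds and comparing it against a fixed staleness window. A minimal standalone sketch of that conversion follows; the sample timestamp is a hypothetical value, not live `ldapsearch` output:

```shell
#!/bin/sh
# Sketch of the freshness test from check_updater_hanging: parse a
# GeneralizedTime modifyTimestamp and report the updater as hanging when the
# last modification is more than 1200 seconds in the past.
# time_stamp below is a hypothetical example, not live ldapsearch output.
time_stamp="20240101120000Z"
# 20240101120000Z -> "2024-01-01 12:00" (same sed expression as the init script)
time_string=$(echo "${time_stamp}" | sed 's/^\([0-9][0-9][0-9][0-9]\)\([0-9][0-9]\)\([0-9][0-9]\)\([0-9][0-9]\)\([0-9][0-9]\).*/\1-\2-\3 \4:\5/')
# GNU date converts the reformatted UTC string to epoch seconds
time_int=$(date --utc --date "${time_string}" +%s)
time_threshold=$((time_int + 1200))
time_now=$(date --utc +%s)
if [ "${time_now}" -gt "${time_threshold}" ]; then
    echo "updater hanging (last update ${time_string} UTC)"
else
    echo "updater fresh"
fi
```

The 1200-second threshold mirrors the hard-coded value in check_updater_hanging; it assumes the update cycle (BDII_BREATHE_TIME plus read timeouts) completes well within 20 minutes.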